path | concatenated_notebook
---|---
2_one_step_FF_univariate.ipynb | ###Markdown
One step univariate feed-forward neural network model

In this notebook, we demonstrate how to:
- prepare time series data for training a feed-forward neural network (NN) forecasting model
- get data in the required shape for the keras API
- implement a simple feed-forward NN model in keras to predict the next step ahead (time *t+1*) in the time series
- enable early stopping to reduce the likelihood of model overfitting
- evaluate the model on a test dataset

The data in this example is taken from the GEFCom2014 forecasting competition[1]. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. The task is to forecast future values of electricity load. In this example, we show how to forecast one time step ahead, using historical load data only.

[1] Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol. 32, no. 3, pp. 896-913, July-September, 2016.

Please run this notebook after completing the 0_data_setup notebook.
###Code
import os
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
from glob import glob
from collections import UserDict
from common.utils import load_data, mape
from IPython.display import Image
%matplotlib inline
pd.options.display.float_format = '{:,.2f}'.format
np.set_printoptions(precision=2)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load the data from csv into a Pandas dataframe
###Code
energy = load_data('data')[['load']]
energy.head()
###Output
_____no_output_____
###Markdown
Create train, validation and test sets

We separate our dataset into train, validation and test sets. We train the model on the train set. The validation set is used to evaluate the model after each training epoch and ensure that the model is not overfitting the training data. After the model has finished training, we evaluate the model on the test set. We must ensure that the validation set and test set cover a later period in time than the training set, so that the model does not gain information from future time periods.

We will allocate the period 1st November 2014 to 31st December 2014 to the test set. The period 1st September 2014 to 31st October 2014 is allocated to the validation set. All other time periods are available for the training set.
###Code
valid_start_dt = '2014-09-01 00:00:00'
test_start_dt = '2014-11-01 00:00:00'
energy[energy.index < valid_start_dt][['load']].rename(columns={'load':'train'}) \
.join(energy[(energy.index >=valid_start_dt) & (energy.index < test_start_dt)][['load']] \
.rename(columns={'load':'validation'}), how='outer') \
.join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
.plot(y=['train', 'validation', 'test'], figsize=(15, 8), fontsize=12)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Data preparation - training set

For this example, we will set *T=6*. This means that the input for each sample is a vector of the previous 6 hours of the energy load. The choice of *T=6* was arbitrary but should be selected through experimentation.

*HORIZON=1* specifies that we have a forecasting horizon of 1 (*t+1*)
###Code
Image('./images/one_step_forecast.png')
T = 6
HORIZON = 1
###Output
_____no_output_____
###Markdown
Our data preparation for the training set will involve the following steps:

1. Filter the original dataset to include only that time period reserved for the training set
2. Scale the time series such that the values fall within the interval (0, 1)
3. Shift the values of the time series to create a Pandas dataframe containing all the data for a single training example
4. Discard any samples with missing values
5. Transform this Pandas dataframe into a numpy array of shape (samples, features) for input into Keras

1. Filter the original dataset to include only that time period reserved for the training set

Create training set containing only the model features
###Code
train = energy.copy()[energy.index < valid_start_dt][['load']]
###Output
_____no_output_____
###Markdown
2. Scale the time series such that the values fall within the interval (0, 1)

Scale data to be in range (0, 1). This transformation should be calibrated on the training set only. This is to prevent information from the validation or test sets leaking into the training data.
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train['load'] = scaler.fit_transform(train)
train.head(10)
###Output
_____no_output_____
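###Markdown
To make the "fit on the training set only" point above concrete, here is a small standalone illustration with made-up numbers (not the energy data): the scaler learns the min and max from the training values, and later data is only transformed with those statistics, so it can legitimately fall outside (0, 1).
###Code
# Toy illustration only; the values below are invented and unrelated to the GEFCom data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

toy_train = np.array([[10.0], [20.0], [30.0]])
toy_valid = np.array([[25.0], [35.0]])           # 35 lies outside the training range

toy_scaler = MinMaxScaler().fit(toy_train)       # min/max estimated from training data only
print(toy_scaler.transform(toy_train).ravel())   # [0.   0.5  1. ]
print(toy_scaler.transform(toy_valid).ravel())   # [0.75 1.25] -> values outside (0, 1) are possible
###Output
_____no_output_____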
###Markdown
Original vs scaled data:
###Code
energy[energy.index < valid_start_dt][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
3. Shift the values of the time series to create a Pandas dataframe containing all the data for a single training example

First, we create the target (*y_t+1*) variable. If we use the convention that the dataframe is indexed on time *t*, we need to shift the *load* variable forward one hour in time. Using the freq parameter we can tell Pandas that the frequency of the time series is hourly. This ensures the shift does not jump over any missing periods in the time series.
###Code
train_shifted = train.copy()
train_shifted['y_t+1'] = train_shifted['load'].shift(-1, freq='H')
train_shifted.head(10)
###Output
_____no_output_____
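###Markdown
To see why the freq parameter matters, here is a small standalone example with toy data (not the GEFCom series): when an hour is missing, a purely positional shift silently pulls in the value from two hours later, whereas a time-aware shift leaves a gap.
###Code
# Toy illustration only; 02:00 is deliberately missing from the index.
import pandas as pd

idx = pd.DatetimeIndex(['2014-01-01 00:00', '2014-01-01 01:00', '2014-01-01 03:00'])
toy = pd.DataFrame({'load': [1.0, 2.0, 3.0]}, index=idx)
toy['next_by_position'] = toy['load'].shift(-1)        # at 01:00 this grabs the 03:00 value
toy['next_by_time'] = toy['load'].shift(-1, freq='H')  # at 01:00 this is NaN, as it should be
print(toy)
###Output
_____no_output_____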
###Markdown
We also create the input sequence by shifting the load variable back by 5, 4, 3, 2, 1 and 0 hours (6 lagged copies in total):
###Code
for t in range(1, T+1):
train_shifted[str(T-t)] = train_shifted['load'].shift(T-t, freq='H')
y_col = 'y_t+1'
X_cols = ['load_t-5',
'load_t-4',
'load_t-3',
'load_t-2',
'load_t-1',
'load_t']
train_shifted.columns = ['load_original']+[y_col]+X_cols
train_shifted.head(10)
###Output
_____no_output_____
###Markdown
4. Discard any samples with missing values

Notice how we have missing values in the input sequences for the first 5 samples. We will discard these:
###Code
train_shifted = train_shifted.dropna(how='any')
train_shifted.head(5)
###Output
_____no_output_____
###Markdown
5. Transform into numpy arrays of shapes (samples, features) and (samples, 1) for input into Keras

Now convert the target and input features into numpy arrays.
###Code
y_train = train_shifted[[y_col]].to_numpy()  # as_matrix() has been removed in newer pandas
X_train = train_shifted[X_cols].to_numpy()
###Output
_____no_output_____
###Markdown
We now have a vector for the target variable of shape:
###Code
y_train.shape
###Output
_____no_output_____
###Markdown
The target variable for the first 3 samples looks like:
###Code
y_train[:3]
###Output
_____no_output_____
###Markdown
The tensor for the input features now has the shape:
###Code
X_train.shape
###Output
_____no_output_____
###Markdown
And the first 3 samples look like:
###Code
X_train[:3]
###Output
_____no_output_____
###Markdown
We can sense check this against the first 3 records of the original dataframe:
###Code
train_shifted.head(3)
###Output
_____no_output_____
###Markdown
Data preparation - validation set

Now we follow a similar process for the validation set. We keep the last *T-1* hours of the training set in order to construct the initial input features.
###Code
look_back_dt = dt.datetime.strptime(valid_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1)
valid = energy.copy()[(energy.index >=look_back_dt) & (energy.index < test_start_dt)][['load']]
valid.head()
###Output
_____no_output_____
###Markdown
Scale the series using the transformer fitted on the training set:
###Code
valid['load'] = scaler.transform(valid)
valid.head()
###Output
_____no_output_____
###Markdown
Prepare validation inputs in the same way as the training set:
###Code
valid_shifted = valid.copy()
valid_shifted['y+1'] = valid_shifted['load'].shift(-1, freq='H')
for t in range(1, T+1):
valid_shifted['load_t-'+str(T-t)] = valid_shifted['load'].shift(T-t, freq='H')
valid_shifted = valid_shifted.dropna(how='any')
y_valid = valid_shifted['y+1'].to_numpy()
X_valid = valid_shifted[['load_t-'+str(T-t) for t in range(1, T+1)]].to_numpy()
y_valid.shape
X_valid.shape
###Output
_____no_output_____
###Markdown
Implement the feed-forward neural network

We implement a feed-forward neural network with 6 inputs, 5 neurons in the hidden layer and one neuron in the output layer:
###Code
Image('./images/ff_one_step_univariate.png')
from keras import regularizers
from keras.models import Model, Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint
LATENT_DIM = 5 # number of units in the dense layer
BATCH_SIZE = 32 # number of samples per mini-batch
EPOCHS = 50 # maximum number of times the training algorithm will cycle through all samples
model = Sequential()
model.add(Dense(LATENT_DIM, activation="relu", input_shape=(T,)))
model.add(Dense(HORIZON))
###Output
_____no_output_____
###Markdown
Use RMSprop optimizer and mean squared error as the loss function.
###Code
model.compile(optimizer='RMSprop', loss='mse')
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 5) 35
_________________________________________________________________
dense_2 (Dense) (None, 1) 6
=================================================================
Total params: 41
Trainable params: 41
Non-trainable params: 0
_________________________________________________________________
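###Markdown
As a quick sanity check on the summary above: the hidden layer has 6 × 5 weights plus 5 biases = 35 parameters, and the output layer has 5 × 1 weights plus 1 bias = 6 parameters, giving the 41 trainable parameters reported.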
###Markdown
Early stopping trick
###Code
Image('./images/early_stopping.png')
###Output
_____no_output_____
###Markdown
Specify the early stopping criteria. We **monitor** the validation loss (in this case the mean squared error) on the validation set after each training epoch. If the validation loss has not improved by **min_delta** after **patience** epochs, we stop the training.
###Code
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5)
best_val = ModelCheckpoint('model_{epoch:02d}.h5', save_best_only=True, mode='min', period=1)
history = model.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(X_valid, y_valid),
callbacks=[earlystop, best_val],
verbose=1)
###Output
Train on 23370 samples, validate on 1463 samples
Epoch 1/50
23370/23370 [==============================] - 1s 62us/step - loss: 0.0146 - val_loss: 0.0027
Epoch 2/50
23370/23370 [==============================] - 2s 65us/step - loss: 0.0016 - val_loss: 8.8017e-04
Epoch 3/50
23370/23370 [==============================] - 2s 94us/step - loss: 8.4388e-04 - val_loss: 7.8345e-04
Epoch 4/50
23370/23370 [==============================] - 4s 179us/step - loss: 7.5352e-04 - val_loss: 7.6891e-04
Epoch 5/50
23370/23370 [==============================] - 4s 184us/step - loss: 6.8899e-04 - val_loss: 5.7843e-04
Epoch 6/50
23370/23370 [==============================] - 2s 99us/step - loss: 6.4656e-04 - val_loss: 6.5477e-04
Epoch 7/50
23370/23370 [==============================] - 2s 70us/step - loss: 6.1258e-04 - val_loss: 5.1480e-04
Epoch 8/50
23370/23370 [==============================] - 4s 179us/step - loss: 5.9428e-04 - val_loss: 5.3005e-04
Epoch 9/50
23370/23370 [==============================] - 4s 154us/step - loss: 5.8010e-04 - val_loss: 5.0697e-04
Epoch 10/50
23370/23370 [==============================] - 3s 114us/step - loss: 5.6687e-04 - val_loss: 5.6766e-04
Epoch 11/50
23370/23370 [==============================] - 3s 122us/step - loss: 5.6162e-04 - val_loss: 4.9836e-04
Epoch 12/50
23370/23370 [==============================] - 2s 94us/step - loss: 5.5403e-04 - val_loss: 4.4434e-04
Epoch 13/50
23370/23370 [==============================] - 2s 86us/step - loss: 5.4949e-04 - val_loss: 7.3764e-04
Epoch 14/50
23370/23370 [==============================] - 2s 86us/step - loss: 5.4249e-04 - val_loss: 4.4281e-04
Epoch 15/50
23370/23370 [==============================] - 2s 88us/step - loss: 5.4028e-04 - val_loss: 4.2500e-04
Epoch 16/50
23370/23370 [==============================] - 2s 65us/step - loss: 5.3656e-04 - val_loss: 4.3892e-04
Epoch 17/50
23370/23370 [==============================] - 2s 69us/step - loss: 5.3333e-04 - val_loss: 4.2791e-04
Epoch 18/50
23370/23370 [==============================] - 2s 74us/step - loss: 5.3011e-04 - val_loss: 5.1643e-04
Epoch 19/50
23370/23370 [==============================] - 2s 81us/step - loss: 5.2492e-04 - val_loss: 6.5587e-04
Epoch 20/50
23370/23370 [==============================] - 2s 74us/step - loss: 5.2655e-04 - val_loss: 4.8191e-04
###Markdown
Load the model from the epoch with the smallest validation loss
###Code
best_epoch = np.argmin(np.array(history.history['val_loss']))+1
model.load_weights("model_{:02d}.h5".format(best_epoch))
###Output
_____no_output_____
###Markdown
plot training and validation losses
###Code
plot_df = pd.DataFrame.from_dict({'train_loss':history.history['loss'], 'val_loss':history.history['val_loss']})
plot_df.plot(logy=True, figsize=(10,10), fontsize=12)
plt.xlabel('epoch', fontsize=12)
plt.ylabel('loss', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model

Create the test set
###Code
look_back_dt = dt.datetime.strptime(test_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1)
# keep the last T-1 hours before the test period so the first test timestamps have a full input sequence
test = energy.copy()[energy.index >= look_back_dt][['load']]
test.head()
###Output
_____no_output_____
###Markdown
Scale the test data
###Code
test['load'] = scaler.transform(test)
test.head()
###Output
_____no_output_____
###Markdown
Create test set features
###Code
test_shifted = test.copy()
test_shifted['y_t+1'] = test_shifted['load'].shift(-1, freq='H')
for t in range(1, T+1):
test_shifted['load_t-'+str(T-t)] = test_shifted['load'].shift(T-t, freq='H')
test_shifted = test_shifted.dropna(how='any')
y_test = test_shifted['y_t+1'].to_numpy()
X_test = test_shifted[['load_t-'+str(T-t) for t in range(1, T+1)]].to_numpy()
###Output
_____no_output_____
###Markdown
Make predictions on test set
###Code
predictions = model.predict(X_test)
predictions
###Output
_____no_output_____
###Markdown
Compare predictions to actual load
###Code
eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
eval_df['timestamp'] = test_shifted.index
eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
eval_df['actual'] = np.transpose(y_test).ravel()
eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
eval_df.head()
###Output
_____no_output_____
###Markdown
Compute the mean absolute percentage error over all predictions
###Code
mape(eval_df['prediction'], eval_df['actual'])
###Output
_____no_output_____
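###Markdown
For reference, a minimal sketch of how MAPE is typically defined; the actual helper used above is imported from common.utils and its implementation may differ in detail.
###Code
import numpy as np

def mape_sketch(predictions, actuals):
    """Mean absolute percentage error: mean(|prediction - actual| / |actual|)."""
    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
    return np.mean(np.abs(predictions - actuals) / np.abs(actuals))

mape_sketch([105, 98], [100, 100])  # -> 0.035, i.e. an average error of 3.5%
###Output
_____no_output_____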
###Markdown
Plot the predictions vs the actuals for the first week of the test set
###Code
eval_df[eval_df.timestamp<'2014-11-08'].plot(x='timestamp', y=['prediction', 'actual'], style=['r', 'b'], figsize=(15, 8))
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
clean up model files
###Code
for m in glob('model_*.h5'):
os.remove(m)
###Output
_____no_output_____ |
examples/initial_datas/InitialData2D.ipynb | ###Markdown
Kelvin-Helmholtz 2D
###Code
import numpy as np
import matplotlib.pyplot as plt
import plot_info  # local helper module providing showAndSave (assumed to be available alongside this notebook)

K = 10
N = 128
a1 = np.random.uniform(0,1, K)
a2 = np.random.uniform(0,1, K)
b1 = np.random.uniform(0,1, K)
b2 = np.random.uniform(0,1, K)
perturbation = 0.06
normalization1 = sum(a1)
if abs(normalization1) < 1e-10:
normalization1 = 1
normalization2 = sum(a2)
if abs(normalization2) < 1e-10:
normalization2 = 1
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, y)
X = X.T
Y = Y.T
perturbation_upper = 0.75 + perturbation*np.sum([a1[i]*np.cos(2*np.pi*(i+1)*(X+b1[i])) for i in range(len(a1))], 0)/normalization1
perturbation_lower = 0.25 + perturbation*np.sum([a2[i]*np.cos(2*np.pi*(i+1)*(X+b2[i])) for i in range(len(a2))], 0)/normalization2
middle = (Y < perturbation_upper)*(Y > perturbation_lower)
rho = 2.0 * middle + 1.0*(1-middle)
ux = -0.5*middle + 0.5*(1-middle)
uy = np.zeros_like(X)
p = 2.5*np.ones_like(X)
plt.pcolormesh(X, Y, rho)
plt.quiver(X[::16,::16], Y[::16,::16], ux[::16,::16], uy[::16,::16], -rho[::16,::16],
scale=4, cmap='Set2')
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'kelvinhelmholtz_initial')
###Output
###Markdown
Richtmeyer-Meshkov 2D
###Code
K = 10
N = 1024
a1 = np.random.uniform(0,1, K)
b1 = np.random.uniform(0,1, K)
perturbation_size = 0.06
normalization = sum(a1)
if abs(normalization) < 1e-10:
normalization = 1
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, y)
X = X.T
Y = Y.T
XC = X - 0.5
YC = Y - 0.5
phi = np.arctan2(YC, XC)#*(abs(XC)>0)
phi[phi < 0] += 2*np.pi
perturbation = perturbation_size * np.sum([a1[n] * np.cos(2*np.pi*(n+1)*(b1[n])+phi*(n+1)) for n in range(K)], axis=0) / normalization
r = np.sqrt((XC)**2+(YC)**2)
middle_p = r < 0.1
p = middle_p * 20 + (1-middle_p)*1.0
middle_rho = r < (0.25 + perturbation)
rho = middle_rho * 2.0 + (1-middle_rho)*1.0
ux = 0
uy = 0
plt.pcolormesh(X, Y, perturbation)
plt.colorbar()
plt.show()
plt.pcolormesh(X, Y, rho)
plt.colorbar()
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'richtmeyermeshkov_initial_rho')
plt.pcolormesh(X, Y, p)
plt.colorbar()
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'richtmeyermeshkov_initial_p')
###Output
_____no_output_____
###Markdown
Cloudshock
###Code
N=128
K = 10
a1 = np.random.uniform(0,1, K)
a2 = np.random.uniform(0,1, K)
b1 = np.random.uniform(0,1, K)
b2 = np.random.uniform(0,1, K)
perturbation = 0.06
normalization1 = sum(a1)
if abs(normalization1) < 1e-10:
normalization1 = 1
normalization2 = sum(a2)
if abs(normalization2) < 1e-10:
normalization2 = 1
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, y)
r = np.sqrt((X-0.25)**2 + (Y-0.5)**2)
phi = np.arctan2(X-0.25, Y - 0.5)
perturbation_x = perturbation*np.sum([a1[i]*np.cos(2*np.pi*(i+1)*(Y+b1[i])) for i in range(len(a1))], axis=0)/normalization1
perturbation_r = perturbation*np.sum([a2[i]*np.cos(2*np.pi*(i+1)*(phi+b2[i])) for i in range(len(a2))], axis=0)/normalization2
#
r_max = 0.13
left_x = (x < 0.05 + perturbation_x)
cloud = r < r_max + perturbation_r
rho = left_x*3.86859 + (1-left_x)*(cloud*10.0 + (1-cloud)*1.0)
ux = 11.2536*left_x
uy = np.zeros_like(ux)
p = 167.345*left_x + (1-left_x)*1.0
plt.pcolormesh(X, Y, rho)
plt.colorbar()
plt.quiver((left_x*X)[::16,::16], (left_x*Y)[::16,::16],
(left_x*ux)[::16,::16],
(left_x*uy)[::16,::16], -(left_x*rho)[::16,::16],
scale=80, cmap='Set1')
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'cloudshock_initial_rho')
###Output
###Markdown
Shock vortex
###Code
N = 128
K = 10
a1 = np.random.uniform(0,1, K)
b1 = np.random.uniform(0,1, K)
GAMMA=1.66666666666667
r_c = 0.05
alpha = 0.204
x_c = 0.25
y_c = 0.5
M = 1.1
epsilon = 0.06
perturbation = epsilon
normalization1 = sum(a1)
if abs(normalization1) < 1e-10:
normalization1 = 1
print(normalization1)
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, y)
perturbation_x = perturbation*np.sum([a1[i]*np.cos(2*np.pi*(i+1)*(Y+b1[i])) for i in range(len(a1))], axis=0)/normalization1
shock_location = X < (0.5 + perturbation_x)
# shock part
rho = 1.0 * shock_location + 1.0/1.1 * (1-shock_location)
ux = np.sqrt(GAMMA) * shock_location + 1.1*np.sqrt(GAMMA)*(1-shock_location)
uy = np.zeros_like(ux)
#ux = np.zeros_like(ux)
p = 1.0 * shock_location + (1-0.1) * (1-shock_location)
# vortex part
tau = np.sqrt((X - x_c)**2 + (Y - y_c)**2 )/ r_c
sin_theta = (Y - y_c) / (tau * r_c)
cos_theta = (X - x_c) / (tau * r_c)
left_x = X < 0.5
vortex_epsilon = 1
ux += left_x * vortex_epsilon * tau * np.exp(alpha*(1 - tau**2)) * sin_theta;
uy -= left_x * vortex_epsilon *tau * np.exp(alpha*(1 - tau**2)) * cos_theta;
p += left_x*(-(GAMMA - 1) * vortex_epsilon**2* np.exp(2 * alpha*(1 - tau**2)) / (4 * alpha * GAMMA) * rho);
skips=4
plt.pcolormesh(X, Y, rho)
plt.colorbar()
plt.quiver(X[::skips,::skips],
(Y)[::skips,::skips],
(ux)[::skips,::skips],
(uy)[::skips,::skips],
-(rho)[::skips,::skips],
scale=30, cmap='Set1')
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'shockvortex_initial_rho')
plt.pcolormesh(X, Y, p)
plt.colorbar()
plt.quiver(X[::skips,::skips],
(Y)[::skips,::skips],
(ux)[::skips,::skips],
(uy)[::skips,::skips],
-(rho)[::skips,::skips],
scale=80, cmap='Set1')
plt.xlabel("$x$")
plt.ylabel("$y$")
plot_info.showAndSave(f'shockvortex_initial_p')
###Output
_____no_output_____ |
tutorials/simple-tutorial.ipynb | ###Markdown
NWB Conversion Tools - Simple Tutorial
###Code
from pathlib import Path
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget
from nwb_conversion_tools import (
NWBConverter,
TutorialRecordingInterface,
TutorialSortingInterface
)
###Output
_____no_output_____
###Markdown
Define the conversion class and its internal data interface classes (*i.e.*, the name of the format). For a full list of supported formats, [see this list](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/converting_data_to_nwb.html), or [make your own data interface](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/data_interface.html)
###Code
class TutorialNWBConverter(NWBConverter):
data_interface_classes = dict(
TutorialRecording=TutorialRecordingInterface,
TutorialSorting=TutorialSortingInterface
)
###Output
_____no_output_____
###Markdown
Construct arguments for the converter class and run conversion
###Code
# Custom parameters for simulated toy data
duration=10. # Seconds
num_channels=4
sampling_frequency=30000. # Hz
stub_test = True # Truncates data write for faster quality checking
output_file = "E:/NWBConversionToolsSimpleTutorial.nwb"
# Input arguments to each interface
# For actual data formats, these arguments are typically file or folder paths to the data
source_data = dict(
TutorialRecording=dict(
duration=duration,
num_channels=num_channels,
sampling_frequency=sampling_frequency
),
TutorialSorting=dict(
duration=duration,
num_channels=num_channels,
sampling_frequency=sampling_frequency
)
)
# Initialize converter
converter = TutorialNWBConverter(source_data=source_data)
# Get metadata from source data
# For actual data formats, this generally pulls information from the header files for each interface
metadata = converter.get_metadata()
# User-input metadata
metadata["NWBFile"]["session_description"] = "NWB Conversion Tools tutorial."
metadata["NWBFile"]["experimenter"] = "My name"
metadata["Subject"] = dict(subject_id="Name of imaginary testing subject (required for DANDI upload)")
# Conversion options for each interface
# For actual data formats, these can vary widely - read the docstring for the interface you want to use by entering
# import nwb_conversion_tools
# nwb_conversion_tools.NameOfDataInterface.run_conversion?
conversion_options = dict(
TutorialRecording=dict(stub_test=stub_test),
TutorialSorting=dict()
)
# Run conversion
converter.run_conversion(
metadata=metadata,
nwbfile_path=output_file,
save_to_file=True, # If False, this instead returns the NWBFile object in memory
    overwrite = True,  # If False, this appends to an existing file
conversion_options=conversion_options
)
###Output
_____no_output_____
###Markdown
View NWBFile with widgets
###Code
io = NWBHDF5IO(output_file, "r", load_namespaces=True)
nwbfile = io.read()
nwb2widget(nwbfile)
###Output
_____no_output_____
###Markdown
NWB Conversion Tools - Simple Tutorial
###Code
from pathlib import Path
from datetime import datetime
from pynwb import NWBHDF5IO
from nwb_conversion_tools import NWBConverter, RecordingTutorialInterface, SortingTutorialInterface
###Output
_____no_output_____
###Markdown
Define the conversion class and its internal data interfaces (*i.e.*, the name of the format). For a full list of supported formats, [see this list](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/converting_data_to_nwb.html), or [make your own data interface](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/data_interface.html)
###Code
class TutorialNWBConverter(NWBConverter):
data_interface_classes = dict(
RecordingTutorial=RecordingTutorialInterface,
SortingTutorial=SortingTutorialInterface
)
###Output
_____no_output_____
###Markdown
Construct arguments for the converter class and run conversion
###Code
# Custom parameters for simulated toy data
duration = 10. # Seconds
num_channels = 4
num_units = 10
sampling_frequency = 30000. # Hz
stub_test = False # Truncates data write for faster quality checking
output_file = "NWBConversionToolsSimpleTutorial.nwb"
# Input arguments to each interface
# For actual data formats, these arguments are typically file or folder paths to the data
source_data = dict(
RecordingTutorial=dict(
duration=duration,
num_channels=num_channels,
sampling_frequency=sampling_frequency
),
SortingTutorial=dict(
duration=duration,
num_units=num_units,
sampling_frequency=sampling_frequency
)
)
# Initialize converter
converter = TutorialNWBConverter(source_data=source_data)
# Get metadata from source data
# For actual data formats, this generally pulls information from the header files for each interface
metadata = converter.get_metadata()
# User-input metadata
metadata["NWBFile"]["session_description"] = "NWB Conversion Tools tutorial."
metadata["NWBFile"]["experimenter"] = ["My name"]
metadata["NWBFile"]['session_start_time'] = datetime.now().astimezone().strftime("%Y-%m-%dT%H:%M:%S")
metadata["Subject"] = dict(subject_id="Name of imaginary testing subject (required for DANDI upload)")
# Conversion options for each interface
# For actual data formats, these can vary widely - read the docstring for the interface you want to use by entering
# import nwb_conversion_tools
# nwb_conversion_tools.NameOfDataInterface.run_conversion?
conversion_options = dict(
RecordingTutorial=dict(stub_test=stub_test),
SortingTutorial=dict()
)
# Run conversion
converter.run_conversion(
metadata=metadata,
nwbfile_path=output_file,
save_to_file=True, # If False, this instead returns the NWBFile object in memory
    overwrite=True,  # If False, this appends to an existing file
conversion_options=conversion_options
)
###Output
_____no_output_____
###Markdown
View NWBFile with widgets
###Code
from nwbwidgets import nwb2widget
io = NWBHDF5IO(output_file, "r", load_namespaces=True)
nwbfile = io.read()
nwb2widget(nwbfile)
###Output
_____no_output_____
###Markdown
NWB Conversion Tools - Simple Tutorial
###Code
from pathlib import Path
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget
from nwb_conversion_tools import NWBConverter, RecordingTutorialInterface, SortingTutorialInterface
###Output
_____no_output_____
###Markdown
Define the conversion class and its internal data interfaces (*i.e.*, the name of the format). For a full list of supported formats, [see this list](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/converting_data_to_nwb.html), or [make your own data interface](https://nwb-conversion-tools.readthedocs.io/en/conversion_guide/data_interface.html)
###Code
class TutorialNWBConverter(NWBConverter):
data_interface_classes = dict(
RecordingTutorial=RecordingTutorialInterface,
SortingTutorial=SortingTutorialInterface
)
###Output
_____no_output_____
###Markdown
Construct arguments for the converter class and run conversion
###Code
# Custom parameters for simulated toy data
duration = 10. # Seconds
num_channels = 4
num_units = 10
sampling_frequency = 30000. # Hz
stub_test = False # Truncates data write for faster quality checking
output_file = "E:/NWBConversionToolsSimpleTutorial.nwb"
# Input arguments to each interface
# For actual data formats, these arguments are typically file or folder paths to the data
source_data = dict(
RecordingTutorial=dict(
duration=duration,
num_channels=num_channels,
sampling_frequency=sampling_frequency
),
SortingTutorial=dict(
duration=duration,
num_units=num_units,
sampling_frequency=sampling_frequency
)
)
# Initialize converter
converter = TutorialNWBConverter(source_data=source_data)
# Get metadata from source data
# For actual data formats, this generally pulls information from the header files for each interface
metadata = converter.get_metadata()
# User-input metadata
metadata["NWBFile"]["session_description"] = "NWB Conversion Tools tutorial."
metadata["NWBFile"]["experimenter"] = ["My name"]
metadata["Subject"] = dict(subject_id="Name of imaginary testing subject (required for DANDI upload)")
# Conversion options for each interface
# For actual data formats, these can vary widely - read the docstring for the interface you want to use by entering
# import nwb_conversion_tools
# nwb_conversion_tools.NameOfDataInterface.run_conversion?
conversion_options = dict(
RecordingTutorial=dict(stub_test=stub_test),
SortingTutorial=dict()
)
# Run conversion
converter.run_conversion(
metadata=metadata,
nwbfile_path=output_file,
save_to_file=True, # If False, this instead returns the NWBFile object in memory
    overwrite=True,  # If False, this appends to an existing file
conversion_options=conversion_options
)
###Output
_____no_output_____
###Markdown
View NWBFile with widgets
###Code
io = NWBHDF5IO(output_file, "r", load_namespaces=True)
nwbfile = io.read()
nwb2widget(nwbfile)
###Output
_____no_output_____ |
Naive Bayes.ipynb | ###Markdown
Naive Bayes with Binary Labels
###Code
# Assigning features and label variables
weather=['Sunny','Sunny','Overcast','Rainy','Rainy','Rainy','Overcast','Sunny','Sunny',
'Rainy','Sunny','Overcast','Overcast','Rainy']
temp=['Hot','Hot','Hot','Mild','Cool','Cool','Cool','Mild','Cool','Mild','Mild','Mild','Hot','Mild']
play=['No','No','Yes','Yes','Yes','No','Yes','No','Yes','Yes','Yes','Yes','Yes','No']
# Import LabelEncoder
from sklearn import preprocessing
#creating labelEncoder
le = preprocessing.LabelEncoder()
# Converting string labels into numbers.
weather_encoded=le.fit_transform(weather)
print(weather_encoded)
# Converting string labels into numbers
temp_encoded=le.fit_transform(temp)
label=le.fit_transform(play)
print("weather:",weather_encoded)
print("Temp:",temp_encoded)
print("Play:",label)
#Combining weather and temp into a single list of tuples
features=list(zip(weather_encoded,temp_encoded))
print(features)
#Import Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(features,label)
#Predict Output
predicted= model.predict([[0,2]]) # 0:Overcast, 2:Mild
print("Predicted Value:", predicted)
###Output
Predicted Value: [1]
###Markdown
Naive Bayes with Multiple Labels
###Code
#Import scikit-learn dataset library
from sklearn import datasets
#Load dataset
wine = datasets.load_wine()
wine
wine.data[1]
# print the names of the 13 features
print("Features: ", wine.feature_names)
# print the label type of wine(class_0, class_1, class_2)
print("Labels: ", wine.target_names)
# print data(feature)shape
wine.data.shape
# print the wine data features (top 5 records)
print(wine.data[0:5])
# print the wine labels (0:class_0, 1:class_1, 2:class_2)
print(wine.target)
# Import train_test_split function
from sklearn.model_selection import train_test_split
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(wine.data, wine.target, test_size=0.2,random_state=109)
# 80% training and 20% test
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Model Generation
###Code
#Import Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
#Create a Gaussian Classifier
gnb = GaussianNB()
#Train the model using the training sets
gnb.fit(X_train, y_train)
#Predict the response for test dataset
y_pred = gnb.predict(X_test)
###Output
_____no_output_____
###Markdown
Evaluating Model
###Code
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
###Output
Accuracy: 0.9444444444444444
###Markdown
Naive Bayes Classifier

Dataset can be found at: https://archive.ics.uci.edu/ml/datasets/wine

Data Set Information: These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/data/Wine.csv')
X = dataset.iloc[:, 0:13].values
y = dataset.iloc[:, 13].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
dataset.head()
#Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(y_test, y_pred)
c = classification_report(y_test, y_pred)
cm
#Classification report
print(c)
#K folder cross validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
###Output
_____no_output_____
###Markdown
SPAMBASE

1) DEFINING THE QUESTION

Specifying the Question
Build a spam classifier model to tell whether a message is spam or not.

DATASET
Each row of this dataset is a set of features measured for a specific email, plus an additional column telling whether the mail is spam or non-spam; class is the target variable: spam (1), not spam (0).

Defining the metric for success
To achieve our objective, we model to predict whether a mail is spam or not. The algorithms I'll put into consideration are:
i) Baseline Model at 80%
ii) Naive Bayes, where I aim for an 80% accuracy of the model.

Understanding the context
Spam consists of irrelevant messages sent over the Internet, typically to a large number of users, for the purposes of advertising. They are sent for commercial purposes. One disadvantage of spam emails is that they can be a malicious attempt to gain access to your computer. Machine learning algorithms such as Deep Learning, Naïve Bayes, Support Vector Machines, Neural Networks, etc., can be used to identify whether an email is spam or not.

Recording the experimental design
Steps taken to tackle the project:
i) Loading the data
ii) Data cleaning
iii) Modelling
iv) Conclusion

2) Load and Study the data
###Code
#import basic libraries
import pandas as pd
import numpy as np
df = pd.read_csv('spambase.csv')
#first five records
df.head()
#shape of the data
df.shape
###Output
_____no_output_____
###Markdown
The dataframe has 4,601 rows and 58 columns.
###Code
#info of the data
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4601 entries, 0 to 4600
Data columns (total 58 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 word_freq_make 4601 non-null float64
1 word_freq_address 4601 non-null float64
2 word_freq_all 4601 non-null float64
3 word_freq_3d 4601 non-null float64
4 word_freq_our 4601 non-null float64
5 word_freq_over 4601 non-null float64
6 word_freq_remove 4601 non-null float64
7 word_freq_internet 4601 non-null float64
8 word_freq_order 4601 non-null float64
9 word_freq_mail 4601 non-null float64
10 word_freq_receive 4601 non-null float64
11 word_freq_will 4601 non-null float64
12 word_freq_people 4601 non-null float64
13 word_freq_report 4601 non-null float64
14 word_freq_addresses 4601 non-null float64
15 word_freq_free 4601 non-null float64
16 word_freq_business 4601 non-null float64
17 word_freq_email 4601 non-null float64
18 word_freq_you 4601 non-null float64
19 word_freq_credit 4601 non-null float64
20 word_freq_your 4601 non-null float64
21 word_freq_font 4601 non-null float64
22 word_freq_000 4601 non-null float64
23 word_freq_money 4601 non-null float64
24 word_freq_hp 4601 non-null float64
25 word_freq_hpl 4601 non-null float64
26 word_freq_george 4601 non-null float64
27 word_freq_650 4601 non-null float64
28 word_freq_lab 4601 non-null float64
29 word_freq_labs 4601 non-null float64
30 word_freq_telnet 4601 non-null float64
31 word_freq_857 4601 non-null float64
32 word_freq_data 4601 non-null float64
33 word_freq_415 4601 non-null float64
34 word_freq_85 4601 non-null float64
35 word_freq_technology 4601 non-null float64
36 word_freq_1999 4601 non-null float64
37 word_freq_parts 4601 non-null float64
38 word_freq_pm 4601 non-null float64
39 word_freq_direct 4601 non-null float64
40 word_freq_cs 4601 non-null float64
41 word_freq_meeting 4601 non-null float64
42 word_freq_original 4601 non-null float64
43 word_freq_project 4601 non-null float64
44 word_freq_re 4601 non-null float64
45 word_freq_edu 4601 non-null float64
46 word_freq_table 4601 non-null float64
47 word_freq_conference 4601 non-null float64
48 char_freq_%3B 4601 non-null float64
49 char_freq_%28 4601 non-null float64
50 char_freq_%5B 4601 non-null float64
51 char_freq_%21 4601 non-null float64
52 char_freq_%24 4601 non-null float64
53 char_freq_%23 4601 non-null float64
54 capital_run_length_average 4601 non-null float64
55 capital_run_length_longest 4601 non-null int64
56 capital_run_length_total 4601 non-null int64
57 class 4601 non-null int64
dtypes: float64(55), int64(3)
memory usage: 2.0 MB
###Markdown
No null values; most attributes are floats, while the capital_run_length attributes are integers.
###Code
#checking for missing values
df.isnull().sum()
###Output
_____no_output_____
###Markdown
The dataset has no null values, which is pretty good.

Column description

Most of the attributes indicate whether a particular word or character was frequently occurring in the e-mail.

1) The run-length attributes, i.e. capital_run_length_average, capital_run_length_longest, capital_run_length_total, measure the length of sequences of consecutive capital letters.

Statistical measure for each attribute:

2) Attributes of type word_freq_WORD [0,100]: percentage of words in the e-mail that match WORD, i.e. 100 * (number of times the WORD appears in the e-mail) / total number of words in e-mail. A "word" in this case is any string of alphanumeric characters bounded by non-alphanumeric characters or end-of-string.

3) Attributes of type char_freq_CHAR [0,100]: percentage of characters in the e-mail that match CHAR, i.e. 100 * (number of CHAR occurrences) / total characters in e-mail.

4) Attribute of type capital_run_length_average: average length of uninterrupted sequences of capital letters.

5) Attribute of type capital_run_length_longest: length of the longest uninterrupted sequence of capital letters.

6) Attribute of type capital_run_length_total: sum of the lengths of uninterrupted sequences of capital letters, i.e. the total number of capital letters in the e-mail.

***1 nominal {0,1} class attribute of type spam: denotes whether the e-mail was considered spam (1) or not (0).***
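###Markdown
As a small worked example of the word_freq_WORD definition above (toy text, not taken from the dataset): in a 20-word e-mail where the word "free" appears twice, word_freq_free = 100 * 2 / 20 = 10.0. A minimal sketch in code (using a simple whitespace split rather than the exact tokenization described above):
###Code
# Toy illustration of the word_freq_WORD formula described above.
def word_freq(word, text):
    words = text.lower().split()
    return 100.0 * words.count(word) / len(words)

toy_email = "free offer click now to claim your free prize " + "filler " * 11  # 20 words, 'free' twice
word_freq("free", toy_email)  # -> 10.0
###Output
_____no_output_____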
###Code
df['class'].value_counts()
###Output
_____no_output_____
###Markdown
Spam emails have a count of 1,813, while non-spam emails have a count of 2,788.
###Code
#distribution of the features
import matplotlib.pyplot as plt
import seaborn as sns
col_names = df.columns
fig, ax = plt.subplots(len(col_names), figsize=(10,40))
for i, col_val in enumerate(col_names):
sns.distplot(df[col_val], hist=True, ax=ax[i], kde = False, color = 'blue')
ax[i].set_title('Frequency distribution of '+col_val, fontsize=10)
ax[i].set_xlabel(col_val, fontsize=8)
ax[i].set_ylabel('Count', fontsize=8)
plt.tight_layout()
plt.show()
###Output
<ipython-input-133-eaeec98c0a28>:15: UserWarning: Tight layout not applied. tight_layout cannot make axes height small enough to accommodate all axes decorations
plt.tight_layout()
###Markdown
The distributions do not form a normal distribution.

4) Baseline Model
###Code
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
# Splitting our dataset
X = df.drop(['class'], axis=1)
y = df['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2, random_state=0)
# Fitting our model
LogReg = LogisticRegression()
%time LogReg.fit(X_train, y_train)
# Using our model to make a prediction
y_pred = LogReg.predict(X_test)
confusion_matrix = confusion_matrix(y_test, y_pred)
print(confusion_matrix)
print(accuracy_score(y_test, y_pred))
###Output
[[505 33]
[ 39 344]]
0.9218241042345277
###Markdown
The baseline model accuracy is 92%, which is good. 505+344 samples were correctly predicted while 33+39 were wrongly predicted. The baseline model will be used as a reference for the Naive Bayes performance.

3) Naive Bayes

Naïve Bayes, which is computationally very efficient and easy to implement, is a learning algorithm frequently used in text classification problems. Multinomial Naïve Bayes considers a feature vector where a given term represents the number of times it appears, i.e. its frequency. It performs well in text classification problems and is an easy to implement, fast and accurate method of prediction.

80-20 splits
###Code
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
#the x will be all the independent variable while y is the target
X = df.drop(['class'], axis=1)
y = df['class']
#split into train(80%) and test(20%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.2, random_state=0)
# instantiate a Multinomial Naive Bayes model
nb = MultinomialNB()
#train the model
%time nb.fit(X_train, y_train)
###Output
Wall time: 12 ms
###Markdown
We've instantiated our multinomial model and printed out the time it has taken. The model is really fast, as training takes only 12 ms.
###Code
# make class predictions for X_test
y_pred = nb.predict(X_test)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The model accuracy is 81%, which is good.
###Code
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
###Output
_____no_output_____
###Markdown
454+294 were correctly predicted while 84+89 were wrongly predicted 70-30 splits
###Code
#split into train(80%) and test(20%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.3, random_state=0)
# Training our model
# instantiate a Multinomial Naive Bayes model
nb = MultinomialNB()
#train the model
%time nb.fit(X_train, y_train)
# make class predictions for X_test
y_pred = nb.predict(X_test)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Model accuracy is 80%. 699+419 correctly predicted while 123+140 were wrongly predicted 60-40 splits
###Code
#split into train(60%) and test(40%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.4, random_state=0)
# Training our model
model = MultinomialNB().fit(X_train, y_train)
y_pred = model.predict(X_test)
print(np.mean(y_pred == y_test))
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred)
###Output
_____no_output_____
###Markdown
A score of 0.80 is achieved, which is good for our model. 952+539 were correctly predicted while 145+205 were wrongly predicted.

Overall performance

1) 80-20 splits: score - 0.8121
2) 70-30 splits: score - 0.8095
3) 60-40 splits: score - 0.8098

The 80-20 split has the highest score. The score drops at the 70-30 split and increases slightly at the 60-40 split. In general, the baseline model performs better than the naive bayes model. The advantage we get when working with naive bayes is that it executes in less time than logistic regression.

Model performance

Implementing feature selection to assess whether our model performs better.
###Code
#selecting the top 10 best features
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt
model = ExtraTreesClassifier()
model.fit(X,y)
print(model.feature_importances_) #use inbuilt class feature_importances of tree based classifiers
#plot graph of feature importances for better visualization
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
feat_importances.nlargest(10).plot(kind='barh')
plt.show()
###Output
[0.01159974 0.0111479 0.02666887 0.00250107 0.03231061 0.01402748
0.05704808 0.01961772 0.01466238 0.01222565 0.02384798 0.01455896
0.00727479 0.00422051 0.00885529 0.04077301 0.02342125 0.01616622
0.03320163 0.01007994 0.06509778 0.00868645 0.03952104 0.0250498
0.03986489 0.01936839 0.02948395 0.00909483 0.00608918 0.01036962
0.00409664 0.00169162 0.00557159 0.00228113 0.00472211 0.00681147
0.01742365 0.00127279 0.00591049 0.0031821 0.00354503 0.01203184
0.00446527 0.00533805 0.01495664 0.02096043 0.00084554 0.00253955
0.00870418 0.01370661 0.0042838 0.050021 0.05706668 0.00559442
0.03420041 0.03853025 0.03341172]
###Markdown
80-20 splits with 10 important features
###Code
#X='word_freq_your','char_freq_%24','word_freq_remove','char_freq_%21','word_freq_free','word_freq_hp','word_freq_000',
#'capital_run_length_average','capital_run_length_longest', 'capital_run_length_total',
X = df[['word_freq_your', 'char_freq_%24', 'word_freq_remove', 'char_freq_%21', 'word_freq_free', 'word_freq_hp',
'word_freq_000', 'capital_run_length_average', 'capital_run_length_longest', 'capital_run_length_total']].values
y = df['class'].values
#split into train(80%) and test(20%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.2, random_state=0)
# Training our model
# instantiate a Multinomial Naive Bayes model
nb = MultinomialNB()
#train the model
%time nb.fit(X_train, y_train)
# make class predictions for X_test
y_pred_class = nb.predict(X_test)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
###Output
_____no_output_____
###Markdown
Accuracy is at 63%: 338+247 were correctly predicted, 200+136 wrongly predicted. The model has performed poorly compared to the first model, which has an accuracy of 81%.

70-30 splits
###Code
#split into train(70%) and test(30%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.3, random_state=0)
# Training our model
# instantiate a Multinomial Naive Bayes model
nb = MultinomialNB()
#train the model
%time nb.fit(X_train, y_train)
# make class predictions for X_test
y_pred_class = nb.predict(X_test)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
###Output
_____no_output_____
###Markdown
64% accuracy: 529+356 correct predictions, 293+203 wrong predictions. The model has performed poorly compared to the previous one, which has a score of 80%.

60-40 splits
###Code
#split into train(60%) and test(40%)
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.4, random_state=0)
# instantiate a Multinomial Naive Bayes model
nb = MultinomialNB()
#train the model
%time nb.fit(X_train, y_train)
# make class predictions for X_test
y_pred_class = nb.predict(X_test)
# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
###Output
_____no_output_____
###Markdown
Naive Bayes

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable. Bayes' theorem states the following relationship, given class variable $y$ and dependent feature vector $x_1$ through $x_n$:

$$P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots, x_n \mid y)}{P(x_1, \dots, x_n)}$$

Using the naive conditional independence assumption that

$$P(x_i \mid y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = P(x_i \mid y)$$

for all $i$, this relationship is simplified to

$$P(y \mid x_1, \dots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}{P(x_1, \dots, x_n)}$$

Since $P(x_1, \dots, x_n)$ is constant given the input, we can use the following classification rule:

$$P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y) \quad\Rightarrow\quad \hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y),$$

and we can use Maximum A Posteriori (MAP) estimation to estimate $P(y)$ and $P(x_i \mid y)$; the former is then the relative frequency of class $y$ in the training set.

The different naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution of $P(x_i \mid y)$.

In spite of their apparently over-simplified assumptions, naive Bayes classifiers have worked quite well in many real-world situations, famously document classification and spam filtering. They require a small amount of training data to estimate the necessary parameters. (For theoretical reasons why naive Bayes works well, and on which types of data it does, see the references below.)

Naive Bayes learners and classifiers can be extremely fast compared to more sophisticated methods. The decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one dimensional distribution. This in turn helps to alleviate problems stemming from the curse of dimensionality.

On the flip side, although naive Bayes is known as a decent classifier, it is known to be a bad estimator, so the probability outputs from predict_proba are not to be taken too seriously.

Gaussian Naive Bayes

GaussianNB implements the Gaussian Naive Bayes algorithm for classification. The likelihood of the features is assumed to be Gaussian:

$$P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma^2_y}} \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma^2_y}\right)$$

The parameters $\sigma_y$ and $\mu_y$ are estimated using maximum likelihood.
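###Markdown
Before the scikit-learn example below, here is a tiny hand-worked sketch of the MAP rule above, using made-up priors and per-class Gaussian parameters (two classes, two features); it is only meant to illustrate the formulas, not any real dataset.
###Code
import numpy as np

# Invented numbers for illustration: class priors P(y) and per-class Gaussian parameters.
priors = {"A": 0.6, "B": 0.4}
means  = {"A": np.array([1.0, 2.0]), "B": np.array([3.0, 1.0])}
stds   = {"A": np.array([0.5, 1.0]), "B": np.array([1.0, 0.5])}

def gaussian_pdf(x, mu, sigma):
    # Gaussian likelihood P(x_i | y) from the formula above
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

x = np.array([1.2, 1.8])  # a single sample to classify
scores = {c: priors[c] * np.prod(gaussian_pdf(x, means[c], stds[c])) for c in priors}
print(scores)                       # unnormalized P(y) * prod_i P(x_i | y)
print(max(scores, key=scores.get))  # MAP class (here "A")
###Output
_____no_output_____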
###Code
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
X, y = load_iris(return_X_y =True)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.5, random_state=0)
gnb = GaussianNB()
y_pred= gnb.fit(X_train, y_train).predict(X_test)
print("Number of mislabeled points out of a total %d points: %d"
% (X_test.shape[0], (y_test != y_pred).sum()))
###Output
Number of mislabeled points out of a total 75 points: 4
###Markdown
-------------------------------------

Naive Bayes in Python

Tokenizing and counting

Create a bag of words, counting all the words, ignoring case, using a regex.
###Code
import re
import string
from prettytable import PrettyTable
def remove_punctuation(s):
    # Python 3: build a translation table that deletes all punctuation characters
    return s.translate(str.maketrans("", "", string.punctuation))
def tokenize(text):
    text = remove_punctuation(text)
    text = text.lower()
    return re.split(r"\W+", text)
def count_words(words):
wc={}
for word in words:
wc[word]=wc.get(word,0.0)+1.0
return wc
s = "Hello my name is Ivan. My favorite food is sushi"
count_words(tokenize(s))
###Output
_____no_output_____
###Markdown
Counting our probabilitiesSo now that we can count words, lets get cooking. The code below is going to do the following: * open each document * label it as aeither "crypto" or "dino" and keep track of how many of each label there are (priors) * count the words for the document * add those counts to the vocab, or a corpus level word count * add those counts to the word_count, for a category level word count
###Code
from sh import find
#setup some structures to store our data
vocab={}
word_counts = {
"crypto":{},
"dino":{}
}
priors ={
'crypto':0.,
'dino':0.
}
docs=[]
for f in find("nb_files/sample-data"):
f=f.strip()
if f.endswith(".txt")==False:
#skip non .txt files
continue
elif "cryptid" in f:
        category = "crypto"
else:
category = "dino"
docs.append((category,f))
#ok time to start counting stuff
priors[category] += 1
text = open(f).read()
words = tokenize(text)
counts = count_words(words)
for word, count in counts.items():
#if we havent seen a word yet, lets add it to our dictionaries with a count of 0
if word not in vocab:
vocab[word]=0.0 #use 0.0 here so python does "correct" math
if word not in word_counts[category]:
word_counts[category][word]=0.0
vocab[word] += count
word_counts[category][word] += count
###Output
_____no_output_____
###Markdown
Classifying a new page
###Code
new_doc = open("nb_files/examples/Yeti.txt").read()
words= tokenize(new_doc)
counts = count_words(words)
###Output
_____no_output_____
###Markdown
Alright, we've got our counts. Now we'll calculate P(word|category) for each word and multiply each of these conditional probabilities together to calculate the P(category|set of words). To prevent computational errors, we're going to perform the operations in logspace. All this means is we're going to use the log(probability) so we require fewer decimal places. More on the mystical properties of logs here and here.
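###Markdown
Before running the full calculation below, a tiny standalone illustration (toy numbers) of why working in log space helps: multiplying many small probabilities underflows to zero in floating point, while summing their logs stays well-behaved, since log(a*b) = log(a) + log(b).
###Code
import math

probs = [1e-5] * 100                        # pretend these are 100 small word probabilities
product = 1.0
for p in probs:
    product *= p                            # underflows to 0.0
log_sum = sum(math.log(p) for p in probs)   # the equivalent log-space sum stays finite
print(product, log_sum)                     # 0.0  -1151.29...
###Output
_____no_output_____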
###Code
import math
prior_dino= (priors['dino'] / sum(priors.values()))
prior_crypto= (priors['crypto'] / sum(priors.values()))
log_prob_crypto =0.0
log_prob_dino =0.0
for w, cnt in counts.items():
#Skip words that we havent seen before, or words less than 3 letters long
if not w in vocab or len(w) <=3:
continue
#Calculate the probability that the word occurs at all
p_word = vocab[w] / sum(vocab.values())
#for both categories, calculate P(word[category]), or the probability a
# word will appear, given that we know that the document is <category>
p_w_given_dino = word_counts["dino"].get(w,0.0) / sum(word_counts["dino"].values())
p_w_given_crypto = word_counts["crypto"].get(w, 0.0) / sum(word_counts["crypto"].values())
#add new probability to our running total" log_prob_<category> if the probability
#is 0 (i.e the word never appears for the category ), then skip it
if p_w_given_dino> 0 :
log_prob_dino += math.log(cnt * p_w_given_dino / p_word)
if p_w_given_crypto > 0 :
log_prob_crypto += math.log(cnt * p_w_given_crypto / p_word)
#Print out the results; we need to go from logspace back to "regular" space
# so we take the EXP of the log_prob
print("Score(dino):", math.exp(log_prob_dino + math.log(prior_dino)))
print("Score (crypto):",math.exp(log_prob_crypto + math.log(prior_crypto)))
#dino; 2601.766
#crypto: 25239.089
###Output
_____no_output_____
###Markdown
Since we're slightly bending the rules of Bayes' Theorem, the results are not actual probabilities, but rather are "scores". All you really need to know is which one is bigger. So our suspicions are confirmed: the "Yeti.txt" file is being classified overwhelmingly in favor of crypto (as we would hope).

--------------------------------

Another tutorial on the web: https://www.youtube.com/watch?v=99MN-rl8jGY
###Code
import numpy as np
import pandas as pd
import urllib.request
import sklearn
from sklearn.model_selection import train_test_split  # needed for the split below
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
from sklearn.metrics import accuracy_score
#python 3
#with urllib.request.urlopen("http://www.python.org") as url:
# s = url.read()
url ="https://archive.ics.uci.edu/ml/machine-learning-databases/spambase/spambase.data"
raw_data= urllib.request.urlopen(url) #python 2.x urllib.urlopen(url)
dataset=np.loadtxt(raw_data, delimiter=",")
print(dataset[0])
X= dataset[:, 0:48]
y = dataset[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=.33, random_state=17)
###Output
_____no_output_____
###Markdown
Bernoulli
###Code
BernNB = BernoulliNB(binarize = True)
BernNB.fit(X_train, y_train)
print(BernNB)
#label
y_expect=y_test
y_pred=BernNB.predict(X_test)
print(accuracy_score(y_expect,y_pred))
###Output
BernoulliNB(alpha=1.0, binarize=True, class_prior=None, fit_prior=True)
0.8558262014483212
###Markdown
Multinomial
###Code
MultiNB = MultinomialNB()
MultiNB.fit(X_train, y_train)
print(MultiNB)
y_pred=MultiNB.predict(X_test)
print(accuracy_score(y_expect,y_pred))
###Output
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
0.8736010533245556
###Markdown
Gaussian
###Code
GausNB = GaussianNB()
GausNB.fit(X_train, y_train)
print(GausNB)
y_pred=GausNB.predict(X_test)
print(accuracy_score(y_expect,y_pred))
###Output
GaussianNB(priors=None, var_smoothing=1e-09)
0.8130348913759052
###Markdown
improved versions
###Code
BernNB= BernoulliNB(binarize=0.1)
BernNB.fit(X_test,y_test)
print(BernNB)
y_expect= y_test
y_pred=BernNB.predict(X_test)
print(accuracy_score(y_expect,y_pred))
###Output
BernoulliNB(alpha=1.0, binarize=0.1, class_prior=None, fit_prior=True)
0.8940092165898618
###Markdown
Naive Bayes

This notebook is a spam filter based on a multinomial naive bayes model.
###Code
# Import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import copy
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics import roc_curve, auc
np.random.seed(seed=0)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load Data
###Code
# Load Data
email = pd.read_csv("email.csv")
email.head()
email.tail()
print(email.columns.values)
email.fillna("n", inplace=True)
for em in email["Subject"].tolist():
if not isinstance(em, str):
print(em)
len(email)
###Output
['Subject' 'Body' 'From: (Name)' 'From: (Address)' 'From: (Type)'
'Importance' 'type']
###Markdown
Bags of WordsAs we are working with text data and models cannot take strings as input, we need to convert the text to numbers. Bags of Words lets us count and vectorize all the words in a given set, then transform each string into a series of numbers that represent those words, so the model can work with them.
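As a minimal illustration of what the vectorizer produces (the toy sentences here are invented for this sketch; the real columns are vectorized in the next cell):

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["free money now", "meeting at noon", "free lunch at noon"]
vec = CountVectorizer()
counts = vec.fit_transform(toy_docs)   # sparse document-term matrix
print(vec.vocabulary_)                 # word -> column index
print(counts.toarray())                # one row of word counts per document
```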
###Code
# Bags of Words
corpus = {}
for col in email.columns.values:
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(email[col]).todense()
corpus[col] = features
print(corpus.keys())
# Term Frequency times Inverse Document Frequency (tf-idf)
corpus_tfidf = {}
for col in email.columns.values:
tfidf_transformer = TfidfTransformer()
corpus_tfidf[col] = tfidf_transformer.fit_transform(corpus[col]).todense()
print(corpus_tfidf[col].shape)
###Output
dict_keys(['Subject', 'Body', 'From: (Name)', 'From: (Address)', 'From: (Type)', 'Importance', 'type'])
(495, 1093)
(495, 13318)
(495, 444)
(495, 555)
(495, 1)
(495, 1)
(495, 2)
###Markdown
Splitting FeaturesHere the desired features are split into separate dataframes. This is because each feature, when converted to bags of words, is a matrix of the features and their bags of words. Creating a dataframe where each element is a matrix is impossible, so to get around this I am splitting each desired feature into its own dataframe, and as necessary concatenating dataframes to combine features.
###Code
subject = pd.DataFrame(corpus_tfidf["Subject"])
body = pd.DataFrame(corpus_tfidf["Body"])
fromname = pd.DataFrame(corpus_tfidf["From: (Name)"])
fromaddress = pd.DataFrame(corpus_tfidf["From: (Address)"])
fromtype = pd.DataFrame(corpus_tfidf["From: (Type)"])
spam = pd.DataFrame(corpus_tfidf["type"])
spam = spam[1]
###Output
_____no_output_____
###Markdown
Multinomial Naive Bayes Model
###Code
def multinb(x, y):
"""
This function performs the required functions for fitting and prediction a Multinomial Naive Bayes
from given x and y datasets.
Args:
x (array-like): independent data
y (array-like): target data
Return:
score (float): Mean accuracy of the model on the given test and target data
"""
# Train Test Split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.33, random_state = 0)
# Fit and predict model
multinb = MultinomialNB()
multinb.fit(X_train, y_train)
    predicted = multinb.predict(X_test)
score = multinb.score(X_test, y_test)
# Plot
# x_axis = range(len(X_test))
# fig,ax = plt.subplots(figsize=(15,10))
# ax.scatter(x_axis, predicted, alpha = 0.3)
# ax.scatter(x_axis, y_test, alpha = 0.3)
return score
###Output
_____no_output_____
###Markdown
ResultsBelow is the model run for each feature-set, with the accuracy score printed and a scatter plot of each prediction vs test point.
###Code
subjectscore = multinb(subject, spam)
print("Model Mean Accuracy [Subject]:", subjectscore)
bodyscore = multinb(body, spam)
print("\nModel Mean Accuracy [Body]:", bodyscore)
fnamescore = multinb(fromname, spam)
print("\nModel Mean Accuracy [From Name]:", fnamescore)
faddscore = multinb(fromaddress, spam)
print("\nModel Mean Accuracy [From Address]:", faddscore)
ftypescore = multinb(fromtype, spam)
print("\nModel Mean Accuracy [From Type]:", ftypescore)
subbodscore = multinb(pd.concat([subject, body], axis=1), spam)
print("\nModel Mean Accuracy [Subject + Body]:", subbodscore)
subbodfnamescore = multinb(pd.concat([subject, body, fromname], axis=1), spam)
print("\nModel Mean Accuracy [Subject + Body + From Name]:", subbodfnamescore)
subbodfaddscore = multinb(pd.concat([subject, body, fromaddress], axis=1), spam)
print("\nModel Mean Accuracy [Subject + Body + From Address]:", subbodfaddscore)
###Output
Model Mean Accuracy [Subject]: 0.762195121951
Model Mean Accuracy [Body]: 0.853658536585
Model Mean Accuracy [From Name]: 0.90243902439
Model Mean Accuracy [From Address]: 0.914634146341
Model Mean Accuracy [From Type]: 0.689024390244
Model Mean Accuracy [Subject + Body]: 0.896341463415
Model Mean Accuracy [Subject + Body + From Name]: 0.939024390244
Model Mean Accuracy [Subject + Body + From Address]: 0.94512195122
###Markdown
ROC Curve
###Code
# Compute ROC curve and ROC area for best model
# Need the y_test data and prediction probabilities
X_train, X_test, y_train, y_test = train_test_split(pd.concat([subject, body, fromname], axis=1), spam,
test_size = 0.33, random_state = 0)
# Fit and get prediction probabilities
multinb = MultinomialNB()
predict_prob = multinb.fit(X_train, y_train).predict_proba(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr, tpr, _ = roc_curve(y_test, predict_prob[:,1])
roc_auc = auc(fpr, tpr)
fig,ax = plt.subplots(figsize=(10,7))
lw = 2
ax.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
ax.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate', fontsize=14)
ax.set_ylabel('True Positive Rate', fontsize=14)
ax.set_title('Receiver operating characteristic', fontsize=16)
ax.legend(loc="lower right", fontsize=12)
ax.tick_params(axis="both", labelsize=12)
###Output
_____no_output_____
###Markdown
This notebook is inspired by (but not limited to) *Machine Learning In Action*; e.g., the implementation of the algorithm is at a higher level. All rights reserved by Diane (Qingyun Hu). 1. About Naive Bayes 1.1 Mechanism of Naive BayesNaive Bayes is a variant of Bayes' Rule. Let's recap Bayes' Rule a bit.$$ P(c_i | w_1, w_2, w_3, ..., w_m) = \frac{P(w_1, w_2, w_3, ..., w_m | c_i)*P(c_i)}{P(w_1, w_2, w_3, ..., w_m)} $$where $w_1, w_2, w_3, ..., w_m$ is a vector of words that are present in the document as well as included in the existing vocabulary list, and $c_i$ stands for class i.Naive Bayes asks us to assume that the presence of $w_1, w_2, w_3, ..., w_m$ is independent. Although this is not realistic, since there is always some connection between one word and another, this assumption simplifies the calculation and works quite well in practice. By assuming the presence of words is independent, we have:$$ P(c_i | w_1, w_2, w_3, ..., w_m) = \frac{(\ P(w_1 | c_i) * P(w_2 | c_i) * P(w_3 | c_i) * ... * P(w_m | c_i)\ ) * P(c_i)}{P(w_1) * P(w_2) * P(w_3) * ... * P(w_m)} $$ 1.2 Pros and Cons 1.21 Pros1. Handles multiple classes.2. Works well on small datasets. 1.22 Cons1. Sensitive to how the input data is prepared2. The sparse bag-of-words vector could consume a lot of memory if not handled properly, since each vector's length equals the length of the vocabulary list. 1.23 Works withNominal Values 2. Naive Bayes Classifier Construction
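Before the implementation, here is a small numeric sketch of the scoring rule from section 1.1 (all probabilities below are invented for illustration; the real ones are estimated from the bag-of-words table built next):

```python
import math

# toy word likelihoods P(w | class) and class priors, invented for illustration
p_w_given_abusive = {"stupid": 0.05, "dog": 0.04, "love": 0.01}
p_w_given_normal  = {"stupid": 0.01, "dog": 0.02, "love": 0.05}
prior_abusive, prior_normal = 0.5, 0.5

doc = ["stupid", "dog", "dog"]
# work in log space so products of many small probabilities do not underflow
score_abusive = math.log(prior_abusive) + sum(math.log(p_w_given_abusive[w]) for w in doc)
score_normal  = math.log(prior_normal)  + sum(math.log(p_w_given_normal[w])  for w in doc)
print(1 if score_abusive > score_normal else 0)   # class 1 = abusive, as in the demo data below
```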
###Code
# Creat demo dataset
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import math
def createDataSet():
postingList=[['my', 'dog', 'has', 'flea', \
'problems', 'help', 'please'],
['maybe', 'not', 'take', 'him', \
'to', 'dog', 'park', 'stupid'],
['my', 'dalmation', 'is', 'so', 'cute', \
'I', 'love', 'him'],
['stop', 'posting', 'stupid', 'worthless', 'garbage'],
['mr', 'licks', 'ate', 'my', 'steak', 'how',\
'to', 'stop', 'him'],
['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
classVec = [0,1,0,1,0,1]
return postingList,classVec
dataSet, labels = createDataSet()
dataSet
labels
# Tool Function 1: Create an vocabulary list according to dataSet
def createVocabList(dataSet):
vocabList = set([])
for docum in dataSet:
vocabList = vocabList | set(docum)
return list(vocabList)
vocabList = createVocabList(dataSet)
# Tool Function 2: Get an bag of words vector for each document
import numpy as np
def bagOfWordsVec(vocabList, document):
returnVec = np.ones(len(vocabList))
for token in document:
if token in vocabList:
returnVec[vocabList.index(token)] += 1
return returnVec
bagOfWordsVec(vocabList, dataSet[3])
# Tool Function 3: Get BagOfWordsTable for Training Dataset
def getBagOfWordsTable(dataSet, vocabList, label):
bagOfWordsTable = []
for document in dataSet:
bagOfWordsTable.append(bagOfWordsVec(vocabList, document))
bagOfWordsTable = pd.DataFrame(bagOfWordsTable, columns=vocabList)
bagOfWordsTable['label']= label
return bagOfWordsTable
getBagOfWordsTable(dataSet, vocabList, labels)
# Calculate Probabilities
bagOfWordsTable = getBagOfWordsTable(dataSet, vocabList, labels)
def getProb(c_i, bagOfWordsTable, testDataset):
P_ci = bagOfWordsTable['label'][bagOfWordsTable.label==c_i].count() / bagOfWordsTable.shape[0]
bagOfWordsTable_ci = bagOfWordsTable[bagOfWordsTable.label==c_i]
P_Xi_ci = bagOfWordsTable_ci.sum() / bagOfWordsTable_ci.sum().sum()
P_Xi = bagOfWordsTable.sum() / bagOfWordsTable.sum().sum()
predVec = []
for document in testDataset:
predVec.append(np.exp(np.log(P_Xi_ci[document]).sum() + np.log(P_ci) - np.log(P_Xi[document]).sum()))
# return P_Xi_ci, P_ci, P_Xi
return predVec
print("Predictions on Traing DataSet (The propability of each document being Class 1) :")
getProb(1, bagOfWordsTable,dataSet)
print("Real Classes of Traing DataSet")
labels
print("Not Bad!")
###Output
Predictions on Training DataSet (the probability of each document being Class 1):
###Markdown
Testing
###Code
import glob
import re
import random
from collections import Counter
path = r'.\spam\*\*'
data = []
for fn in glob.glob(path):
is_spam = "ham" not in fn
with open(fn,'r') as file:
for line in file:
if line.startswith("Subject:"):
subject = re.sub(r"^Subject: ", "", line).strip()
data.append((subject, is_spam))
random.seed(0)
train_data, test_data = split_data(data, 0.75)
len(test_data)
classifier = NaiveBayesClassifier()
classifier.train(train_data)
classified = [(subject, is_spam, classifier.classify(subject))
for subject, is_spam in test_data]
counts = Counter((is_spam, spam_probability > 0.5)
for _, is_spam, spam_probability in classified)
counts
tn = counts[(False,False)]
tp = counts[(True,True)]
fn = counts[(True,False)]
fp = counts[(False,True)]
print(accuracy(tp, fp, fn, tn))
print(precision(tp, fp, fn, tn))
print(recall(tp, fp, fn, tn))
print(f1_score(tp, fp, fn, tn))
classified.sort(key = lambda row: row[2])
spammiest_hams = list(filter(lambda row: not row[1], classified))[-5:]
spammiest_hams
hammiest_spams = list(filter(lambda row: row[1], classified))[:5]
hammiest_spams
def p_spam_given_word(word_prob):
"""calculate p(spam|word)"""
word, prob_if_spam, prob_if_not_spam = word_prob
return prob_if_spam / (prob_if_spam + prob_if_not_spam)
words = sorted(classifier.word_probs, key = p_spam_given_word)
words[-5:]
words[:5]
###Output
_____no_output_____
###Markdown
In natural language processing, Naive Bayes is well known for tasks such as spam filtering; here we use it to classify text documents. Data - 7,367 news articles taken from the livedoor news corpus - 9 categories: Topic News, Sports Watch, IT Life Hack, Kaden Channel, MOVIE ENTER, Dokujo Tsushin, S-MAX, livedoor HOMME, Peachy Goal Split the news articles into training and test data, build a classifier with Naive Bayes, and, given a news article from the test data, decide which of the 9 categories it belongs to. To do that, the text data needs to be converted into numeric data that is easy to handle. Conversion procedure 1. Split the text data into nouns, verbs, adjectives, particles, and so on - this is called morphological analysis, or word segmentation (wakati-gaki) 2. Number each word in the training data 3. For each news article, build a vector whose elements are the occurrence counts of the individual words within that article. If the training data contains 10,000 distinct words, this becomes a 10,000-dimensional vector storing how many times each word appeared. Obtaining the data - [Download source](https://www.rondhuit.com/download.html) - ldcc-20140209.tar.gz Word segmentation
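Before processing the corpus, a minimal sketch of the morphological-analysis step (one made-up sentence; it uses the same Janome API as the cell below):

```python
from janome.tokenizer import Tokenizer

t = Tokenizer()
for tok in t.tokenize("私は昨日映画を見た"):   # "I watched a movie yesterday"
    # part_of_speech is a comma-separated string; the first field is the coarse POS tag
    print(tok.surface, tok.part_of_speech.split(",")[0])
```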
###Code
from janome.tokenizer import Tokenizer
import os, glob

# Morphological analysis with Janome
ja_tokenizer = Tokenizer()

# split Japanese text into words, keeping only selected parts of speech
def ja_tokenize(text):
    res = []
    lines = text.split("\n")
    lines = lines[2:]  # the first two lines are headers, so drop them
    for line in lines:
        malist = ja_tokenizer.tokenize(line)
        for tok in malist:
            ps = tok.part_of_speech.split(",")[0]
            if not ps in ['名詞', '動詞', '形容詞']:
                continue  # ignore other parts of speech
            w = tok.base_form
            if w == "*" or w == "":
                w = tok.surface
            if w == "" or w == "\n":
                continue
            res.append(w)
        res.append("\n")
    return res

# read the text files
root_dir = './text'
for path in glob.glob(root_dir + "/*/*.txt", recursive=True):
    if path.find("LICENSE") > 0:
        continue  # skip LICENSE.txt
    print(path)
    path_wakati = path + ".wakati"
    if os.path.exists(path_wakati):
        continue  # skip files that have already been converted
    text = open(path, "r", encoding='utf-8').read()  # mind the encoding
    words = ja_tokenize(text)
    wt = " ".join(words)
    open(path_wakati, "w", encoding="utf-8").write(wt)
###Output
./text/movie-enter/movie-enter-5978741.txt
./text/movie-enter/movie-enter-6322901.txt
./text/movie-enter/movie-enter-6176324.txt
./text/movie-enter/movie-enter-6573929.txt
./text/movie-enter/movie-enter-5914880.txt
./text/movie-enter/movie-enter-5878782.txt
./text/movie-enter/movie-enter-6354868.txt
./text/movie-enter/movie-enter-6035751.txt
./text/movie-enter/movie-enter-6689395.txt
./text/movie-enter/movie-enter-6375454.txt
./text/movie-enter/movie-enter-6689342.txt
./text/movie-enter/movie-enter-6581639.txt
./text/movie-enter/movie-enter-6395403.txt
./text/movie-enter/movie-enter-6631156.txt
./text/movie-enter/movie-enter-6075201.txt
./text/movie-enter/movie-enter-6408753.txt
./text/movie-enter/movie-enter-5972654.txt
./text/movie-enter/movie-enter-6671599.txt
./text/movie-enter/movie-enter-6512253.txt
./text/movie-enter/movie-enter-6331809.txt
./text/movie-enter/movie-enter-6070635.txt
./text/movie-enter/movie-enter-6707194.txt
./text/movie-enter/movie-enter-6808641.txt
./text/movie-enter/movie-enter-6523716.txt
./text/movie-enter/movie-enter-6224544.txt
./text/movie-enter/movie-enter-6707630.txt
./text/movie-enter/movie-enter-6424051.txt
./text/movie-enter/movie-enter-6316535.txt
./text/movie-enter/movie-enter-6822042.txt
./text/movie-enter/movie-enter-6076279.txt
./text/movie-enter/movie-enter-6557068.txt
./text/movie-enter/movie-enter-6370840.txt
./text/movie-enter/movie-enter-6282195.txt
./text/movie-enter/movie-enter-6464677.txt
./text/movie-enter/movie-enter-6305978.txt
./text/movie-enter/movie-enter-6557256.txt
./text/movie-enter/movie-enter-6892597.txt
./text/movie-enter/movie-enter-6239924.txt
./text/movie-enter/movie-enter-6699965.txt
./text/movie-enter/movie-enter-6350986.txt
./text/movie-enter/movie-enter-6625136.txt
./text/movie-enter/movie-enter-6385605.txt
./text/movie-enter/movie-enter-6324632.txt
./text/movie-enter/movie-enter-6316858.txt
./text/movie-enter/movie-enter-6668817.txt
./text/movie-enter/movie-enter-5902577.txt
./text/movie-enter/movie-enter-6184400.txt
./text/movie-enter/movie-enter-6882196.txt
./text/movie-enter/movie-enter-6265685.txt
./text/movie-enter/movie-enter-6673576.txt
./text/movie-enter/movie-enter-6845387.txt
./text/movie-enter/movie-enter-6132017.txt
./text/movie-enter/movie-enter-6506390.txt
./text/movie-enter/movie-enter-6485099.txt
./text/movie-enter/movie-enter-5933219.txt
./text/movie-enter/movie-enter-6185285.txt
./text/movie-enter/movie-enter-6393311.txt
./text/movie-enter/movie-enter-5886488.txt
./text/movie-enter/movie-enter-5865707.txt
./text/movie-enter/movie-enter-6195731.txt
./text/movie-enter/movie-enter-6371369.txt
./text/movie-enter/movie-enter-6317563.txt
./text/movie-enter/movie-enter-6450467.txt
./text/movie-enter/movie-enter-6529615.txt
./text/movie-enter/movie-enter-6714538.txt
./text/movie-enter/movie-enter-6790638.txt
./text/movie-enter/movie-enter-6083351.txt
./text/movie-enter/movie-enter-6465423.txt
./text/movie-enter/movie-enter-5958928.txt
./text/movie-enter/movie-enter-5890605.txt
./text/movie-enter/movie-enter-6296323.txt
./text/movie-enter/movie-enter-5943850.txt
./text/movie-enter/movie-enter-6564488.txt
./text/movie-enter/movie-enter-6407852.txt
./text/movie-enter/movie-enter-6292838.txt
./text/movie-enter/movie-enter-6623974.txt
./text/movie-enter/movie-enter-6083027.txt
./text/movie-enter/movie-enter-6332442.txt
./text/movie-enter/movie-enter-6552846.txt
./text/movie-enter/movie-enter-6148425.txt
./text/movie-enter/movie-enter-6247976.txt
./text/movie-enter/movie-enter-6149061.txt
./text/movie-enter/movie-enter-5943845.txt
./text/movie-enter/movie-enter-6083973.txt
./text/movie-enter/movie-enter-6573334.txt
./text/movie-enter/movie-enter-6241687.txt
./text/movie-enter/movie-enter-6625135.txt
./text/movie-enter/movie-enter-6350991.txt
./text/movie-enter/movie-enter-6298125.txt
./text/movie-enter/movie-enter-6274984.txt
./text/movie-enter/movie-enter-6632298.txt
./text/movie-enter/movie-enter-6355023.txt
./text/movie-enter/movie-enter-6771404.txt
./text/movie-enter/movie-enter-6208867.txt
./text/movie-enter/movie-enter-6041207.txt
./text/movie-enter/movie-enter-5851460.txt
./text/movie-enter/movie-enter-6858100.txt
./text/movie-enter/movie-enter-6343331.txt
./text/movie-enter/movie-enter-6087495.txt
./text/movie-enter/movie-enter-6131041.txt
./text/movie-enter/movie-enter-6630879.txt
./text/movie-enter/movie-enter-6223416.txt
./text/movie-enter/movie-enter-6236580.txt
./text/movie-enter/movie-enter-6033332.txt
./text/movie-enter/movie-enter-6097490.txt
./text/movie-enter/movie-enter-5935337.txt
./text/movie-enter/movie-enter-6172352.txt
./text/movie-enter/movie-enter-5928414.txt
./text/movie-enter/movie-enter-6242229.txt
./text/movie-enter/movie-enter-6265669.txt
./text/movie-enter/movie-enter-6218872.txt
./text/movie-enter/movie-enter-6508806.txt
./text/movie-enter/movie-enter-6503172.txt
./text/movie-enter/movie-enter-6628367.txt
./text/movie-enter/movie-enter-5902410.txt
./text/movie-enter/movie-enter-6642155.txt
./text/movie-enter/movie-enter-6228153.txt
./text/movie-enter/movie-enter-6743277.txt
./text/movie-enter/movie-enter-6233567.txt
./text/movie-enter/movie-enter-6148384.txt
./text/movie-enter/movie-enter-6243533.txt
./text/movie-enter/movie-enter-6897614.txt
./text/movie-enter/movie-enter-6350599.txt
./text/movie-enter/movie-enter-6279137.txt
./text/movie-enter/movie-enter-6673821.txt
./text/movie-enter/movie-enter-6908108.txt
./text/movie-enter/movie-enter-6104819.txt
./text/movie-enter/movie-enter-6396441.txt
./text/movie-enter/movie-enter-6128471.txt
./text/movie-enter/movie-enter-5950828.txt
./text/movie-enter/movie-enter-6237714.txt
./text/movie-enter/movie-enter-6148636.txt
./text/movie-enter/movie-enter-6195457.txt
./text/movie-enter/movie-enter-5842974.txt
./text/movie-enter/movie-enter-6026676.txt
./text/movie-enter/movie-enter-6771207.txt
./text/movie-enter/movie-enter-6582445.txt
./text/movie-enter/movie-enter-6240239.txt
./text/movie-enter/movie-enter-6024707.txt
./text/movie-enter/movie-enter-6541426.txt
./text/movie-enter/movie-enter-5845799.txt
./text/movie-enter/movie-enter-6849578.txt
./text/movie-enter/movie-enter-6118161.txt
./text/movie-enter/movie-enter-6391931.txt
./text/movie-enter/movie-enter-5863602.txt
./text/movie-enter/movie-enter-6034460.txt
./text/movie-enter/movie-enter-6561682.txt
./text/movie-enter/movie-enter-6491565.txt
./text/movie-enter/movie-enter-6278407.txt
./text/movie-enter/movie-enter-6168690.txt
./text/movie-enter/movie-enter-5850747.txt
./text/movie-enter/movie-enter-6462638.txt
./text/movie-enter/movie-enter-6208876.txt
./text/movie-enter/movie-enter-6123444.txt
./text/movie-enter/movie-enter-6870676.txt
./text/movie-enter/movie-enter-6354662.txt
./text/movie-enter/movie-enter-6316451.txt
./text/movie-enter/movie-enter-6895820.txt
./text/movie-enter/movie-enter-5871984.txt
./text/movie-enter/movie-enter-6124315.txt
./text/movie-enter/movie-enter-5975506.txt
./text/movie-enter/movie-enter-5928601.txt
./text/movie-enter/movie-enter-6627447.txt
./text/movie-enter/movie-enter-6123308.txt
./text/movie-enter/movie-enter-5850745.txt
./text/movie-enter/movie-enter-6320134.txt
./text/movie-enter/movie-enter-6123122.txt
./text/movie-enter/movie-enter-6316647.txt
./text/movie-enter/movie-enter-6167984.txt
./text/movie-enter/movie-enter-5933745.txt
./text/movie-enter/movie-enter-6159895.txt
./text/movie-enter/movie-enter-6821836.txt
./text/movie-enter/movie-enter-5929899.txt
./text/movie-enter/movie-enter-6036329.txt
./text/movie-enter/movie-enter-6781848.txt
./text/movie-enter/movie-enter-6260699.txt
./text/movie-enter/movie-enter-6454001.txt
./text/movie-enter/movie-enter-6568920.txt
./text/movie-enter/movie-enter-6218736.txt
./text/movie-enter/movie-enter-6676218.txt
./text/movie-enter/movie-enter-6478266.txt
./text/movie-enter/movie-enter-5935297.txt
./text/movie-enter/movie-enter-6065956.txt
./text/movie-enter/movie-enter-6278767.txt
./text/movie-enter/movie-enter-6532347.txt
./text/movie-enter/movie-enter-6473694.txt
./text/movie-enter/movie-enter-6160423.txt
./text/movie-enter/movie-enter-6361791.txt
./text/movie-enter/movie-enter-6282378.txt
./text/movie-enter/movie-enter-6789964.txt
./text/movie-enter/movie-enter-5966397.txt
###Markdown
Vectorizing the words Number each word and turn each news article into a vector. This operation is also called BoW (Bag-of-Words). First build a dictionary of the words in all the news articles, then turn each article into a vector of word occurrence counts. This time we use 100 news articles per category.
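A tiny sketch of the numbering-and-counting idea in plain Python (toy tokens invented here; the real implementation over the .wakati files follows below):

```python
# each word gets an integer ID, and each document becomes a vector of per-word counts
docs = [["dog", "runs", "dog"], ["cat", "sleeps"]]

word_id = {}
for doc in docs:
    for w in doc:
        word_id.setdefault(w, len(word_id))

vectors = []
for doc in docs:
    counts = [0] * len(word_id)
    for w in doc:
        counts[word_id[w]] += 1
    vectors.append(counts)

print(word_id)    # {'dog': 0, 'runs': 1, 'cat': 2, 'sleeps': 3}
print(vectors)    # [[2, 1, 0, 0], [0, 0, 1, 1]]
```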
###Code
import os, glob, json

# Path settings
root_dir = './text'
dic_file = root_dir + "/word-dic.json"
data_file = root_dir + "/textdata.json"

# word dictionary (used like a hash map)
word_dic = {"_MAX": 0}

# Functions for building the dictionary -------------------------------------------------
# register every word in the dictionary
def register_dic():
    files = glob.glob(root_dir + "/*/*.wakati", recursive=True)
    for i in files:
        file_to_ids(i)

# read a file and return its sequence of word IDs
def file_to_ids(fname):
    with open(fname, "r", encoding='utf-8') as f:
        text = f.read()
    return text_to_ids(text)

# split the text into words and assign each word an ID
def text_to_ids(text):
    text = text.strip()
    words = text.split(" ")
    result = []
    for n in words:
        n = n.strip()
        if n == "": continue
        if not n in word_dic:  # word not registered yet
            wid = word_dic[n] = word_dic["_MAX"]
            word_dic["_MAX"] += 1
            print(wid, n)
        else:
            wid = word_dic[n]  # word already registered
        result.append(wid)
    return result

# Functions for building the vectors -------------------------------------------------
# read the files of each genre
def count_freq(limit=0):
    X = []
    Y = []
    max_words = word_dic["_MAX"]
    cat_names = []
    for cat in os.listdir(root_dir):
        cat_dir = root_dir + "/" + cat
        if not os.path.isdir(cat_dir): continue  # ignore anything that is not a folder
        cat_idx = len(cat_names)
        cat_names.append(cat)
        files = glob.glob(cat_dir + "/*.wakati")
        i = 0
        for path in files:
            #print(path)
            cnt = count_file_freq(path)
            X.append(cnt)
            Y.append(cat_idx)
            if limit > 0:
                if i > limit: break
            i += 1
    return X, Y

# count the words inside one file
def count_file_freq(fname):
    cnt = [0 for n in range(word_dic["_MAX"])]
    with open(fname, "r", encoding='utf-8') as f:
        text = f.read().strip()
    ids = text_to_ids(text)
    for wid in ids:
        cnt[wid] += 1
    return cnt

#-------------------------------------------
# Build the word dictionary
if os.path.exists(dic_file):
    word_dic = json.load(open(dic_file))
else:
    register_dic()
    json.dump(word_dic, open(dic_file, "w", encoding='utf-8'))

# Build a word-frequency vector for each file
print("Number of elements = " + str(len(word_dic)))
X, Y = count_freq(100)
json.dump({"X": X, "Y": Y}, open(data_file, "w", encoding='utf-8'))
print("File conversion finished")
###Output
Number of elements = 67392
File conversion finished
###Markdown
Classification
###Code
# Import the libraries --------------------------------
from sklearn import naive_bayes, metrics, preprocessing, model_selection # scikit-learn's machine learning modules
import json
import numpy # numpy handles arrays and matrices
# Prepare the data
nb_classes=9
data=json.load(open("./text/textdata.json"))
X=data["X"] # word-count vectors
Y=data["Y"] # class labels
max_words=len(X[0]) # maximum number of words (vocabulary size)
###Output
_____no_output_____
###Markdown
MultinomialNB selects the Naive Bayes variant based on the multinomial distribution. alpha is a tuning parameter that keeps the generative probability from becoming 0 when a word that never appeared during training shows up in a test article: if the word "machine learning" never appears anywhere in the training data, a test sentence containing "machine learning" would otherwise get probability 0 for every class. alpha is the small value added to avoid that (alpha=1 is called Laplace smoothing, alpha<1 is called Lidstone smoothing). fit_prior=True makes the model account for any class imbalance in the training data.
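A small sketch of what `alpha` does to the per-word estimate (toy counts invented for illustration; the smoothed form below is the one scikit-learn's MultinomialNB uses):

```python
# a word that never occurs in this class, out of 500 word occurrences,
# with a 10,000-word vocabulary
count_word, total_words, vocab_size = 0, 500, 10000

for alpha in (0.0, 0.1, 1.0):
    # smoothed estimate: (count + alpha) / (total + alpha * |V|)
    p = (count_word + alpha) / (total_words + alpha * vocab_size)
    print(alpha, p)   # alpha = 0 zeroes out the whole product; alpha > 0 keeps it positive
```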
###Code
# Classify with machine learning ---------------------------------------------------
clf = naive_bayes.MultinomialNB(alpha=0.1, fit_prior=True)
# Evaluate performance with K-fold cross validation ---------------------
scores=model_selection.cross_val_score(clf, X, Y, cv=10)
print("Mean accuracy = ", scores.mean())
print("Standard deviation of accuracy = ", scores.std())
# Split into training and test data ------------------
X_train, X_test, Y_train, Y_test=model_selection.train_test_split(X,Y, test_size=0.5, random_state=0)
clf.fit(X_train, Y_train)
print(clf.score(X_test, Y_test)) # classification result on the test data
###Output
_____no_output_____
###Markdown
This is the Titanic dataset. It contains data on all the passengers, such as passenger class, gender, age, and whether they survived after the boat sank. We will use these variables to predict whether a person survived the sinking!
###Code
data = dataset[["Sex","Pclass","Survived"]] #Only columns needed
data = data.dropna() #dropping rows with missing values
#splitting into training and test data
train = data.sample(frac=0.8,random_state=200)
test = data.drop(train.index)
# Taking the outputs into array named label
label = train['Survived']
train = train.drop("Survived",axis = 1)
###Output
_____no_output_____
###Markdown
Let's explore the data a bit. The yes/no ratio tells us how imbalanced the classes are; if the data is heavily skewed we need weighted classes!
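A quick way to check that ratio (a small sketch; `data` comes from the cell above):

```python
# fraction of each class; a strong skew would call for class weights or resampling
print(data["Survived"].value_counts(normalize=True))
```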
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
def makebarplot(data,cols):
addtn = len(cols)%3
if addtn > 0:
addtn = 1
sub1 = len(cols)//3 + addtn
sub2 = 3
fig = plt.figure()
for index,i in enumerate(cols):
item = data[i].unique()
X = np.array(range(len(item)))
Y = [data.groupby(i)[i].count()[j] for j in item]
ax = fig.add_subplot(sub1,sub2,index+1)
ax.bar(X,Y,align = 'center')
plt.setp(ax,xticks=X, xticklabels=item,xlabel = i)
makebarplot(data,["Survived","Sex",'Pclass'])
def makestackedplot(data,cols,target,noofcolsplot = 3):
addtn = len(cols)%3
if addtn > 0:
addtn = 1
sub1 = len(cols)//3 + addtn
sub2 = 3
fig = plt.figure()
for index,i in enumerate(cols):
item = data[i].unique()
X = np.array(range(len(item)))
classes = data[target].unique()
Y = np.zeros((classes.shape[0],item.shape[0]))
for ind,targetlabel in enumerate(classes):
Y[ind] = [(data[(data[i] == value) & (data[target] == targetlabel)]).shape[0] for value in item]
ax = fig.add_subplot(sub1,sub2,index+1)
ax.bar(X,Y[0,:],color = 'r',align = 'center')
ax.bar(X,Y[1,:],color = 'b',bottom = Y[0,:],align = 'center')
plt.setp(ax,xticks=X, xticklabels=item,xlabel = i)
makestackedplot(data,["Pclass","Sex"],"Survived")
###Output
_____no_output_____
###Markdown
Naive Bayes for classification is a probabilistic algorithm. It is based on Bayes' theorem, which states that $P(A|B) = \frac{P(A \cap B)}{P(B)}$. We take the prior probabilities of the labels and the conditional probabilities of each feature w.r.t. the label. Then the score of a new data point with features A, B, C is $P(A|Label1) \cdot P(B|Label1) \cdot P(C|Label1) \cdot P(Label1)$ for Label1 and $P(A|Label2) \cdot P(B|Label2) \cdot P(C|Label2) \cdot P(Label2)$ for Label2.
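A tiny worked example with invented numbers (say the label priors are 0.6 and 0.4, and the feature likelihoods are as below); the real values are computed from the training data in the next cells:

```python
# invented conditional probabilities for two features given each label
p_female_given_survived, p_class1_given_survived = 0.70, 0.40
p_female_given_died,     p_class1_given_died     = 0.15, 0.15
p_survived, p_died = 0.60, 0.40

# unnormalized posterior scores for a female, first-class passenger
score_survived = p_female_given_survived * p_class1_given_survived * p_survived   # 0.168
score_died     = p_female_given_died     * p_class1_given_died     * p_died       # 0.009
print("Survived" if score_survived > score_died else "Died")
```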
###Code
#Defining meta variables
target = "Survived"
classes = data[target].unique()
featurenames = np.array(["Sex","Pclass"])
#calculating prior
prior = data.groupby(target)[target].count()/len(data)
prior
#calculating likelihood
pcount = data.groupby(target)[target].count()
likelihood = list(range(featurenames.shape[0]))
for index,feature in enumerate(featurenames):
categories = data[feature].unique()
temp = np.zeros((classes.shape[0],categories.shape[0]))
for ind,targetlabel in enumerate(classes):
temp[ind] = [(data[(data[feature] == value) & (data[target] == targetlabel)]).shape[0] for value in categories]
likelihood[index] = temp / pcount[:,None]
likelihood
#Calculating posterior for a new dataset <male,2>
genderis = "male"
pclassis = 2
indexofmale = np.where(data["Sex"].unique() == genderis)[0] # np.where returns a tuple; take its first element (the index array)
indexof2 = np.where(data["Pclass"].unique() == pclassis)[0]
posterior0 = prior[0] * likelihood[0][0,indexofmale]*likelihood[1][0,indexof2]
posterior1 = prior[1] * likelihood[0][1,indexofmale]*likelihood[1][1,indexof2]
if posterior0 > posterior1:
    print("Human won't survive")
else:
    print("Human will survive")
###Output
Human won't survive
###Markdown
Naive bayes for Divorce Predictors Data Set The DatasetThe Dataset is from UCIMachinelearning and it provides you all the relevant information needed for the prediction of Divorce. It contains 54 features and on the basis of these features we have to predict that the couple has been divorced or not. Value 1 represent Divorced and value 0 represent not divorced. Features are as follows:1. If one of us apologizes when our discussion deteriorates, the discussion ends.2. I know we can ignore our differences, even if things get hard sometimes.3. When we need it, we can take our discussions with my spouse from the beginning and correct it.4. When I discuss with my spouse, to contact him will eventually work.5. The time I spent with my wife is special for us.6. We don't have time at home as partners.7. We are like two strangers who share the same environment at home rather than family.8. I enjoy our holidays with my wife.9. I enjoy traveling with my wife.10. Most of our goals are common to my spouse.11. I think that one day in the future, when I look back, I see that my spouse and I have been in harmony with each other.12. My spouse and I have similar values in terms of personal freedom.13. My spouse and I have similar sense of entertainment.14. Most of our goals for people (children, friends, etc.) are the same.15. Our dreams with my spouse are similar and harmonious.16. We're compatible with my spouse about what love should be.17. We share the same views about being happy in our life with my spouse18. My spouse and I have similar ideas about how marriage should be19. My spouse and I have similar ideas about how roles should be in marriage20. My spouse and I have similar values in trust.21. I know exactly what my wife likes.22. I know how my spouse wants to be taken care of when she/he sick.23. I know my spouse's favorite food.24. I can tell you what kind of stress my spouse is facing in her/his life.25. I have knowledge of my spouse's inner world.26. I know my spouse's basic anxieties.27. I know what my spouse's current sources of stress are.28. I know my spouse's hopes and wishes.29. I know my spouse very well.30. I know my spouse's friends and their social relationships.31. I feel aggressive when I argue with my spouse.32. When discussing with my spouse, I usually use expressions such as ‘you always’ or ‘you never’ .33. I can use negative statements about my spouse's personality during our discussions.34. I can use offensive expressions during our discussions.35. I can insult my spouse during our discussions.36. I can be humiliating when we discussions.37. My discussion with my spouse is not calm.38. I hate my spouse's way of open a subject.39. Our discussions often occur suddenly.40. We're just starting a discussion before I know what's going on.41. When I talk to my spouse about something, my calm suddenly breaks.42. When I argue with my spouse, ı only go out and I don't say a word.43. I mostly stay silent to calm the environment a little bit.44. Sometimes I think it's good for me to leave home for a while.45. I'd rather stay silent than discuss with my spouse.46. Even if I'm right in the discussion, I stay silent to hurt my spouse.47. When I discuss with my spouse, I stay silent because I am afraid of not being able to control my anger.48. I feel right in our discussions.49. I have nothing to do with what I've been accused of.50. I'm not actually the one who's guilty about what I'm accused of.51. I'm not the one who's wrong about problems at home.52. 
I wouldn't hesitate to tell my spouse about her/his inadequacy.53. When I discuss, I remind my spouse of her/his inadequacy.54. I'm not afraid to tell my spouse about her/his incompetence. Generally, building a classifier in Python has a straightforward and user-friendly implementation. It usually consists of these steps:1. Import packages, functions, and classes2. Get data to work with and, if appropriate, transform it3. Create a classification model and train (or fit) it with existing data4. Evaluate your model to see if its performance is satisfactory5. Apply your model to make predictions Import packages, functions, and classes
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import metrics
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn import tree
###Output
_____no_output_____
###Markdown
Get data to work with and, if appropriate, transform it
###Code
df = pd.read_csv('divorce.csv',sep=';')
y=df.Class
x_data=df.drop(columns=['Class'])
df.head(10)
###Output
_____no_output_____
###Markdown
Data description
###Code
sns.countplot(x='Class',data=df,palette='hls')
plt.show()
count_no_sub = len(df[df['Class']==0])
count_sub = len(df[df['Class']==1])
pct_of_no_sub = count_no_sub/(count_no_sub+count_sub)
print("percentage of no divorce is", pct_of_no_sub*100)
pct_of_sub = count_sub/(count_no_sub+count_sub)
print("percentage of divorce", pct_of_sub*100)
###Output
_____no_output_____
###Markdown
Normalize data
###Code
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data)).values
x.head()
###Output
_____no_output_____
###Markdown
correlation of all atribute
###Code
plt.figure(figsize=(10,8))
sns.heatmap(df.corr(), cmap='viridis');
###Output
_____no_output_____
###Markdown
Split data set
###Code
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.4,random_state=400)
print("x_train: ",x_train.shape)
print("x_test: ",x_test.shape)
print("y_train: ",y_train.shape)
print("y_test: ",y_test.shape)
###Output
x_train: (102, 54)
x_test: (68, 54)
y_train: (102,)
y_test: (68,)
###Markdown
Create a classification model and train (or fit) it with existing data Step 1. Import the model you want to useStep 2. Make an instance of the ModelStep 3. Training the model on the data, storing the information learned from the dataStep 4. Predict labels for new data
###Code
clfb = GaussianNB()
clfb.fit(x_train, y_train.ravel())
y_predb = clfb.predict(x_test)# step 4
###Output
_____no_output_____
###Markdown
Report
###Code
print(classification_report(y_test, clfb.predict(x_test)))
print('Accuracy of Naive bayes classifier on test set: {:.2f}'.format(clfb.score(x_test, y_test)))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 33
1 1.00 1.00 1.00 35
accuracy 1.00 68
macro avg 1.00 1.00 1.00 68
weighted avg 1.00 1.00 1.00 68
Accuracy of Naive bayes classifier on test set: 1.00
###Markdown
Confusion Matrix
###Code
from sklearn.metrics import classification_report, confusion_matrix as cm
def confusionMatrix(y_pred,title,n):
plt.subplot(1,2,n)
ax=sns.heatmap(cm(y_test, y_pred)/sum(sum(cm(y_test, y_pred))), annot=True
,cmap='RdBu_r', vmin=0, vmax=0.52,cbar=False, linewidths=.5)
plt.title(title)
plt.ylabel('Actual outputs')
plt.xlabel('Prediction')
b, t=ax.get_ylim()
ax.set_ylim(b+.5, t-.5)
plt.subplot(1,2,n+1)
axx=sns.heatmap(cm(y_test, y_pred), annot=True
,cmap='plasma', vmin=0, vmax=40,cbar=False, linewidths=.5)
b, t=axx.get_ylim()
axx.set_ylim(b+.5, t-.5)
return
plt.figure(figsize=(8,6))
confusionMatrix(y_predb,'Naive bayes',1)
plt.show
###Output
_____no_output_____
###Markdown
Naive Bayes Classification Import Libs
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Create data
###Code
from sklearn.datasets import make_blobs
X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');
###Output
_____no_output_____
###Markdown
Apply Naive BayesWith GaussianNB estimator
###Code
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X, y);
###Output
_____no_output_____
###Markdown
Now let's generate some new data and predict the label:
###Code
rng = np.random.RandomState(0)
Xnew = [-6, -14] + [14, 18] * rng.rand(2000, 2)
ynew = model.predict(Xnew)
###Output
_____no_output_____
###Markdown
Now we can plot this new data to get an idea of where the decision boundary is:
###Code
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu')
lim = plt.axis()
plt.scatter(Xnew[:, 0], Xnew[:, 1], c=ynew, s=20, cmap='RdBu', alpha=0.1)
plt.axis(lim);
###Output
_____no_output_____
###Markdown
We see a slightly curved boundary in the classifications—in general, the boundary in Gaussian naive Bayes is quadratic.A nice piece of this Bayesian formalism is that it naturally allows for probabilistic classification, which we can compute using the ``predict_proba`` method:
###Code
yprob = model.predict_proba(Xnew)
yprob[-8:].round(2)
###Output
_____no_output_____
###Markdown
Importing the library
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn import model_selection
from sklearn.metrics import classification_report,confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
###Output
_____no_output_____
###Markdown
Loading the Iris dataset
###Code
iris=datasets.load_iris()
iris
#Converting the dataset to pandas dataframe
iris_dataframe=pd.DataFrame(np.c_[iris['data'], iris['target']],
columns= np.append(iris['feature_names'], ['target']))
iris_dataframe.head(50)
X=iris.data
X
Y=iris.target
Y
#Shape of data and target
iris.data.shape,iris.target.shape
iris_dataframe.describe()
iris_dataframe.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
sepal length (cm) 150 non-null float64
sepal width (cm) 150 non-null float64
petal length (cm) 150 non-null float64
petal width (cm) 150 non-null float64
target 150 non-null float64
dtypes: float64(5)
memory usage: 5.9 KB
###Markdown
Graphical representation
###Code
scalar = StandardScaler()
sns.set(style='whitegrid', context='notebook')
features_plot = iris.feature_names
#Show different levels of a categorical variable by the color of plot elements
iris_2 = sns.load_dataset("iris")
sns.pairplot(iris_2, size=2.0,diag_kind='hist',hue="species");
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
X_train,X_test,Y_train,Y_test=model_selection.train_test_split(X,Y,random_state=0)
###Output
_____no_output_____
###Markdown
Converting Data to Labelled DataExplains how the Labelled Data works
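As a small numeric sketch of the binning performed by the `makeLabelled` function defined below (toy values invented here; the cut points are 0.5×mean, the mean, and 1.5×mean):

```python
import numpy as np

col = np.array([1.0, 3.0, 4.0, 5.0, 7.0])                # toy feature column, mean = 4.0
cuts = [0.5 * col.mean(), col.mean(), 1.5 * col.mean()]  # [2.0, 4.0, 6.0]
labels = np.digitize(col, cuts)                          # 0, 1, 2 or 3, matching makeLabelled for these values
print(cuts, labels)                                      # [2.0, 4.0, 6.0] [0 1 2 2 3]
```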
###Code
from IPython.display import Image
Image(filename='Image/Labelled.png')
def makeLabelled(column):
second_limit=column.mean()
first_limit=0.5*second_limit
third_limit=1.5*second_limit
for i in range(0,len(column)):
if(column[i]<first_limit):
column[i]=0
elif(column[i]<second_limit):
column[i]=1
elif(column[i]<third_limit):
column[i]=2
else:
column[i]=3
return column
#Calling the makeLabelled over the data
for i in range(0,X.shape[-1]):
X[:,i]=makeLabelled(X[:,i])
###Output
_____no_output_____
###Markdown
Inbuilt Naive Bayes Sklearn
###Code
clf=GaussianNB()
clf
#Training the data
clf.fit(X_train,Y_train)
#Testing the data
Y_predict_1=clf.predict(X_test)
#Classification Report
print(classification_report(Y_test,Y_predict_1))
#Confusion Matrix
print(confusion_matrix(Y_test,Y_predict_1))
###Output
[[11 2 0]
[ 0 16 0]
[ 0 3 6]]
###Markdown
Implementing the Naive Bayes
###Code
#for training the data
def fit(x_train,y_train):
result={}#Storing the results in dictionary
class_value=set(y_train)
    #Iterating over the unique values of y_train
for current_class in class_value:
        #for each class there will be another dictionary
result[current_class]={}
#In this dictionary the last element is length of y_train
result['total_data']=len(y_train)
'''current_class_rows store the true or false value
when y_train is equal to current_class'''
current_class_rows=(y_train==current_class)
'''in x_train there is only that data for which
y_train equal to current_class or where current_class_row equals to 1'''
x_train_current=x_train[current_class_rows]
#Same for y_train_current
y_train_current=y_train[current_class_rows]
#total no of features
num_features=x_train.shape[1]
#in the current_class dictionary there will be last element
#which is length of y_train_current
result[current_class]['total_count']=len(y_train_current)
for j in range(1,num_features+1):
            #there will be one more nested dictionary for each feature
result[current_class][j]={}
            #Get all possible unique values of that feature
all_possible_values=set(x_train[:,j-1])
#iterate over all possible values
for current_value in all_possible_values:
                #in the final dictionary, store the fraction of current-class rows where this feature equals current_value
result[current_class][j][current_value]=(x_train_current[:,j-1]==current_value).mean()
return result
#Calculating the probability that x belongs to current class
def probability(dicti,x,current_class):
'''Log of probabilities is taken to prevent underflow (extremely small prob. values)
This changes the multiplication of values to addition, and division to subtraction.'''
    #output starts with the (log) prior probability of the class
output=np.log(dicti[current_class]['total_count'])-np.log(dicti['total_data'])
#all possible features
#-1 because there is one extra key that is total_count
num_features=len(dicti[current_class].keys())-1
for j in range(1,num_features+1):
xj=x[j-1]
#numerator of probability
'''+1 is there to apply Laplace corrections
and same is applied in denominator'''
count_current_class_with_value_xj=dicti[current_class][j][xj]+1
#denominator of probability
count_current_class=dicti[current_class]['total_count']+len(dicti[current_class][j].keys())
#required probability
current_xj_probability=np.log(count_current_class_with_value_xj)-np.log(count_current_class)
output=(output)+(current_xj_probability)
return output
def predictSinglepoint(dicti,x):
#all possible classes
classes=dicti.keys()
    #best_p is initialised to a value that cannot be a real probability, so the first class
    #always replaces it; the same idea applies to best_class
best_p=-1000
best_class=-1
    #used to make the loop body run at least once
first_run=True
for current_class in classes:
        #skip the extra 'total_data' key of the dictionary
if current_class=='total_data':
continue
        #probability() returns the (log) probability that this x belongs to current_class
p_current_class=probability(dicti,x,current_class)
#getting the best probability
if(first_run or p_current_class>best_p):
best_p=p_current_class
best_class=current_class
first_run=False
return best_class
#take the dictionary and x_test and predict a class for each point
def predict(dicti,x_test):
#y_pred store the test result
y_pred=[]
for x in x_test:
#calling the predictSinglepoint
x_class=predictSinglepoint(dicti,x)
y_pred.append(x_class)
return y_pred
#Training the data
dictionary=fit(X_train,Y_train)
#Testing the data
Y_predict_2=predict(dictionary,X_test)
#Classification Report
print(classification_report(Y_test,Y_predict_2))
#Confusion Report
print(confusion_matrix(Y_test,Y_predict_2))
###Output
[[13 0 0]
[ 0 16 0]
[ 0 3 6]]
###Markdown
Implementation of Naive Bayes
###Code
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report, roc_curve
from sklearn.model_selection import train_test_split
from math import sqrt
from math import pi
from math import exp
from sklearn import preprocessing
from sklearn.naive_bayes import GaussianNB
from sklearn import datasets
from itertools import cycle
from sklearn.metrics import roc_auc_score, auc
import matplotlib.pyplot as plt
%matplotlib inline
iris = datasets.load_iris()
###Output
_____no_output_____
###Markdown
Split the data by Class
###Code
def split_by_class(data):
splitted_data = dict()
for i in range(len(data)):
for j in range(data.shape[1]-1):
feature = data.iloc[i][j]
class_name = data.iloc[i][data.shape[1]-1]
if class_name not in splitted_data:
splitted_data[class_name] = list()
splitted_data[class_name].append(feature)
return splitted_data
###Output
_____no_output_____
###Markdown
Calculate data set statistics
###Code
def data_stats(data):
    # column-wise mean and standard deviation; no loop is needed
    mean = data.mean(axis=0)
    variance = data.var(axis=0)
    sd = variance**0.5
    length = len(data)
    return mean, sd, length
###Output
_____no_output_____
###Markdown
Calculate statistics for each class
###Code
def stats_by_class(data):
splitted = split_by_class(data)
splitted_stats = dict()
listt = []
x = data.shape[1] - 1
for key, value in splitted.items():
for i in range(len(splitted[0])):
if key in data['target'].unique():
mean = [np.mean(splitted[key][i : : x]) for i in range(len(splitted[key])//(len(splitted[key])//x))]
variance = [np.var(splitted[key][i : : x]) for i in range(len(splitted[key])//(len(splitted[key])//x))]
length = [len(splitted[key][i : : x]) for i in range(len(splitted[key])//(len(splitted[key])//x))]
splitted_stats[key] = mean, variance, length[0]
return splitted_stats
###Output
_____no_output_____
###Markdown
Calculate Gaussian PDF
###Code
def gaussian_pdf(x, mean, sd):
exponent = exp(-((x-mean)**2 / (2 * sd**2 )))
pdf = (1 / (sqrt(2 * pi) * sd)) * exponent
print(pdf)
return pdf
###Output
_____no_output_____
###Markdown
Calculate class probability
###Code
def calc_class_prob(data, listt):
stats = stats_by_class(data)
total_rows = sum([stats[x][2] for x in stats])
prob = dict()
for key, value in stats.items():
prob[key] = stats[key][2]/float(total_rows)
for i in range(len(value)):
print(listt)
mean, sd, _ = value[0][i], value[1][i], value[2]
prob[key] *= gaussian_pdf(listt[i], mean, sd)
return prob
def predict_nb(data, listt):
pred_list =[]
prob = calc_class_prob(data, listt)
max_value = max(prob.values())
label = [k for k, v in prob.items() if v == max_value]
return label, max_value
###Output
_____no_output_____
###Markdown
Testing with data downloaded from Kaggle
###Code
data = pd.DataFrame(data= np.c_[iris.data, iris.target],
columns= iris['feature_names'] + ['target'])
def predict(data):
predicted_list = []
prob_list = []
for i in range(len(data)):
label, pred_prob = predict_nb(data, data.iloc[i, :].tolist())
predicted_list.append(label)
prob_list.append(pred_prob)
return predicted_list, prob_list
predicted_list, prob_list = predict(data)
print(predicted_list)
len(predicted_list)
print(prob_list)
data['predicted'] = np.array(predicted_list)
data['predicted prob'] = np.array(prob_list)
prob_list = np.array(prob_list)
data.head()
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
df_confusion = confusion_matrix(data['target'], data['predicted'])
print(df_confusion)
###Output
[[49 1 0]
[ 0 39 11]
[ 0 7 43]]
###Markdown
Overall Accuracy
###Code
print(classification_report(data['target'], data['predicted']))
one_hot_target = pd.get_dummies(data['target']).squeeze()
one_hot_target = one_hot_target.to_numpy()
print(one_hot_target)
###Output
[[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[1 0 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 1 0]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]
[0 0 1]]
###Markdown
ROC
###Code
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(data['target'].unique())):
fpr[i], tpr[i], _ = roc_curve(one_hot_target[:, i], prob_list)
roc_auc[i] = auc(fpr[i], tpr[i])
colors = cycle(['blue', 'red', 'green'])
for i, color in zip(range(len(data['target'].unique())), colors):
plt.plot(fpr[i], tpr[i], color=color,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([-0.05, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC for IRIS Dataset')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Class-wise accuracy
###Code
def class_accuracy(class_name, df_confusion):
    # column total = number of samples predicted as class_name
    total = 0
    for i in range(len(df_confusion)):
        total = df_confusion[i][class_name] + total
    class_acc = df_confusion[class_name][class_name]/total
    print('class[{}], accuracy:{:.4f}' .format(class_name, class_acc))
class_accuracy(0, df_confusion)
class_accuracy(1, df_confusion)
class_accuracy(2, df_confusion)
###Output
class[2], accuracy:0.7963
###Markdown
Naive Bayes
###Code
X_train_gau = X_train[["temperature"]]
X_train_ber = X_train.drop(columns=["Age", "date_diff", "temperature", "total_visit_num"], axis=1)
X_train_mul = X_train[["Age", "date_diff", "total_visit_num"]]
from imblearn.under_sampling import *
X_samp_gau, y_samp_gau = EditedNearestNeighbours(kind_sel="all", n_neighbors=5, random_state=0).fit_sample(X_train_gau, y_train)
X_samp_ber, y_samp_ber = EditedNearestNeighbours(kind_sel="all", n_neighbors=5, random_state=0).fit_sample(X_train_ber, y_train)
X_samp_mul, y_samp_mul = EditedNearestNeighbours(kind_sel="all", n_neighbors=5, random_state=0).fit_sample(X_train_mul, y_train)
X_test_gau = X_test[["temperature"]]
X_test_ber = X_test.drop(columns=["Age", "date_diff", "temperature", "total_visit_num"], axis=1)
X_test_mul = X_test[["Age", "date_diff", "total_visit_num"]]
###Output
_____no_output_____
###Markdown
GaussianNB
###Code
from sklearn.naive_bayes import GaussianNB
model_norm = GaussianNB().fit(X_samp_gau, y_samp_gau)
y_pred_gau = model_norm.predict(X_test_gau)
y_pred_gau_prob = model_norm.predict_proba(X_test_gau)
gau_prob = y_pred_gau_prob[:,1]
confusion_matrix(y_test, y_pred_gau, labels=["Yes", "No"])
print(classification_report(y_test, y_pred_gau))
###Output
precision recall f1-score support
No 0.80 0.91 0.85 26462
Yes 0.23 0.11 0.15 6694
micro avg 0.75 0.75 0.75 33156
macro avg 0.52 0.51 0.50 33156
weighted avg 0.69 0.75 0.71 33156
###Markdown
BernoulliNB
###Code
from sklearn.naive_bayes import BernoulliNB
model_bern = BernoulliNB().fit(X_samp_ber, y_samp_ber)
y_pred_ber = model_bern.predict(X_test_ber)
y_pred_ber_prob = model_bern.predict_proba(X_test_ber)
ber_prob = y_pred_ber_prob[:,1]
confusion_matrix(y_test, y_pred_ber, labels=["Yes", "No"])
print(classification_report(y_test, y_pred_ber))
###Output
precision recall f1-score support
No 0.89 0.58 0.70 26462
Yes 0.30 0.72 0.42 6694
micro avg 0.61 0.61 0.61 33156
macro avg 0.60 0.65 0.56 33156
weighted avg 0.77 0.61 0.64 33156
###Markdown
MultinomialNB
###Code
from sklearn.naive_bayes import MultinomialNB
model_mult = MultinomialNB().fit(X_samp_mul, y_samp_mul)
y_pred_mul = model_mult.predict(X_test_mul)
y_pred_mul_prob = model_mult.predict_proba(X_test_mul)
mul_prob = y_pred_mul_prob[:,1]
confusion_matrix(y_test, y_pred_mul, labels=["Yes", "No"])
print(classification_report(y_test, y_pred_mul))
###Output
precision recall f1-score support
No 0.87 0.69 0.77 26462
Yes 0.32 0.59 0.41 6694
micro avg 0.67 0.67 0.67 33156
macro avg 0.59 0.64 0.59 33156
weighted avg 0.76 0.67 0.70 33156
###Markdown
GaussianNB + BernoulliNB + MultinomialNB Probability https://stackoverflow.com/questions/33477736/how-to-combine-the-outputs-of-multiple-naive-bayes-classifier
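A brief sketch of the algebra behind the combination used below, with $x_g$, $x_b$, $x_m$ standing for the Gaussian, Bernoulli and multinomial feature groups. By Bayes' rule $P(x_i \mid y) = P(y \mid x_i)P(x_i)/P(y)$, so under the naive independence assumption$$ P(y \mid x_g, x_b, x_m) \propto P(y)\prod_i P(x_i \mid y) \propto \frac{P(y \mid x_g)\,P(y \mid x_b)\,P(y \mid x_m)}{P(y)^2}, $$which is what the cell below computes for each class before renormalizing (strictly speaking, each class's own prior should appear in its denominator).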
###Code
# pos = pos_prior_ * (y_prob1[:,1]/pos_prior_) * (y_prob2[:,1]/pos_prior_) * (y_prob3[:,1]/pos_prior_) * (y_prob4[:,1]/pos_prior_)
# pos = y_prob1[:,1] * y_prob2[:,1] * y_prob3[:,1] * y_prob4[:,1] / pos_prior_**3
neg = (y_pred_gau_prob[:,0] * y_pred_ber_prob[:,0] * y_pred_mul_prob[:,0]) / (pos_prior ** 2)
pos = (gau_prob * ber_prob * mul_prob) / (pos_prior ** 2)
prob = pos / (neg + pos)
y_pred_total = [ "No" if p < 0.04 else "Yes" for p in prob ]
accuracy_score(y_test, y_pred_total)
confusion_matrix(y_test, y_pred_total, labels=["Yes", "No"])
print(classification_report(y_test, y_pred_total))
###Output
precision recall f1-score support
No 0.88 0.63 0.74 26462
Yes 0.31 0.67 0.43 6694
micro avg 0.64 0.64 0.64 33156
macro avg 0.60 0.65 0.58 33156
weighted avg 0.77 0.64 0.67 33156
###Markdown
Bayes Theorem `P(A|B) = P(B|A) * P(A) / P(B)` Naive Bayes- Suppose there are 2 categories; a person who goes to work either walks or drives. - Adding a new observation, let's predict whether that person will walk or drive.- Steps; - Step 1 : Calculate Bayes' theorem for walking, where the terms are named as follows, P(Walks|X) = P(X|Walks) * P(Walks) / P(X) 1. X = Features of the new data point 2. P(Walks) = Prior Probability 3. P(X) = Marginal Probability 4. P(X|Walks) = Likelihood 5. P(Walks|X) = Posterior Probability - Step 2 : Calculate Bayes' theorem for driving, where the terms are named as follows, P(Drives|X) = P(X|Drives) * P(Drives) / P(X) 1. X = Features of the new data point 2. P(Drives) = Prior Probability 3. P(X) = Marginal Probability 4. P(X|Drives) = Likelihood 5. P(Drives|X) = Posterior Probability - Step 3 : Compare P(Walks|X) vs P(Drives|X) **Calculating for Walkers**- `P(Walks)` = Number of Walkers / Total Observations - `P(Walks) = 10 / 30` - For the Marginal Probability, you draw a circle of some chosen radius around the new data point.- `P(X)` = Number of Similar Observations / Total Observations - `P(X) = 4 / 30` - For the Likelihood, you draw the same circle around the new data point and count only the walkers.- `P(X|Walks)` = Number of Similar Observations Among those who Walk / Total Number of Walkers - `P(X|Walks) = 3 / 10`- `P(Walks|X)` = [ (3/10) * (10/30) ] / (4/30) - `P(Walks|X) = 0.75` **Calculating for Drivers**- `P(Drives)` = Number of Drivers / Total Observations - `P(Drives) = 20 / 30` - For the Marginal Probability, you draw a circle of some chosen radius around the new data point.- `P(X)` = Number of Similar Observations / Total Observations - `P(X) = 4 / 30` - For the Likelihood, you draw the same circle around the new data point and count only the drivers.- `P(X|Drives)` = Number of Similar Observations Among those who Drive / Total Number of Drivers - `P(X|Drives) = 1 / 20`- `P(Drives|X)` = [ (1/20) * (20/30) ] / (4/30) - `P(Drives|X) = 0.25` **- So P(Walks|X) > P(Drives|X)** **Importing Packages**
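Before the package imports, the same arithmetic as a quick sketch, using the counts from the example above:

```python
# counts from the example: 10 walkers, 20 drivers, 30 observations in total,
# 4 similar points inside the circle, 3 of them walkers and 1 a driver
p_walks, p_drives = 10 / 30, 20 / 30
p_x = 4 / 30
p_x_given_walks, p_x_given_drives = 3 / 10, 1 / 20

print(p_x_given_walks  * p_walks  / p_x)   # 0.75
print(p_x_given_drives * p_drives / p_x)   # 0.25
```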
###Code
# Numpy allows us to work with arrays.
import numpy as np
# Matplotlib allows us to plot charts.
import matplotlib.pyplot as plt
# Pandas allows us to not only import the datasets but also create the matrix of features (independent)
# and the dependent variable.
import pandas as pd
###Output
_____no_output_____
###Markdown
**Importing Dataset** - The independent variables are usually in the first columns of the dataset and the dependent variable is usually in the last column.- X is the Independent Variable matrix.- Y is the Dependent Variable.
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(x)
print(y)
###Output
[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 1 1 0 0 0 1 0 0 0 1 0 1
1 1 0 0 1 1 0 1 1 0 1 1 0 1 0 0 0 1 1 0 1 1 0 1 0 1 0 1 0 0 1 1 0 1 0 0 1
1 0 1 1 0 1 1 0 0 1 0 0 1 1 1 1 1 0 1 1 1 1 0 1 1 0 1 0 1 0 1 1 1 1 0 0 0
1 1 0 1 1 1 1 1 0 0 0 1 1 0 0 1 0 1 0 1 1 0 1 0 1 1 0 1 1 0 0 0 1 1 0 1 0
0 1 0 1 0 0 1 1 0 0 1 1 0 1 1 0 0 1 0 1 0 1 1 1 0 1 0 1 1 1 0 1 1 1 1 0 1
1 1 0 1 0 1 0 0 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 0 1]
###Markdown
**Splitting the dataset into the Training set and Test set**
###Code
# Importing Package
from sklearn.model_selection import train_test_split
# Dividing training and test set.
# The best ratio is 80 - 20 for training and testing respectively.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 0)
print(x_train)
print(y_train)
###Output
[0 1 0 1 1 1 0 0 0 0 0 0 1 1 1 0 1 0 0 1 0 1 0 1 0 0 1 1 1 1 0 1 0 1 0 0 1
0 0 1 0 0 0 0 0 1 1 1 1 0 0 0 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 1 0 1
1 1 0 0 1 1 0 0 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 1 1 1 0 1 1 0
1 0 0 0 0 0 0 0 1 1 0 0 1 0 0 1 0 0 0 1 0 1 1 0 1 0 0 0 0 1 0 0 0 1 1 0 0
0 0 1 0 1 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 0 1 1 1 1 1 0 1 0 0 0 0 0 1 0 0
0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 1 1 0 0 0 0 0
0 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0
0 0 1 0 1 1 0 0 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 1
0 0 0 0]
###Markdown
**Feature Scaling**
###Code
# Importing Package
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
# Fitting and Transforming
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
print(x_train)
print(x_test)
###Output
[[-0.80480212 0.50496393]
[-0.01254409 -0.5677824 ]
[-0.30964085 0.1570462 ]
[-0.80480212 0.27301877]
[-0.30964085 -0.5677824 ]
[-1.10189888 -1.43757673]
[-0.70576986 -1.58254245]
[-0.21060859 2.15757314]
[-1.99318916 -0.04590581]
[ 0.8787462 -0.77073441]
[-0.80480212 -0.59677555]
[-1.00286662 -0.42281668]
[-0.11157634 -0.42281668]
[ 0.08648817 0.21503249]
[-1.79512465 0.47597078]
[-0.60673761 1.37475825]
[-0.11157634 0.21503249]
[-1.89415691 0.44697764]
[ 1.67100423 1.75166912]
[-0.30964085 -1.37959044]
[-0.30964085 -0.65476184]
[ 0.8787462 2.15757314]
[ 0.28455268 -0.53878926]
[ 0.8787462 1.02684052]
[-1.49802789 -1.20563157]
[ 1.07681071 2.07059371]
[-1.00286662 0.50496393]
[-0.90383437 0.30201192]
[-0.11157634 -0.21986468]
[-0.60673761 0.47597078]
[-1.6960924 0.53395707]
[-0.11157634 0.27301877]
[ 1.86906873 -0.27785096]
[-0.11157634 -0.48080297]
[-1.39899564 -0.33583725]
[-1.99318916 -0.50979612]
[-1.59706014 0.33100506]
[-0.4086731 -0.77073441]
[-0.70576986 -1.03167271]
[ 1.07681071 -0.97368642]
[-1.10189888 0.53395707]
[ 0.28455268 -0.50979612]
[-1.10189888 0.41798449]
[-0.30964085 -1.43757673]
[ 0.48261718 1.22979253]
[-1.10189888 -0.33583725]
[-0.11157634 0.30201192]
[ 1.37390747 0.59194336]
[-1.20093113 -1.14764529]
[ 1.07681071 0.47597078]
[ 1.86906873 1.51972397]
[-0.4086731 -1.29261101]
[-0.30964085 -0.3648304 ]
[-0.4086731 1.31677196]
[ 2.06713324 0.53395707]
[ 0.68068169 -1.089659 ]
[-0.90383437 0.38899135]
[-1.20093113 0.30201192]
[ 1.07681071 -1.20563157]
[-1.49802789 -1.43757673]
[-0.60673761 -1.49556302]
[ 2.1661655 -0.79972756]
[-1.89415691 0.18603934]
[-0.21060859 0.85288166]
[-1.89415691 -1.26361786]
[ 2.1661655 0.38899135]
[-1.39899564 0.56295021]
[-1.10189888 -0.33583725]
[ 0.18552042 -0.65476184]
[ 0.38358493 0.01208048]
[-0.60673761 2.331532 ]
[-0.30964085 0.21503249]
[-1.59706014 -0.19087153]
[ 0.68068169 -1.37959044]
[-1.10189888 0.56295021]
[-1.99318916 0.35999821]
[ 0.38358493 0.27301877]
[ 0.18552042 -0.27785096]
[ 1.47293972 -1.03167271]
[ 0.8787462 1.08482681]
[ 1.96810099 2.15757314]
[ 2.06713324 0.38899135]
[-1.39899564 -0.42281668]
[-1.20093113 -1.00267957]
[ 1.96810099 -0.91570013]
[ 0.38358493 0.30201192]
[ 0.18552042 0.1570462 ]
[ 2.06713324 1.75166912]
[ 0.77971394 -0.8287207 ]
[ 0.28455268 -0.27785096]
[ 0.38358493 -0.16187839]
[-0.11157634 2.21555943]
[-1.49802789 -0.62576869]
[-1.29996338 -1.06066585]
[-1.39899564 0.41798449]
[-1.10189888 0.76590222]
[-1.49802789 -0.19087153]
[ 0.97777845 -1.06066585]
[ 0.97777845 0.59194336]
[ 0.38358493 0.99784738]]
###Markdown
**Training the Naive Bayes on the Training set**
###Code
# Importing Package
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# Fitting
classifier.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
**Predicting a new result**
###Code
print(classifier.predict(sc.transform([[30, 87000]])))
###Output
[0]
###Markdown
**Predicting the Test set results**
###Code
# Predicting
y_pred = classifier.predict(x_test)
# Concatenating and Reshaping
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
# First column is the predicted value and Second column is the real value.
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 0]
[1 1]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[0 1]
[0 0]
[1 1]
[0 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[1 1]]
###Markdown
**Making the Confusion Matrix**
###Code
# Importing Package
from sklearn.metrics import confusion_matrix, accuracy_score
# Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# Accuracy Score
accuracy_score(y_test, y_pred)
###Output
[[65 3]
[ 7 25]]
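###Markdown
The value returned by `accuracy_score` is not captured in the output above, but it can be recovered from the printed confusion matrix.
###Code
# Counts read from the confusion matrix printed above
tn, fp, fn, tp = 65, 3, 7, 25
accuracy = (tn + tp) / (tn + fp + fn + tp)   # 90 / 100 = 0.90
###Output
_____no_output_____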
###Markdown
**Visualising the Training Set results**
###Code
# Importing Package
from matplotlib.colors import ListedColormap
x_set, y_set = sc.inverse_transform(x_train), y_train
x1, x2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 10, stop = x_set[:, 0].max() + 10, step = 1),
np.arange(start = x_set[:, 1].min() - 1000, stop = x_set[:, 1].max() + 1000, step = 1))
plt.contourf(x1, x2, classifier.predict(sc.transform(np.array([x1.ravel(), x2.ravel()]).T)).reshape(x1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
**Visualising the Test Set results**
###Code
# Importing Package
from matplotlib.colors import ListedColormap
x_set, y_set = sc.inverse_transform(x_test), y_test
x1, x2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 10, stop = x_set[:, 0].max() + 10, step = 1),
np.arange(start = x_set[:, 1].min() - 1000, stop = x_set[:, 1].max() + 1000, step = 1))
plt.contourf(x1, x2, classifier.predict(sc.transform(np.array([x1.ravel(), x2.ravel()]).T)).reshape(x1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Separating the data
###Code
X_dados = dados.iloc[:,1:5].values
X_dados
y_dados = dados.iloc[:,-1].values
y_dados
###Output
_____no_output_____
###Markdown
Transforming the categorical data into numerical data
###Code
# LabelEncoder (from scikit-learn) converts each categorical column into integer codes
from sklearn.preprocessing import LabelEncoder
label_encoder_tempo = LabelEncoder()
label_encoder_temperatura = LabelEncoder()
label_encoder_umidade = LabelEncoder()
label_encoder_vento = LabelEncoder()
X_dados[:,0] = label_encoder_tempo.fit_transform(X_dados[:,0])
X_dados[:,1] = label_encoder_temperatura.fit_transform(X_dados[:,1])
X_dados[:,2] = label_encoder_umidade.fit_transform(X_dados[:,2])
X_dados[:,3] = label_encoder_vento.fit_transform(X_dados[:,3])
X_dados
X_train, X_test, y_train, y_test = train_test_split(X_dados, y_dados, test_size = 0.3, random_state=0)
naive_dados = GaussianNB() # Gaussian (normal) distribution
# Training the algorithm (note that it is fit on the full dataset here, not on the split created above)
naive_dados.fit(X_dados,y_dados)
# Accuracy of the model
naive_dados.score(X_dados,y_dados)
###Output
_____no_output_____
###Markdown
Predicting a new data point
###Code
# rainy(0), hot(2), high(0) and strong(0)
previsao = naive_dados.predict([[0,2,0,0]])
previsao
#Classes
naive_dados.classes_
# Number of training samples per class
naive_dados.class_count_
# Prior probability of each class
naive_dados.class_prior_
###Output
_____no_output_____
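###Markdown
For reference, since no priors are passed to `GaussianNB` above, `class_prior_` is just the empirical class frequency, i.e. `class_count_` divided by the total number of training samples. A minimal sketch with a small hypothetical label vector:
###Code
import numpy as np
# Hypothetical labels, only to illustrate how the empirical priors are obtained
y_exemplo = np.array(['nao', 'nao', 'sim', 'sim', 'sim'])
classes, counts = np.unique(y_exemplo, return_counts=True)
empirical_priors = counts / counts.sum()   # [0.4, 0.6], mirroring class_count_ / n_samples
###Output
_____no_output_____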
###Markdown
Calculate Priors
###Code
# Number of males
n_male = data['Gender'][data['Gender'] == 'male'].count()
# Number of females
n_female = data['Gender'][data['Gender'] == 'female'].count()
# Total rows
total_ppl = data['Gender'].count()
# Number of males divided by the total rows
P_male = n_male/total_ppl
# Number of females divided by the total rows
P_female = n_female/total_ppl
###Output
_____no_output_____
###Markdown
Calculate Likelihood
###Code
# Group the data by gender and calculate the means of each feature
data_means = data.groupby('Gender').mean()
# View the values
data_means
# Group the data by gender and calculate the variance of each feature
data_variance = data.groupby('Gender').var()
# View the values
data_variance
# Means for male
male_height_mean = data_means['Height'][data_variance.index == 'male'].values[0]
male_weight_mean = data_means['Weight'][data_variance.index == 'male'].values[0]
male_footsize_mean = data_means['Foot_Size'][data_variance.index == 'male'].values[0]
# Variance for male
male_height_variance = data_variance['Height'][data_variance.index == 'male'].values[0]
male_weight_variance = data_variance['Weight'][data_variance.index == 'male'].values[0]
male_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'male'].values[0]
# Means for female
female_height_mean = data_means['Height'][data_variance.index == 'female'].values[0]
female_weight_mean = data_means['Weight'][data_variance.index == 'female'].values[0]
female_footsize_mean = data_means['Foot_Size'][data_variance.index == 'female'].values[0]
# Variance for female
female_height_variance = data_variance['Height'][data_variance.index == 'female'].values[0]
female_weight_variance = data_variance['Weight'][data_variance.index == 'female'].values[0]
female_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'female'].values[0]
# Create a function that calculates p(x | y):
def p_x_given_y(x, mean_y, variance_y):
# Input the arguments into a probability density function
p = 1/(np.sqrt(2*np.pi*variance_y)) * np.exp((-(x-mean_y)**2)/(2*variance_y))
# return p
return p
# Numerator of the posterior if the unclassified observation is a male
P_male * \
p_x_given_y(person['Height'][0], male_height_mean, male_height_variance) * \
p_x_given_y(person['Weight'][0], male_weight_mean, male_weight_variance) * \
p_x_given_y(person['Foot_Size'][0], male_footsize_mean, male_footsize_variance)
# Numerator of the posterior if the unclassified observation is a female
P_female * \
p_x_given_y(person['Height'][0], female_height_mean, female_height_variance) * \
p_x_given_y(person['Weight'][0], female_weight_mean, female_weight_variance) * \
p_x_given_y(person['Foot_Size'][0], female_footsize_mean, female_footsize_variance)
###Output
_____no_output_____
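###Markdown
For reference, the helper `p_x_given_y` above is just the Gaussian probability density evaluated with the class-conditional mean and variance estimated from the grouped data, $p(x \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\left(-\frac{(x-\mu_y)^2}{2\sigma_y^2}\right)$, and the two products computed afterwards are the unnormalised posterior numerators $P(\text{male})\prod_i p(x_i \mid \text{male})$ and $P(\text{female})\prod_i p(x_i \mid \text{female})$.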
###Markdown
FROM SCRATCH
###Code
data = pd.DataFrame()
# Create our target variable
data['Gender'] = [1,1,1,1,0,0,0,0]
# Create our feature variables
data['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75]
data['Weight'] = [180,190,170,165,100,150,130,150]
data['Foot_Size'] = [12,11,12,10,6,8,7,9]
data
X = data.drop(['Gender'],axis=1)
y=data.Gender
# splitting X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)
# training the model on training set
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
# making predictions on the testing set
y_pred = gnb.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y, gnb.predict(X)))
cm = confusion_matrix(y, gnb.predict(X))
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(cm)
ax.grid(False)
ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s'))
ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual 0s', 'Actual 1s'))
ax.set_ylim(1.5, -0.5)
for i in range(2):
for j in range(2):
ax.text(j, i, cm[i, j], ha='center', va='center', color='red')
plt.show()
# Create our target variable
data1 = pd.DataFrame()
# Create our feature variables
data1['Height'] = [6]
data1['Weight'] = [130]
data1['Foot_Size'] = [8]
y_pred = gnb.predict(data1)
if y_pred==0:
print ("female")
else:
print ("male")
###Output
female
###Markdown
DonorsChoose DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers is needed to manually screen each submission before it's approved to be posted on the DonorsChoose.org website. Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve: How to scale current manual processes and resources to screen 500,000 projects so that they can be posted as quickly and as efficiently as possible How to increase the consistency of project vetting across different volunteers to improve the experience for teachers How to focus volunteer time on the applications that need the most assistance The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval. About the DonorsChoose Data SetThe `train.csv` data set provided by DonorsChoose contains the following features:Feature | Description ----------|---------------**`project_id`** | A unique identifier for the proposed project. **Example:** `p036502` **`project_title`** | Title of the project. **Examples:**Art Will Make You Happy!First Grade Fun **`project_grade_category`** | Grade level of students for which the project is targeted. One of the following enumerated values: Grades PreK-2Grades 3-5Grades 6-8Grades 9-12 **`project_subject_categories`** | One or more (comma-separated) subject categories for the project from the following enumerated list of values: Applied LearningCare & HungerHealth & SportsHistory & CivicsLiteracy & LanguageMath & ScienceMusic & The ArtsSpecial NeedsWarmth **Examples:** Music & The ArtsLiteracy & Language, Math & Science **`school_state`** | State where school is located ([Two-letter U.S. postal code](https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviationsPostal_codes)). **Example:** `WY`**`project_subject_subcategories`** | One or more (comma-separated) subject subcategories for the project. **Examples:** LiteracyLiterature & Writing, Social Sciences **`project_resource_summary`** | An explanation of the resources needed for the project. **Example:** My students need hands on literacy materials to manage sensory needs! **`project_essay_1`** | First application essay* **`project_essay_2`** | Second application essay* **`project_essay_3`** | Third application essay* **`project_essay_4`** | Fourth application essay* **`project_submitted_datetime`** | Datetime when project application was submitted. **Example:** `2016-04-28 12:43:56.245` **`teacher_id`** | A unique identifier for the teacher of the proposed project. **Example:** `bdf8baa8fedef6bfeec7ae4ff1c15c56` **`teacher_prefix`** | Teacher's title. One of the following enumerated values: nanDr.Mr.Mrs.Ms.Teacher. **`teacher_number_of_previously_posted_projects`** | Number of project applications previously submitted by the same teacher. **Example:** `2` * See the section Notes on the Essay Data for more details about these features.Additionally, the `resources.csv` data set provides more data about the resources required for each project. 
Each line in this file represents a resource required by a project:Feature | Description ----------|---------------**`id`** | A `project_id` value from the `train.csv` file. **Example:** `p036502` **`description`** | Description of the resource. **Example:** `Tenor Saxophone Reeds, Box of 25` **`quantity`** | Quantity of the resource required. **Example:** `3` **`price`** | Price of the resource required. **Example:** `9.95` **Note:** Many projects require multiple resources. The `id` value corresponds to a `project_id` in train.csv, so you use it as a key to retrieve all resources needed for a project:The data set contains the following label (the value you will attempt to predict):Label | Description----------|---------------`project_is_approved` | A binary flag indicating whether DonorsChoose approved the project. A value of `0` indicates the project was not approved, and a value of `1` indicates the project was approved. Notes on the Essay DataPrior to May 17, 2016, the prompts for the essays were as follows:__project_essay_1:__ "Introduce us to your classroom"__project_essay_2:__ "Tell us more about your students"__project_essay_3:__ "Describe how your students will use the materials you're requesting"__project_essay_4:__ "Close by sharing why your project will make a difference"Starting on May 17, 2016, the number of essays was reduced from 4 to 2, and the prompts for the first 2 essays were changed to the following:__project_essay_1:__ "Describe your students: What makes your students special? Specific details about their background, your neighborhood, and your school are all helpful."__project_essay_2:__ "About your project: How will these materials make a difference in your students' learning and improve their school lives?"For all projects with project_submitted_datetime of 2016-05-17 and later, the values of project_essay_3 and project_essay_4 will be NaN.
###Code
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
"""from gensim.models import Word2Vec
from gensim.models import KeyedVectors"""
import pickle
#from tqdm import tqdm
import os
from scipy.sparse import hstack
from sklearn.preprocessing import StandardScaler
"""from plotly import plotly
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
from collections import Counter"""
###Output
_____no_output_____
###Markdown
1.1 Reading Data
###Code
import pandas as pd
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
# how to replace elements in list python: https://stackoverflow.com/a/2582163/4084039
cols = ['Date' if x=='project_submitted_datetime' else x for x in list(project_data.columns)]
#sort dataframe based on time pandas python: https://stackoverflow.com/a/49702492/4084039
project_data['Date'] = pd.to_datetime(project_data['project_submitted_datetime'])
project_data.drop('project_submitted_datetime', axis=1, inplace=True)
project_data.sort_values(by=['Date'], inplace=True)
# how to reorder columns pandas python: https://stackoverflow.com/a/13148611/4084039
project_data = project_data[cols]
project_data.head(2)
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
###Output
Number of data points in train data (1541272, 4)
['id' 'description' 'quantity' 'price']
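###Markdown
Because a single project usually has several rows in `resources.csv`, the usual next step (also done further below before modelling) is to aggregate price and quantity per `id` and merge the totals back onto the project data. A minimal sketch with made-up rows:
###Code
import pandas as pd
# Hypothetical miniature versions of the two files
projects_demo = pd.DataFrame({'id': ['p036502'], 'project_title': ['Art Will Make You Happy!']})
resources_demo = pd.DataFrame({'id': ['p036502', 'p036502'],
                               'price': [9.95, 14.50],
                               'quantity': [3, 1]})
# One row per project with total price and total quantity, attached back to the project data
totals_demo = resources_demo.groupby('id').agg({'price': 'sum', 'quantity': 'sum'}).reset_index()
projects_demo = pd.merge(projects_demo, totals_demo, on='id', how='left')
###Output
_____no_output_____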
###Markdown
1.2 preprocessing of `project_subject_categories`
###Code
catogories = list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in catogories:
temp = ""
# consider we have text like this "Math & Science, Warmth, Care & Hunger"
for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
        if 'The' in j.split(): # this will split each category on spaces: "Math & Science" => "Math", "&", "Science"
            j=j.replace('The','') # if we have the word "The" we replace it with '' (i.e. removing 'The')
        j = j.replace(' ','') # we replace every ' ' (space) with '' (empty), e.g. "Math & Science" => "Math&Science"
        temp+=j.strip()+" " #" abc ".strip() will return "abc", remove the trailing spaces
    temp = temp.replace('&','_') # we replace '&' with '_', e.g. "Math&Science" => "Math_Science"
cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
from collections import Counter
my_counter = Counter()
for word in project_data['clean_categories'].values:
my_counter.update(word.split())
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kc: kc[1]))
###Output
_____no_output_____
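###Markdown
To make the effect of the loop above concrete, this is what it does to one raw value (the same example used in its comments):
###Code
# "Math & Science, Warmth, Care & Hunger" -> "Math_Science Warmth Care_Hunger"
raw_value = "Math & Science, Warmth, Care & Hunger"
temp = ""
for part in raw_value.split(','):
    if 'The' in part.split():
        part = part.replace('The', '')
    part = part.replace(' ', '')
    temp += part.strip() + " "
cleaned_value = temp.replace('&', '_').strip()
###Output
_____no_output_____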
###Markdown
1.3 preprocessing of `project_subject_subcategories`
###Code
sub_catogories = list(project_data['project_subject_subcategories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_cat_list = []
for i in sub_catogories:
temp = ""
# consider we have text like this "Math & Science, Warmth, Care & Hunger"
for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
        if 'The' in j.split(): # this will split each category on spaces: "Math & Science" => "Math", "&", "Science"
            j=j.replace('The','') # if we have the word "The" we replace it with '' (i.e. removing 'The')
        j = j.replace(' ','') # we replace every ' ' (space) with '' (empty), e.g. "Math & Science" => "Math&Science"
temp +=j.strip()+" "#" abc ".strip() will return "abc", remove the trailing spaces
temp = temp.replace('&','_')
sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
my_counter = Counter()
for word in project_data['clean_subcategories'].values:
my_counter.update(word.split())
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
###Output
_____no_output_____
###Markdown
1.3 Text preprocessing
###Code
# merge two column text dataframe:
project_data["essay"] = project_data["project_essay_1"].map(str) +\
project_data["project_essay_2"].map(str) + \
project_data["project_essay_3"].map(str) + \
project_data["project_essay_4"].map(str)
project_data.head(2)
# printing some random reviews
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
print(project_data['essay'].values[99999])
print("="*50)
# https://stackoverflow.com/a/47091490/4084039
import re
def decontracted(phrase):
# specific
phrase = re.sub(r"won't", "will not", phrase)
phrase = re.sub(r"can\'t", "can not", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
return phrase
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
#remove spacial character: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"]
###Output
_____no_output_____
###Markdown
Limiting data points to 50K due to system constraints
###Code
project_data = project_data.iloc[0:50000,:]
X=project_data.drop(columns=["project_is_approved"])
Y=project_data["project_is_approved"]
X["project_grade_category"].value_counts()
X["project_grade_category"][X["project_grade_category"]=="Grades PreK-2"]="GradeA"
X["project_grade_category"][X["project_grade_category"]=="Grades 3-5"]="GradeB"
X["project_grade_category"][X["project_grade_category"]=="Grades 6-8"]="GradeC"
X["project_grade_category"][X["project_grade_category"]=="Grades 9-12"]="GradeD"
X.columns
###Output
_____no_output_____
###Markdown
Preprocessing of `essay`
###Code
# Combining all the above preprocessing steps
preprocessed_essays = []
# tqdm is for printing the status bar
for sentance in (X['essay'].values):
sent = decontracted(sentance)
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
# https://gist.github.com/sebleier/554280
sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
preprocessed_essays.append(sent.lower().strip())
X['essay']=preprocessed_essays
###Output
_____no_output_____
###Markdown
1.4 Preprocessing of `project_title`
###Code
# Combining all the above preprocessing steps
#from tqdm import tqdm
preprocessed_project_title = []
# tqdm is for printing the status bar
for sentance in (X['project_title'].values):
sent = decontracted(sentance)
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
# https://gist.github.com/sebleier/554280
sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
preprocessed_project_title.append(sent.lower().strip())
X['project_title']=preprocessed_project_title
###Output
_____no_output_____
###Markdown
1.5 Preparing data for models we are going to consider - school_state : categorical data - clean_categories : categorical data - clean_subcategories : categorical data - project_grade_category : categorical data - teacher_prefix : categorical data - project_title : text data - text : text data - project_resource_summary: text data (optional) - quantity : numerical (optional) - teacher_number_of_previously_posted_projects : numerical - price : numerical TASK: Naive Bayes Apply Multinomial NaiveBayes on these feature sets Set 1: categorical, numerical features + project_title(BOW) + preprocessed_essay (BOW) Set 2: categorical, numerical features + project_title(TFIDF)+ preprocessed_essay (TFIDF) Hyperparameter tuning (find the best Alpha) Find the best hyperparameter which will give the maximum AUC value Consider a wide range of alpha values for hyperparameter tuning, starting as low as 0.00001 Find the best hyperparameter using k-fold cross validation or simple cross validation data Use gridsearch cv or randomsearch cv or you can also write your own for loops to do this task of hyperparameter tuning Feature importance Find the top 10 features of the positive class and top 10 features of the negative class for both feature sets Set 1 and Set 2 using values of the `feature_log_prob_` attribute of MultinomialNB and print their corresponding feature names Representation of results You need to plot the performance of the model on both train data and cross validation data for each hyperparameter, as shown in the figure. Here on the X-axis you will have alpha values; since they have a wide range, apply a log function to those alpha values to represent them on the graph. Once you have found the best hyperparameter, train your model with it, find the AUC on test data and plot the ROC curve on both train and test. Along with plotting the ROC curve, print the confusion matrix with predicted and original labels of the test data points. Please visualize your confusion matrices using seaborn heatmaps. Conclusion You need to summarize the results at the end of the notebook in table format. To print out a table please refer to the prettytable library link 2. Naive Bayes 2.1 Splitting data into Train and cross validation(or test): Stratified Sampling
###Code
# splitting data into train and test
# sklearn.cross_validation was removed in newer scikit-learn versions; use sklearn.model_selection instead
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.20, random_state=42,stratify=Y)
X_train,X_cv,Y_train,Y_cv=train_test_split(X_train,Y_train,test_size=0.20, random_state=42,stratify=Y_train)
for i in [X_train,Y_train,X_cv,Y_cv,X_test,Y_test]:
print(i.shape)
###Output
(32000, 17)
(32000,)
(8000, 17)
(8000,)
(10000, 17)
(10000,)
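###Markdown
A quick way to verify that the stratified split preserved the class balance is to compare the fraction of approved projects in each part, using the `Y_train`, `Y_cv` and `Y_test` created above.
###Code
# Fraction of approved projects (label 1) in each split;
# with stratify=Y these fractions should be nearly identical
approval_rates = {name: part.mean() for name, part in
                  [('train', Y_train), ('cv', Y_cv), ('test', Y_test)]}
###Output
_____no_output_____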
###Markdown
2.2 Make Data Model Ready: encoding numerical, categorical features Encoding project_subject categorical
###Code
# we use count vectorizer to convert the clean_categories values into one-hot encoded features
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(vocabulary=list(sorted_cat_dict.keys()), lowercase=True, binary=True)
categories_one_hot_X_train = vectorizer.fit_transform(X_train['clean_categories'])
categories_one_hot_X_cv = vectorizer.transform(X_cv['clean_categories'])
categories_one_hot_X_test = vectorizer.transform(X_test['clean_categories'])
print(vectorizer.get_feature_names())
print("categories_one_hot_X_train : {0} \ncategories_one_hot_X_cv : {1} \ncategories_one_hot_X_test : {2} ".format(categories_one_hot_X_train.shape,categories_one_hot_X_cv.shape,categories_one_hot_X_test.shape))
###Output
['Warmth', 'Care_Hunger', 'History_Civics', 'Music_Arts', 'AppliedLearning', 'SpecialNeeds', 'Health_Sports', 'Math_Science', 'Literacy_Language']
categories_one_hot_X_train : (32000, 9)
categories_one_hot_X_cv : (8000, 9)
categories_one_hot_X_test : (10000, 9)
###Markdown
Encoding project_subject sub categorical
###Code
# we use count vectorizer to convert the clean_subcategories values into one-hot encoded features
vectorizer = CountVectorizer(vocabulary=list(sorted_sub_cat_dict.keys()), lowercase=False, binary=True)
sub_categories_one_hot_X_train = vectorizer.fit_transform(X_train['clean_subcategories'].values)
sub_categories_one_hot_X_cv = vectorizer.transform(X_cv['clean_subcategories'].values)
sub_categories_one_hot_X_test = vectorizer.transform(X_test['clean_subcategories'].values)
print(vectorizer.get_feature_names())
print("sub_categories_one_hot_X_train : {0}\nsub_categories_one_hot_X_cv : {1}\nsub_categories_one_hot_X_test : {2}".\
format(sub_categories_one_hot_X_train.shape,sub_categories_one_hot_X_cv.shape,sub_categories_one_hot_X_test.shape))
###Output
['Economics', 'CommunityService', 'FinancialLiteracy', 'ParentInvolvement', 'Extracurricular', 'Civics_Government', 'ForeignLanguages', 'NutritionEducation', 'Warmth', 'Care_Hunger', 'SocialSciences', 'PerformingArts', 'CharacterEducation', 'TeamSports', 'Other', 'College_CareerPrep', 'Music', 'History_Geography', 'Health_LifeScience', 'EarlyDevelopment', 'ESL', 'Gym_Fitness', 'EnvironmentalScience', 'VisualArts', 'Health_Wellness', 'AppliedSciences', 'SpecialNeeds', 'Literature_Writing', 'Mathematics', 'Literacy']
sub_categories_one_hot_X_train : (32000, 30)
sub_categories_one_hot_X_cv : (8000, 30)
sub_categories_one_hot_X_test : (10000, 30)
###Markdown
Encoding school_state categorical
###Code
# we use count vectorizer to convert the school_state values into one-hot encoded features
vectorizer = CountVectorizer()
school_state_one_hot_X_train = vectorizer.fit_transform(X_train['school_state'].values)
school_state_one_hot_X_cv = vectorizer.transform(X_cv['school_state'].values)
school_state_one_hot_X_test = vectorizer.transform(X_test['school_state'].values)
print(vectorizer.get_feature_names())
print("school_state_one_hot_X_train : {} \nschool_state_one_hot_X_cv : {} \nschool_state_one_hot_X_test : {}".\
format(school_state_one_hot_X_train.shape,school_state_one_hot_X_cv.shape,school_state_one_hot_X_test.shape))
###Output
['ak', 'al', 'ar', 'az', 'ca', 'co', 'ct', 'dc', 'de', 'fl', 'ga', 'hi', 'ia', 'id', 'il', 'in', 'ks', 'ky', 'la', 'ma', 'md', 'me', 'mi', 'mn', 'mo', 'ms', 'mt', 'nc', 'nd', 'ne', 'nh', 'nj', 'nm', 'nv', 'ny', 'oh', 'ok', 'or', 'pa', 'ri', 'sc', 'sd', 'tn', 'tx', 'ut', 'va', 'vt', 'wa', 'wi', 'wv', 'wy']
school_state_one_hot_X_train : (32000, 51)
school_state_one_hot_X_cv : (8000, 51)
school_state_one_hot_X_test : (10000, 51)
###Markdown
Encoding teacher_prefix categorical
###Code
# we use count vectorizer to convert the values into one hot encoded features
#https://stackoverflow.com/questions/39303912/tfidfvectorizer-in-scikit-learn-valueerror-np-nan-is-an-invalid-document
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(lowercase=False, binary=True,encoding='utf-8',vocabulary=['Dr', 'Mr', 'Mrs', 'Ms', 'Teacher'])
vectorizer.fit(X_train['teacher_prefix'].values.astype('U'))
print(vectorizer.get_feature_names())
teacher_prefix_one_hot_X_train = vectorizer.transform(X_train['teacher_prefix'].values.astype('U'))
teacher_prefix_one_hot_X_cv = vectorizer.transform(X_cv['teacher_prefix'].values.astype('U'))
teacher_prefix_one_hot_X_test = vectorizer.transform(X_test['teacher_prefix'].values.astype('U'))
print("teacher_prefix_one_hot_X_train : {} \nteacher_prefix_one_hot_X_cv : {} \nteacher_prefix_one_hot_X_test : {}".\
format(teacher_prefix_one_hot_X_train.shape,teacher_prefix_one_hot_X_cv.shape,teacher_prefix_one_hot_X_test.shape))
###Output
['Dr', 'Mr', 'Mrs', 'Ms', 'Teacher']
teacher_prefix_one_hot_X_train : (32000, 5)
teacher_prefix_one_hot_X_cv : (8000, 5)
teacher_prefix_one_hot_X_test : (10000, 5)
###Markdown
Encoding project_grade_category categorical
###Code
# we use count vectorizer to convert the project_grade_category values into one-hot encoded features
vectorizer = CountVectorizer(lowercase=False,)
grade_one_hot_X_train=vectorizer.fit_transform(X_train["project_grade_category"])
# use transform (not fit_transform) on the cv/test sets so their columns match the training encoding
grade_one_hot_X_cv=vectorizer.transform(X_cv["project_grade_category"])
grade_one_hot_X_test=vectorizer.transform(X_test["project_grade_category"])
vectorizer.get_feature_names()
print("grade_one_hot_X_train : {} \ngrade_one_hot_X_cv : {} \ngrade_one_hot_X_test : {}".\
format(grade_one_hot_X_train.shape,grade_one_hot_X_cv.shape,grade_one_hot_X_test.shape))
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
X_train = pd.merge(X_train, price_data, on='id', how='left')
X_cv = pd.merge(X_cv, price_data, on='id', how='left')
X_test = pd.merge(X_test, price_data, on='id', how='left')
# check this one: https://www.youtube.com/watch?v=0HOqOcln3Z4&t=530s
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.preprocessing import StandardScaler
# price_standardized = standardScalar.fit(project_data['price'].values)
# this will rise the error
# ValueError: Expected 2D array, got 1D array instead: array=[725.05 213.03 329. ... 399. 287.73 5.5 ].
# Reshape your data either using array.reshape(-1, 1)
price_scalar = StandardScaler()
price_scalar.fit(X_train['price'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
# Now standardize the data with the above mean and variance.
# The standardized prices are left commented out, presumably because MultinomialNB only accepts
# non-negative feature values, so the raw (non-negative) prices are used instead.
"""price_standardized_X_train = price_scalar.transform(X_train['price'].values.reshape(-1, 1))
price_standardized_X_cv = price_scalar.transform(X_cv['price'].values.reshape(-1, 1))
price_standardized_X_test = price_scalar.transform(X_test['price'].values.reshape(-1, 1))"""
price_standardized_X_train = X_train['price'].values.reshape(-1, 1)
price_standardized_X_cv = X_cv['price'].values.reshape(-1, 1)
price_standardized_X_test = X_test['price'].values.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
2.3 Make Data Model Ready: encoding essay and project_title Bag of words
###Code
# We are considering only the words which appeared in at least 10 documents(rows or projects).
vectorizer = CountVectorizer(min_df=10)
essay_bow_X_train = vectorizer.fit_transform(X_train["essay"])
essay_bow_X_cv = vectorizer.transform(X_cv["essay"])
essay_bow_X_test = vectorizer.transform(X_test["essay"])
print("essay_bow_X_train : {} \nessay_bow_X_cv : {} \nessay_bow_X_test : {}".\
format(essay_bow_X_train.shape,essay_bow_X_cv.shape,essay_bow_X_test.shape))
# We are considering only the words which appeared in at least 10 documents(rows or projects).
vectorizer = CountVectorizer(min_df=10)
project_title_bow_X_train = vectorizer.fit_transform(X_train["project_title"])
project_title_bow_X_cv = vectorizer.transform(X_cv["project_title"])
project_title_bow_X_test = vectorizer.transform(X_test["project_title"])
print("project_title_bow_X_train : {} \nproject_title_bow_X_cv : {} \nproject_title_bow_X_test : {}".\
format(project_title_bow_X_train.shape,project_title_bow_X_cv.shape,project_title_bow_X_test.shape))
###Output
project_title_bow_X_train : (32000, 1458)
project_title_bow_X_cv : (8000, 1458)
project_title_bow_X_test : (10000, 1458)
###Markdown
TFIDF vectorizer
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10)
essay_tfidf_X_train = vectorizer.fit_transform(X_train["essay"])
essay_tfidf_X_cv = vectorizer.transform(X_cv["essay"])
essay_tfidf_X_test = vectorizer.transform(X_test["essay"])
print("essay_tfidf_X_train : {} \nessay_tfidf_X_cv : {} \nessay_tfidf_X_test : {}".\
format(essay_tfidf_X_train.shape,essay_tfidf_X_cv.shape,essay_tfidf_X_test.shape))
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10)
project_title_tfidf_X_train = vectorizer.fit_transform(X_train["project_title"])
project_title_tfidf_X_cv = vectorizer.transform(X_cv["project_title"])
project_title_tfidf_X_test = vectorizer.transform(X_test["project_title"])
print("project_title_tfidf_X_train : {} \nproject_title_tfidf_X_cv : {} \nproject_title_tfidf_X_test : {}".\
format(project_title_tfidf_X_train.shape,project_title_tfidf_X_cv.shape,project_title_tfidf_X_test.shape))
###Output
project_title_tfidf_X_train : (32000, 1458)
project_title_tfidf_X_cv : (8000, 1458)
project_title_tfidf_X_test : (10000, 1458)
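###Markdown
In contrast to the raw counts above, tf-idf down-weights terms that occur in many projects and up-weights terms that are frequent within a document but rare overall; with scikit-learn's default settings the inverse document frequency is $\mathrm{idf}(t) = \ln\frac{1 + n}{1 + \mathrm{df}(t)} + 1$ and each row vector is L2-normalised.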
###Markdown
2.4 Applying NB() on the different kinds of featurization mentioned in the instructions. For every model that you work on, make sure you do step 2 and step 3 of the instructions.
###Code
categorical_numerical_features_X_train=[categories_one_hot_X_train,sub_categories_one_hot_X_train,school_state_one_hot_X_train,\
teacher_prefix_one_hot_X_train,grade_one_hot_X_train,price_standardized_X_train]
categorical_numerical_features_X_cv=[categories_one_hot_X_cv,sub_categories_one_hot_X_cv,school_state_one_hot_X_cv,\
teacher_prefix_one_hot_X_cv,grade_one_hot_X_cv,price_standardized_X_cv]
categorical_numerical_features_X_test=[categories_one_hot_X_test,sub_categories_one_hot_X_test,school_state_one_hot_X_test,\
teacher_prefix_one_hot_X_test,grade_one_hot_X_test,price_standardized_X_test]
categorical_numerical_features_X_train_stack= hstack(categorical_numerical_features_X_train)
categorical_numerical_features_X_cv_stack = hstack(categorical_numerical_features_X_cv)
categorical_numerical_features_X_test_stack= hstack(categorical_numerical_features_X_test)
###Output
_____no_output_____
###Markdown
Creating the list of features in the same order in which the data is stacked in SET 1
###Code
categorical_numerical_feature_list=['Warmth', 'Care_Hunger', 'History_Civics', 'Music_Arts', 'AppliedLearning', 'SpecialNeeds', 'Health_Sports', 'Math_Science', 'Literacy_Language',\
'Economics', 'CommunityService', 'FinancialLiteracy', 'ParentInvolvement', 'Extracurricular', 'Civics_Government', 'ForeignLanguages', 'NutritionEducation', 'Warmth', 'Care_Hunger', 'SocialSciences', 'PerformingArts', 'CharacterEducation', 'TeamSports', 'Other', 'College_CareerPrep', 'Music', 'History_Geography', 'Health_LifeScience', 'EarlyDevelopment', 'ESL', 'Gym_Fitness', 'EnvironmentalScience', 'VisualArts', 'Health_Wellness', 'AppliedSciences', 'SpecialNeeds', 'Literature_Writing', 'Mathematics', 'Literacy',\
'ak', 'al', 'ar', 'az', 'ca', 'co', 'ct', 'dc', 'de', 'fl', 'ga', 'hi', 'ia', 'id', 'il', 'in', 'ks', 'ky', 'la', 'ma', 'md', 'me', 'mi', 'mn', 'mo', 'ms', 'mt', 'nc', 'nd', 'ne', 'nh', 'nj', 'nm', 'nv', 'ny', 'oh', 'ok', 'or', 'pa', 'ri', 'sc', 'sd', 'tn', 'tx', 'ut', 'va', 'vt', 'wa', 'wi', 'wv', 'wy',\
'Dr', 'Mr', 'Mrs', 'Ms', 'Teacher',\
'GradeA','GradeB','GradeC','GradeD',\
'Price']
vectorizer_essay_bow = CountVectorizer(min_df=10)
essay_bow_ = vectorizer_essay_bow.fit_transform(X_train["essay"])
essay_bow_featuers = vectorizer_essay_bow.get_feature_names()
len(essay_bow_featuers)
vectorizer_project_title_bow = CountVectorizer(min_df=10)
essay_bow_ = vectorizer_project_title_bow.fit_transform(X_train["project_title"])
project_title_bow_featuers = vectorizer_project_title_bow.get_feature_names()
len(project_title_bow_featuers)
all_featuers = np.hstack((categorical_numerical_feature_list,essay_bow_featuers,project_title_bow_featuers))
print(len(all_featuers))
###Output
11629
###Markdown
2.4.1 Applying Naive Bayes on BOW, SET 1
###Code
print("Categorical_numerical_features_X_train_stack :{0}\nCategorical_numerical_features_X_cv_stack :{1}\
\nCategorical_numerical_features_X_test_stack :{2}\
\nEssay_bow_X_train :{3}\nEssay_bow_X_cv :{4}\nEssay_bow_X_test :{5}\
\nProject_title_bow_X_train :{6}\nProject_title_bow_X_cv :{7}\nProject_title_bow_X_test :{8}".\
format(categorical_numerical_features_X_train_stack.shape,\
categorical_numerical_features_X_cv_stack.shape,\
categorical_numerical_features_X_test_stack.shape,\
essay_bow_X_train.shape,essay_bow_X_cv.shape,essay_bow_X_test.shape,\
project_title_bow_X_train.shape,project_title_bow_X_cv.shape,project_title_bow_X_test.shape))
###Output
Categorical_numerical_features_X_train_stack :(32000, 100)
Categorical_numerical_features_X_cv_stack :(8000, 100)
Categorical_numerical_features_X_test_stack :(10000, 100)
Essay_bow_X_train :(32000, 10071)
Essay_bow_X_cv :(8000, 10071)
Essay_bow_X_test :(10000, 10071)
Project_title_bow_X_train :(32000, 1458)
Project_title_bow_X_cv :(8000, 1458)
Project_title_bow_X_test :(10000, 1458)
###Markdown
categorical, numerical features + project_title (BOW) + preprocessed_essay (BOW). Since hstack returns a sparse COO matrix, we convert the stacked features into a dense array before fitting Naive Bayes.
###Code
Set1_train=hstack((categorical_numerical_features_X_train_stack,essay_bow_X_train,project_title_bow_X_train)).toarray()
Set1_cv=hstack((categorical_numerical_features_X_cv_stack,essay_bow_X_cv,project_title_bow_X_cv)).toarray()
Set1_test=hstack((categorical_numerical_features_X_test_stack,essay_bow_X_test,project_title_bow_X_test)).toarray()
Set1_train.shape
def batch_predict(clf, data):
    # roc_auc_score(y_true, y_score): the 2nd parameter should be scores for the positive class,
    # not hard predictions; log-probabilities work because log is monotonic, so the ranking (and AUC) is unchanged
y_data_pred = []
tr_loop = data.shape[0] - data.shape[0]%2000
    # e.g. if data.shape[0] is 49041, tr_loop will be 49041 - 49041%2000 = 48000
    # in this for loop we iterate in chunks of 2000 rows up to the last full chunk
for i in range(0, tr_loop, 2000):
y_data_pred.extend(clf.predict_log_proba(data[i:i+2000])[:,1])
# we will be predicting for the last data points
if (tr_loop<data.shape[0]):
y_data_pred.extend(clf.predict_log_proba(data[tr_loop:])[:,1])
return y_data_pred
#will go for alpha with wide range and with big interval .depending on plot will reduce range and interval
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
train_auc = []
cv_auc = []
Alpha = list(np.arange(10**-5,10**2,3))
for i in (Alpha):
NB = MultinomialNB(alpha=i,class_prior=[0.5,0.5])
NB.fit(Set1_train,Y_train)
y_train_pred = batch_predict(NB, Set1_train)
y_cv_pred = batch_predict(NB, Set1_cv)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
train_auc.append(roc_auc_score(Y_train,y_train_pred))
cv_auc.append(roc_auc_score(Y_cv, y_cv_pred))
plt.plot(np.log10(Alpha), train_auc, label='Train AUC')
plt.plot(np.log10(Alpha), cv_auc, label='CV AUC')
plt.scatter(np.log10(Alpha), train_auc, label='Train AUC points')
plt.scatter(np.log10(Alpha), cv_auc, label='CV AUC points')
plt.legend()
plt.xlabel("Alpha:range b/w 10^-5 and 10^2 ,interval 3")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
max_auc_index=np.argsort(cv_auc)[:len(cv_auc)-10:-1]
max_auc_index
Alpha_which_gave_max_auc=np.array(Alpha)[[max_auc_index]]
Alpha_which_gave_max_auc
max_alpha = Alpha_which_gave_max_auc[0]
max_alpha_6 = Alpha_which_gave_max_auc[6]
print("max_alpha : {}\nmax_alpha_10 : {}".format(max_alpha,max_alpha_6))
cv_auc1=max(cv_auc)
Alpha_max = Alpha[np.argmax(cv_auc)]
print("Max CV_AUC for alpha ranges between 10^-5 to 10^2 : ",cv_auc1)
print("ALPHA value which gives highest AUC : ",Alpha_max)
###Output
Max CV_AUC for alpha ranges between 10^-5 to 10^2 : 0.6640169749434155
ALPHA value which gives highest AUC : 15.00001
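###Markdown
As the task description allows, the same alpha search could also be done with `GridSearchCV` instead of the manual loop. A minimal sketch, not used for the results reported below:
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
param_grid = {'alpha': [0.00001, 0.001, 0.1, 1, 10, 100]}
nb_search = GridSearchCV(MultinomialNB(class_prior=[0.5, 0.5]), param_grid,
                         scoring='roc_auc', cv=3, return_train_score=True)
# nb_search.fit(Set1_train, Y_train)   # uncomment to run; nb_search.best_params_['alpha'] is the selected value
###Output
_____no_output_____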
###Markdown
From the above graph it is clear that the AUC is highest for alpha values between 3 and 16, so we will choose the Laplace smoothing value between 3 and 16 and plot the AUC
###Code
train_auc = []
cv_auc = []
Alpha = list(np.arange(3,16,0.1))
for i in (Alpha):
NB = MultinomialNB(alpha=i,class_prior=[0.5,0.5])
NB.fit(Set1_train,Y_train)
y_train_pred = batch_predict(NB, Set1_train)
y_cv_pred = batch_predict(NB, Set1_cv)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
train_auc.append(roc_auc_score(Y_train,y_train_pred))
cv_auc.append(roc_auc_score(Y_cv, y_cv_pred))
plt.plot(np.log10(Alpha), train_auc, label='Train AUC')
plt.plot(np.log10(Alpha), cv_auc, label='CV AUC')
plt.scatter(np.log10(Alpha), train_auc, label='Train AUC points')
plt.scatter(np.log10(Alpha), cv_auc, label='CV AUC points')
plt.legend()
plt.xlabel("Alpha:range b/w 3 & 16")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
cv_auc2=max(cv_auc)
Alpha_max_value = Alpha[np.argmax(cv_auc)] #alpha value which gave high AUC
print("Max CV_AUC for alpha ranges between 3 to 16 :", cv_auc2)
print("ALPHA value which gives highest AUC : ",Alpha_max_value)
###Output
Max CV_AUC for alpha ranges between 3 to 16 : 0.664045730477113
ALPHA value which gives highest AUC : 14.900000000000011
###Markdown
From the above graph we can see that the maximum CV AUC is 0.664 at an alpha value of 14.900
###Code
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
NB_best_alpha_model = MultinomialNB(alpha=Alpha_max_value,class_prior=[0.5,0.5])
NB_best_alpha_model.fit(Set1_train,Y_train)
y_train_pred = batch_predict(NB_best_alpha_model, Set1_train)
FPR_Tr,TPR_Tr,TH_Tr = roc_curve(y_true=Y_train,y_score=y_train_pred)
y_test_pred = batch_predict(NB_best_alpha_model, Set1_test)
FPR_te,TPR_te,TH_te = roc_curve(y_true=Y_test,y_score=y_test_pred)
sco_tr = roc_auc_score(y_true=Y_train,y_score=y_train_pred)
sco_te = roc_auc_score(y_true=Y_test,y_score=y_test_pred)
plt.plot(FPR_Tr,TPR_Tr,label = ("Train_Curve:",sco_tr))
plt.plot(FPR_te,TPR_te,label = ("Test_Curve:",sco_te))
plt.title("ROC_curve for hyperperamater of alpha=14.900000000000011")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.legend()
plt.grid()
#confusion matrix with predict function
from sklearn.metrics import confusion_matrix
confusion= confusion_matrix(y_true=Y_test,y_pred=NB_best_alpha_model.predict(Set1_test))
confusion
# we are writing our own predict function with a chosen threshold
# we pick the threshold that maximizes tpr*(1-fpr), i.e. a high tpr together with a low fpr
def predict(proba, threshould, fpr, tpr):
t = threshould[np.argmax(tpr*(1-fpr))]
# (tpr*(1-fpr)) will be maximum if your fpr is very low and tpr is very high
print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t,3))
predictions = []
for i in proba:
if i>=t:
predictions.append(1)
else:
predictions.append(0)
return predictions
import seaborn
confusion_mat= pd.DataFrame(metrics.confusion_matrix(Y_test, predict(y_test_pred, TH_te, FPR_te, TPR_te)))
seaborn.heatmap(confusion_mat,annot=True, fmt="d",xticklabels=["Pred:NO","Pred:YES"],yticklabels=["Actual:NO","Actual:YES"])
plt.title("Confusion matrix for Test data")
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
print(confusion_matrix(Y_train, predict(y_train_pred, TH_Tr, FPR_Tr, TPR_Tr)))
print("Test confusion matrix")
print(confusion_matrix(Y_test, predict(y_test_pred, TH_te, FPR_te, TPR_te)))
###Output
====================================================================================================
Train confusion matrix
the maximum value of tpr*(1-fpr) 0.4161849937606353 for threshold -0.0
[[ 3406 1719]
[10045 16830]]
Test confusion matrix
the maximum value of tpr*(1-fpr) 0.3869886803543356 for threshold -0.0
[[1067 534]
[3522 4877]]
###Markdown
2.4.1.1 Top 10 important features of positive class from SET 1
###Code
positive=list(np.argsort((NB_best_alpha_model.feature_log_prob_)[1]))
positive.reverse()
positive_featuers=np.array(all_featuers)[np.array(positive[:10])]
positive_featuers
np.array(positive[:10])
###Output
_____no_output_____
###Markdown
2.4.1.2 Top 10 important features of negative class from SET 1
###Code
negetive=list(np.argsort((NB_best_alpha_model.feature_log_prob_)[0]))
negetive.reverse()
negetive_featuers=np.array(all_featuers)[np.array(negetive[:10])]
negetive_featuers
#index of top 10 negative class features
np.array(negetive[:10])
NB_best_alpha_model.feature_count_[0][np.array(negetive[:10])]
###Output
_____no_output_____
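###Markdown
Equivalently, the two top-10 lookups above can be written with a reversed argsort, assuming `NB_best_alpha_model` and `all_featuers` from the cells above:
###Code
import numpy as np
# Indices of the 10 largest log-probabilities per class, highest first
top_positive = np.argsort(NB_best_alpha_model.feature_log_prob_[1])[::-1][:10]
top_negative = np.argsort(NB_best_alpha_model.feature_log_prob_[0])[::-1][:10]
top_positive_features = np.array(all_featuers)[top_positive]
top_negative_features = np.array(all_featuers)[top_negative]
###Output
_____no_output_____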
###Markdown
2.4.2 Applying Naive Bayes on TFIDF, SET 2
###Code
print("Categorical_numerical_features_X_train_stack :{0}\nCategorical_numerical_features_X_cv_stack :{1}\
\nCategorical_numerical_features_X_test_stack :{2}\
\nEssay_tfidf_X_train :{3}\nEssay_tfidf_X_cv :{4}\nEssay_tfidf_X_test :{5}\
\nProject_title_tfidf_X_train :{6}\nProject_title_tfidf_X_cv :{7}\nProject_title_tfidf_X_test :{8}".\
format(categorical_numerical_features_X_train_stack.shape,\
categorical_numerical_features_X_cv_stack.shape,\
categorical_numerical_features_X_test_stack.shape,\
essay_tfidf_X_train.shape,essay_tfidf_X_cv.shape,essay_tfidf_X_test.shape,\
project_title_tfidf_X_train.shape,project_title_tfidf_X_cv.shape,project_title_tfidf_X_test.shape))
Set2_train=hstack((categorical_numerical_features_X_train_stack,essay_tfidf_X_train,project_title_tfidf_X_train)).toarray()
Set2_cv=hstack((categorical_numerical_features_X_cv_stack,essay_tfidf_X_cv,project_title_tfidf_X_cv)).toarray()
Set2_test=hstack((categorical_numerical_features_X_test_stack,essay_tfidf_X_test,project_title_tfidf_X_test)).toarray()
###Output
_____no_output_____
###Markdown
As we did for Set 1, we start with a wide range of hyperparameter values and then narrow the range down based on the outcome
###Code
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
train_auc = []
cv_auc = []
Alpha = list(np.arange(10**-5,10**2,3))
for i in (Alpha):
NB = MultinomialNB(alpha=i,class_prior=[0.5,0.5])
NB.fit(Set2_train,Y_train)
y_train_pred = batch_predict(NB, Set2_train)
y_cv_pred = batch_predict(NB, Set2_cv)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
train_auc.append(roc_auc_score(Y_train,y_train_pred))
cv_auc.append(roc_auc_score(Y_cv, y_cv_pred))
plt.plot(np.log10(Alpha), train_auc, label='Train AUC')
plt.plot(np.log10(Alpha), cv_auc, label='CV AUC')
plt.scatter(np.log10(Alpha), train_auc, label='Train AUC points')
plt.scatter(np.log10(Alpha), cv_auc, label='CV AUC points')
plt.legend()
plt.xlabel("Alpha:range b/w 10^-5 and 10^2 ,interval 3")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
max_auc_index=np.argsort(cv_auc)[:len(cv_auc)-10:-1]
max_auc_index
Alpha_which_gave_max_auc=np.array(Alpha)[[max_auc_index]]
Alpha_which_gave_max_auc
max_alpha = Alpha_which_gave_max_auc[0]
max_alpha_6 = Alpha_which_gave_max_auc[6]
print("max_alpha : {}\nmax_alpha_6 : {}".format(max_alpha,max_alpha_6))
# compute the best alpha from the full list of CV AUCs before collapsing it to a single value
Alpha_max = Alpha[np.argmax(cv_auc)]
cv_auc_max = max(cv_auc)
print("Max CV_AUC for alpha ranges between 10^-5 to 10^2 : ",cv_auc_max)
print("ALPHA value which gives highest AUC : ",Alpha_max)
###Output
Max CV_AUC for alpha ranges between 10^-5 to 10^2 : 0.6343524178291744
ALPHA value which gives highest AUC : 1e-05
###Markdown
The AUC looks highest below an alpha of 2, so we will search over low values to find the right hyperparameter
###Code
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
train_auc = []
cv_auc = []
Alpha = list(np.arange(0.00001,2,0.01))
for i in (Alpha):
NB = MultinomialNB(alpha=i,class_prior=[0.5,0.5])
NB.fit(Set2_train,Y_train)
y_train_pred = batch_predict(NB, Set2_train)
y_cv_pred = batch_predict(NB, Set2_cv)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
train_auc.append(roc_auc_score(Y_train,y_train_pred))
cv_auc.append(roc_auc_score(Y_cv, y_cv_pred))
plt.plot(np.log10(Alpha), train_auc, label='Train AUC')
plt.plot(np.log10(Alpha), cv_auc, label='CV AUC')
plt.scatter(np.log10(Alpha), train_auc, label='Train AUC points')
plt.scatter(np.log10(Alpha), cv_auc, label='CV AUC points')
plt.legend()
plt.xlabel("Alpha:range b/w 0.00001 and 2 ,interval 0.01")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
cv_auc3=max(cv_auc)
Alpha_max_value = Alpha[np.argmax(cv_auc)] #alpha value which gave high AUC
print("max CV_AUC for alpha ranges between 0.00001 to 1.2 : ",cv_auc3)
print("ALPHA value which gives highest AUC : ",Alpha_max_value)
NB_best_alpha_mode2 = MultinomialNB(alpha=Alpha_max_value,class_prior=[0.5,0.5])
NB_best_alpha_mode2.fit(Set2_train,Y_train)
y_train_pred = batch_predict(NB_best_alpha_mode2, Set2_train)
FPR_Tr,TPR_Tr,TH_Tr = roc_curve(y_true=Y_train,y_score=y_train_pred)
y_test_pred = batch_predict(NB_best_alpha_mode2, Set2_test)
FPR_te,TPR_te,TH_te = roc_curve(y_true=Y_test,y_score=y_test_pred)
sco_tr = roc_auc_score(y_true=Y_train,y_score=y_train_pred)
sco_te = roc_auc_score(y_true=Y_test,y_score=y_test_pred)
plt.plot(FPR_Tr,TPR_Tr,label = ("Train_Curve:",sco_tr))
plt.plot(FPR_te,TPR_te,label = ("Test_Curve:",sco_te))
plt.title("ROC_curve for hyperperamater of alpha=1.15801")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.legend()
plt.grid()
# we are writing our own function for predict, with a defined threshold
# we will pick a threshold that will give the least fpr
def predict(proba, threshould, fpr, tpr):
t = threshould[np.argmax(tpr*(1-fpr))]
# (tpr*(1-fpr)) will be maximum if your fpr is very low and tpr is very high
print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t,3))
predictions = []
for i in proba:
if i>=t:
predictions.append(1)
else:
predictions.append(0)
return predictions
confusion_mat= pd.DataFrame(metrics.confusion_matrix(Y_test, predict(y_test_pred, TH_te, FPR_te, TPR_te)))
seaborn.heatmap(confusion_mat,annot=True, fmt="d",xticklabels=["Pred:NO","Pred:YES"],yticklabels=["Actual:NO","Actual:YES"])
plt.title("Confusion matrix for Test data")
###Output
the maximum value of tpr*(1-fpr) 0.38178022888569985 for threshold -0.346
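###Markdown
As a quick sanity check, here is a minimal sketch (not part of the original analysis) of inspecting precision and recall at the threshold chosen above, assuming `y_test_pred`, `Y_test`, `TH_te`, `FPR_te` and `TPR_te` are the objects computed in the cells above:
###Code
from sklearn.metrics import precision_score, recall_score
# labels obtained at the tpr*(1-fpr)-optimal threshold picked by our predict() helper
test_labels = predict(y_test_pred, TH_te, FPR_te, TPR_te)
print("precision :", precision_score(Y_test, test_labels))
print("recall    :", recall_score(Y_test, test_labels))
###Output
_____no_output_____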
###Markdown
2.4.2.1 Top 10 important features of positive class from SET 2
###Code
positive=list(np.argsort((NB_best_alpha_mode2.feature_log_prob_)[1]))
positive.reverse()
positive_featuers=np.array(all_featuers)[np.array(positive[:10])]
positive_featuers
np.array(positive[:10])
###Output
_____no_output_____
###Markdown
2.4.2.2 Top 10 important features of negative class from SET 2
###Code
negetive=list(np.argsort((NB_best_alpha_mode2.feature_log_prob_)[0]))
negetive.reverse()
negetive_featuers=np.array(all_featuers)[np.array(negetive[:10])]
negetive_featuers
np.array(negetive[:10])
###Output
_____no_output_____
###Markdown
3. Conclusions
###Code
from prettytable import PrettyTable
x = PrettyTable()
x.field_names = ["Feature sets","Model" ,"Hyperparameter" ,"Train AUC", "CV AUC", "Test AUC"]
x.add_row(["BOW","Brut" ,14.9000,0.687, 0.664, 0.658])
x.add_row(["TFIDF","Brut" ,1.16001 ,0.702, 0.644, 0.650])
print(x)
###Output
+--------------+-------+----------------+-----------+--------+----------+
| Feature sets | Model | Hyperparameter | Train AUC | CV AUC | Test AUC |
+--------------+-------+----------------+-----------+--------+----------+
| BOW | Brut | 14.9 | 0.687 | 0.664 | 0.658 |
| TFIDF | Brut | 1.16001 | 0.702 | 0.644 | 0.65 |
+--------------+-------+----------------+-----------+--------+----------+
|
S1_A_Py_dataStructures.ipynb | ###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, the data structures provided by one programming language are not the same as in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a. **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Unlike R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
# python start with position zero
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
DetailStudent[-1] # in Python, -1 returns the LAST element; in R, [-1] would instead drop the first one
# this behavior is unlike R
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
# Python & R are running sequential
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elements[:2]+elements[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
###Output
_____no_output_____
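###Markdown
Note that `set()` does not keep the original order; if you want a predictable, readable result, sorting the unique values is a common follow-up (a small sketch):
###Code
# unique weekdays, alphabetically sorted (set() alone returns them in arbitrary order)
sorted(set(['M','T','W','Th','S','Su','Su']))
###Output
_____no_output_____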
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
vector3
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
###Output
_____no_output_____
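###Markdown
A small sketch of the three equivalent ways to build a tuple mentioned above:
###Code
t1=('a','b','c')         # with parentheses
t2=tuple(['a','b','c'])  # with the tuple() constructor (here, from a list)
t3='a','b','c'           # with nothing, just commas
t1==t2==t3
###Output
_____no_output_____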
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
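###Markdown
Besides access by key, dicts expose their keys and values directly, and adding a new entry is just an assignment to a new key (a small sketch reusing the dict above; the 'country' key is only illustrative):
###Code
DetailStudentDict.keys()              # all keys
DetailStudentDict.values()            # all values
DetailStudentDict['country']='USA'    # adding a new key:value pair (illustrative)
DetailStudentDict
###Output
_____no_output_____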
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
###Output
_____no_output_____
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
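###Markdown
If you do want element-wise arithmetic while keeping plain lists (without numpy), a comprehension over zip() is the usual idiom (a small sketch):
###Code
# element-wise sum of two plain lists
[a+b for a,b in zip(numbersL1,numbersL2)]
###Output
_____no_output_____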
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
names 4 non-null object
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
dtypes: int64(1), object(3)
memory usage: 208.0+ bytes
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy (remember the difference between shallow and deep) of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
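###Markdown
As a quick reminder of the shallow vs deep distinction, here is a small sketch with plain nested lists and the standard `copy` module (not part of the original material):
###Code
import copy
nested=[[1,2],[3,4]]
shallow=copy.copy(nested)      # new outer list, but the inner lists are shared with 'nested'
deep=copy.deepcopy(nested)     # fully independent copy
shallow[0][0]=99               # this also changes nested[0][0]
deep[1][0]=77                  # 'nested' is untouched by this one
nested
###Output
_____no_output_____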
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,0]=23 # change is immediate! (no warning)
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by droping it:
studentsCopy.drop(['ages'],1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0), so axis=0 means row, axis=1 means column
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(meals) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it is not smart enough to report the count inside of a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload the notebook to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
# dict
excise_data={'names':names, 'woman':woman,'ages':ages, 'country':country, 'education':education}
excise_data
#create a data Frame
from pandas import DataFrame as df # calling a function from the library and renaming the function name
friends=df.from_dict(excise_data)
friends
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
friends[friends.ages==max(friends.ages)].names
# How many people are 32?
friends[friends.ages==32]
# How many are not Peruvian? (use two different codes)
#1
DangeourousPlaces=["Peru"]
friends[~friends.country.isin(DangeourousPlaces)]
# len(friends[~friends.country.isin(DangeourousPlaces)])
#2
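# one possible second way (added sketch): count with a boolean sum instead of filtering
(friends.country!="Peru").sum()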
# Who is the person with the highest level of education?
Highest_Educ=["PhD"]
friends[friends.education.isin(Highest_Educ)]
# what is the sex of the oldest person in the group?.
friends[friends.ages==max(friends.ages)].woman
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, the data structures provided by one programming language are not the same as in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a. **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Unlike R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
# [ ] creates a list, = is assignment
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
# : means before, 2 is the index
DetailStudent[-1] # in Python, -1 returns the LAST element; in R, [-1] would instead drop the first one
#[] means position, -1 means to give me the last element of the list
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
#altering the first element to be Alfred rather than Fred
#Makes changes without "warning" so you need to be careful, pay attention
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elements[:2]+elements[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
#Python points out the line in which there is an error! this is really nice
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
#set gives us the unique values
###Output
_____no_output_____
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
#import is same as library in R - means to "activate" the thing that is on your computer already
#np.array means "array" is a function in np - the name we gave to numpy when we imported it
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
#numpy is how you do algebra, math, stats - R is ready to go because it was designed for use, Python needs a bit more coaxing
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
#not very important to learn
vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, the data structures provided by one programming language are not the same as in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a. **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Unlike R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
DetailStudent[-1] # in Python, -1 returns the LAST element; in R, [-1] would instead drop the first one
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elements[:2]+elements[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
###Output
_____no_output_____
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
###Output
_____no_output_____
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
_____no_output_____
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,0]=23 # change is immediate! (no warning)
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by droping it:
studentsCopy.drop(['ages'],1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(meals) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it is not smart enough to report the count inside of a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload the notebook to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
# How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, the data structures provided by one programming language are not the same as in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a. **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Unlike R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
DetailStudent[-1] # in Python, -1 returns the LAST element; in R, [-1] would instead drop the first one
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elements[:2]+elements[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
###Output
_____no_output_____
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
###Output
_____no_output_____
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
names 4 non-null object
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
dtypes: int64(1), object(3)
memory usage: 208.0+ bytes
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,0]=23 # change is immediate! (no warning)
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by droping it:
studentsCopy.drop(['ages'],1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(meals) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it is not smart enough to report the count inside of a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
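###Markdown
If you do need the count inside a composite element, index into it first (a small sketch with the dict above):
###Code
len(aDict['language_spoken'])   # number of languages stored in the nested list
###Output
_____no_output_____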
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
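###Markdown
A hedged alternative for the queries above (added sketch, using the same _students_ data frame): pandas also offers _idxmax_ and _nlargest_, which avoid computing the maximum twice.
###Code
# name of the oldest person, via the index label of the maximum age:
students.loc[students.ages.idxmax(), 'names']
# the whole row of the oldest person:
students.nlargest(1, 'ages')
# count how many rows match a condition:
(students.country != "Norway").sum()
###Output
_____no_output_____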
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload them to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
# How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
#parentheses create the tuple
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
#by default Python creates tuples
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
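###Markdown
A small sketch of the usual workaround (added note): if you really need to "modify" a tuple, convert it to a list, change the list, and build a new tuple from it.
###Code
# tuples are immutable, so we build a NEW tuple instead of editing in place:
asList=list(DetailStudentbTuple) # tuple -> list
asList[1]=50                     # lists can be changed
DetailStudentbTuple=tuple(asList) # a brand-new tuple bound to the same name
DetailStudentbTuple
###Output
_____no_output_____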
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
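###Markdown
A few extra dict operations (added sketch): _get_ returns a default instead of an error when a key is missing, and new keys can be added by plain assignment.
###Code
# safe access: returns the given default if the key does not exist:
DetailStudentDict.get('country', 'unknown')
# adding a new key is just an assignment:
DetailStudentDict['country']='USA'
# keys and values can be listed explicitly:
list(DetailStudentDict.keys()), list(DetailStudentDict.values())
###Output
_____no_output_____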
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
#The set of a dictionary lists the keys rather than the values
###Output
_____no_output_____
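###Markdown
If what you actually wanted were the unique *values* of the dict (added sketch), apply _set_ to _.values()_ instead of to the dict itself.
###Code
# unique values, not keys:
set(dictTest.values())
# unique (key, value) pairs:
set(dictTest.items())
###Output
_____no_output_____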
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
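###Markdown
The same lexicographical rule applies to plain strings (added sketch), which is why sorting words gives dictionary order.
###Code
# character by character, from left to right:
'banana' > 'apple'
# sorting uses the same rule:
sorted(['pear','apple','banana'])
###Output
_____no_output_____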
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
#You're not adding the values, you are concatenating the two lists
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
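###Markdown
If you do want element-wise arithmetic on plain lists (added sketch), a list comprehension with _zip_, or converting to numpy arrays, are the usual options.
###Code
# element-wise sum of two lists with a comprehension:
[a+b for a,b in zip(numbersL1,numbersL2)]
# or let numpy do it:
import numpy as np
np.array(numbersL1)+np.array(numbersL2)
###Output
_____no_output_____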
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
names 4 non-null object
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
dtypes: int64(1), object(3)
memory usage: 208.0+ bytes
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want to get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,1]=23 # change is immediate! (no warning)
#[row,column] - column 1 is 'ages'
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by dropping it:
studentsCopy.drop(['ages'],axis=1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(students) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it does not report the count inside a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
#Within the students dataframe, look at the ages column, find the max of the ages, and give me the name
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
#Within students DF, who is above 30 and from China - gives the entire row
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
#who has a country that is within the DangerousPlaces list
students[~students.country.isin(DangeourousPlaces)] # the opposite
#tilde ~ gives you "not"
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
#Among students who have a Bachelor's, sort by ages in increasing order, and then take the "tail" - whoever is in the last row
###Output
_____no_output_____
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload them to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
# How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, data structures provided by a programming language are not the same in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Different from R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
DetailStudent[-1] # last element; in R, [-1] does not give the last element - it would drop the first one instead
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
Classroom
# Python updates objects while R does not
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elementsA[:2]+elementsA[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
###Output
_____no_output_____
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
###Output
_____no_output_____
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
names 4 non-null object
dtypes: int64(1), object(3)
memory usage: 200.0+ bytes
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want to get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,0]=23 # change is immediate! (no warning)
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by dropping it:
studentsCopy.drop(['ages'],axis=1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(students) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it does not report the count inside a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload them to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
# How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
names 4 non-null object
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
dtypes: int64(1), object(3)
memory usage: 208.0+ bytes
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
###Markdown
Deleting a column:
###Code
# This is what you want to get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,1]=23 # column 1 is 'ages'; change is immediate! (no warning)
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by dropping it:
studentsCopy.drop(['ages'],axis=1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(students) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it does not report the count inside a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload them to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
students[(students.ages>30)]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
len(students[students.ages==32]) # How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, data structures provided by a programming language are not the same in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. A tuple is a sequence of immutable Python objects. Tuples are sequences, just like lists. The differences between tuples and lists are that tuples cannot be changed, unlike lists, and that tuples use parentheses, whereas lists use square brackets. Creating a tuple is as simple as putting different comma-separated values. When we are talking about data structures, we are talking about containers. Containers are the way to organize data. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Different from R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
#Python starts counting from 0, while R starts from 1
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element of the list
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
#asks to show everything before the 3rd element
DetailStudent[-1] # last element; in R, [-1] does not give the last element - it would drop the first one instead
#gives the last element; the minus sign means Python counts from the end
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
#making changes to a specific element; lists are flexible, but beware, there is no warning of changes!
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elementsA[:2]+elementsA[3:]
#word[0:2] # characters from position 0 (included) to 2 (excluded)
#word[2:5] # characters from position 2 (included) to 5 (excluded)
#Slicing:
#Note how the start is always included, and the end always excluded. This makes sure that s[:i] + s[i:] is always equal to s:
#word[:2] + word[2:]
#word[:4] + word[4:]
#word[:2] # character from the beginning to position 2 (excluded)
#word[4:] # characters from position 4 (included) to the end
#word[-2:] # characters from the second-last (included) to the end
#+---+---+---+---+---+---+
#| P | y | t | h | o | n |
#+---+---+---+---+---+---+
#0 1 2 3 4 5 6
#-6 -5 -4 -3 -2 -1
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
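###Markdown
Two related tools worth knowing (added sketch): _pop_ removes by position **and** returns the removed value, and _index_ finds the position of a value before you delete it.
###Code
elementsC=[1,2,3,4]
removed=elementsC.pop(2) # removes the third element and keeps it
removed, elementsC
# find the position of a value first, then remove by that position:
pos=elementsC.index(4)
del elementsC[pos]
elementsC
###Output
_____no_output_____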
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # be careful!... it is gone!
#Python tells you where the mistake is: a NameError, the name is not defined, because it was deleted
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
#how to get unique values. Python is doing it in the efficient way
###Output
_____no_output_____
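###Markdown
One caveat with _set_ (added note): it does not keep the original order. If order matters, _dict.fromkeys_ (Python 3.7+) preserves the first appearance of each value.
###Code
weekdays=['M','T','W','Th','S','Su','Su']
# unique values in their original order:
list(dict.fromkeys(weekdays))
# unique values, sorted alphabetically instead:
sorted(set(weekdays))
###Output
_____no_output_____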
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
#import means activate the library I already have
#as np - you can give a name of the library
vector1=np.array(['b','c','d']) #np - is a name; array - is a function of the np
vector2=np.array(['a','b','d'])
vector1>vector2 #this line compares the elements
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
#cannot compare a length-3 vector to a length-2 vector; the shapes do not match
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
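###Markdown
If the goal was to combine (not add) arrays of different lengths (added sketch), _np.concatenate_ is the tool; element-wise addition really does require matching shapes or a length-1 array.
###Code
# joining the two arrays end to end:
np.concatenate([numbers1, numbers4])
# padding the shorter one (here with a zero) is another option - an assumption about what you want:
np.concatenate([numbers4, np.zeros(1, dtype=int)]) + numbers1
###Output
_____no_output_____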
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind of value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
#difference between a list and a tuple is the brackets: lists use [], tuples use ()
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
#a tuple can be written with or without parentheses, just use commas
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want your object to be altered:
###Code
DetailStudentbTuple[1]=50
#in Python you cannot change a tuple; changing or deleting its elements is not possible
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
#dictionary is created by {}
#dicts are structured as key: value pairs
#example with languages spoken: it's not efficient to use a list if a person speaks 5-10 languages
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
#only the result of the last call in a cell is displayed automatically
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
#len - function, length of the data structure
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
#set - function, keep unique values
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
# with a dict, the set function shows the unique keys, not the unique values
###Output
_____no_output_____
###Markdown
Notice the use of comparisons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
This will work the same for text:
###Code
list1=np.array(['b','c','d'])
list2=np.array(['a','b','d'])
list1>list2
###Output
_____no_output_____
###Markdown
If we use lists, you get a similar behavior (not implemented in base R):
###Code
list1=['b','c','d']
list2=['a','b','d']
list1>list2
###Output
_____no_output_____
###Markdown
Python is doing a simple _lexicographical ordering_, that is, they compare the first element of each list (from left to right), and report _True_ or _False_ if they differ using '>' (or '<'). It is like comparing two words:
###Code
np.array([1,2,4]) > np.array([1,2,3]) # this is true because 4>3, and the previous are equal.
[1,2,4] > [1,2,3]
# this is true because 9>8, and the previous are equal; when a difference is detected, the comparison stops.
(1,2,9,1) > (1,2,8,9,9)
# while you can not compare if sizes differ:
np.array([1,2,9,1]) > np.array([1,2,8,9,9])
###Output
_____no_output_____
###Markdown
Math operations should be taken with care:
###Code
# This will CONCATENATE:
numbersL1=[1,2,3]
numbersL2=[1,2,3]
numbersL1+numbersL2
# this won't work:
numbersL1 * numbersL2
# this will:
numbersL1 * 3
###Output
_____no_output_____
###Markdown
Due to its flexibility, lists are used pervasively in simple Python code. [Go to page beginning](beginning)____ Data Frames Data frames are containers of values. The most common analogy is a spreadsheet. To create a data frame, we need to call **pandas**:
###Code
import pandas
###Output
_____no_output_____
###Markdown
We can prepare the data frame now:
###Code
# columns of the data frame (as lists):
names=["Qing", "Françoise", "Raúl", "Bjork"]
ages=[32,33,28,30]
country=["China", "Senegal", "Spain", "Norway"]
education=["Bach", "Bach", "Master", "PhD"]
# now in a dict:
data={'names':names, 'ages':ages, 'country':country, 'education':education}
data
###Output
_____no_output_____
###Markdown
...and from dict to DataFrame:
###Code
students=pandas.DataFrame.from_dict(data)
# seeing it:
students
###Output
_____no_output_____
###Markdown
Sometimes, Python users code like this:
###Code
import pandas as pd # renaming the library
students=pd.DataFrame.from_dict(data)
students
###Output
_____no_output_____
###Markdown
Or like this:
###Code
from pandas import DataFrame as df # calling a function from the library and renaming the function name
students=df.from_dict(data)
students
###Output
_____no_output_____
###Markdown
You can set a particular column as **row name**:
###Code
students.set_index('names') # You have not changed until: students.set_index('names',inplace=True)
###Output
_____no_output_____
###Markdown
The command *type()* still works here:
###Code
type(students)
###Output
_____no_output_____
###Markdown
You can get more information on the data types like this (as _str()_ in R):
###Code
students.dtypes
###Output
_____no_output_____
###Markdown
The _info()_ function can get you more details:
###Code
students.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
names 4 non-null object
ages 4 non-null int64
country 4 non-null object
education 4 non-null object
dtypes: int64(1), object(3)
memory usage: 208.0+ bytes
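###Markdown
A couple of related inspection tools (added sketch): _describe_ summarizes the columns, and _astype_ changes a column's type when pandas guessed wrong.
###Code
# summary statistics; include='all' also covers the text columns:
students.describe(include='all')
# turning a column into another type (here: ages as float), just as an illustration:
students.ages.astype(float)
###Output
_____no_output_____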
###Markdown
The data frames in pandas behave much like in R:
###Code
#one particular column
students.names
# or
students['names'] # it is not the same as: students[['names']]
# it is not the same as:
students[['names']] # a data frame, not a column (or series)
# two columns
students.iloc[:,[1,3]]
#iloc - function of pandas, integer location; The iloc indexer for Pandas Dataframe is used for integer-location based indexing / selection by **position**.
# data.iloc[<row selection>, <column selection>]
#students.iloc[:,[1,3]] - : means all rows to select, [1,3] - means select only column 1 and 3
# Single selections using iloc and DataFrame
# Rows:
#data.iloc[0] # first row of data frame
#data.iloc[1] # second row of data frame
#data.iloc[-1] # last row of data frame
# Columns:
#data.iloc[:,0] # first column of data frame
#data.iloc[:,1] # second column of data frame
#data.iloc[:,-1] # last column of data frame
# this is also a DF
students[['country','names']]
## Using positions is the best way to get several columns:
students.iloc[:,1:4]
###Output
_____no_output_____
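###Markdown
For comparison (added sketch): _loc_ selects by **labels** (row index values and column names), while _iloc_ uses integer positions; with label slices, both ends are included.
###Code
# columns by name, all rows:
students.loc[:, ['ages','country']]
# label-based slice of columns (inclusive on both ends):
students.loc[:, 'ages':'education']
# same columns by position with iloc (end excluded):
students.iloc[:, 1:4]
###Output
_____no_output_____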
###Markdown
Deleting a column:
###Code
# This is what you want to get rid of:
byeColumns=['education']
#this would change the original: students.drop(byeColumns,axis=1,inplace=True)
studentsNoEd=students.drop(byeColumns,axis=1)
# this is a new DF
studentsNoEd
###Output
_____no_output_____
###Markdown
You can modify any values in a data frame. Let me create a **deep** copy of this data frame to play with:
###Code
studentsCopy=students.copy()
studentsCopy
###Output
_____no_output_____
###Markdown
Then,
###Code
# I can change the age of Qing to 23 replacing 32:
studentsCopy.iloc[0,1]=23 # change is immediate! (no warning)
studentsCopy
# I can reset a column as **missing**:
studentsCopy.country=None
# And, delete a column by dropping it:
studentsCopy.drop(['ages'],axis=1,inplace=True) # axis=1 is column
# Then, our copy looks like this:
studentsCopy
###Output
_____no_output_____
###Markdown
One important detail when erasing rows, is to reset the indexes:
###Code
# another copy for you to see the difference:
studentsCopy2=students.copy()
studentsCopy2
# drop third row (axis=0)
studentsCopy2.drop(2)
# resetting index after dropping
studentsCopy2.drop(2).reset_index()
#better resetting index
studentsCopy2.drop(2).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Pandas offers some practical functions:
###Code
# rows and columns
students.shape # dim(students) in R
# length:
len(students) # length in R gives number of columns, here you get number of rows.
###Output
_____no_output_____
###Markdown
There is no specific function to get number of rows/columns in pandas, but **len** is useful:
###Code
len(students.index) # or students.shape[0]
len(students.columns) # or students.shape[1]
###Output
_____no_output_____
###Markdown
Remember that you can use len with lists, tuples and data frames!...and even dictionaries (notice it gives you the count at the top level; it does not report the count inside a composite element).
###Code
aDict={'name':'John', "language_spoken":['Spanish','English']}
len(aDict)
###Output
_____no_output_____
###Markdown
You also have _tail_ and _head_ functions in Pandas, to get some top or bottom rows:
###Code
students.head(2) #and students.tail(2)
###Output
_____no_output_____
###Markdown
You can also see the column names like this:
###Code
# similar to names() in R
students.columns
###Output
_____no_output_____
###Markdown
It may look like a list, but it is not:
###Code
type(students.columns) # index type...but list functions work here!
###Output
_____no_output_____
###Markdown
If you needed a list:
###Code
students.columns.values.tolist()
# or:
# students.columns.tolist()
# this is the easiest:
# list(students)
###Output
_____no_output_____
###Markdown
Querying Data Frames: Once you have a data frame you can start writing interesting queries:
###Code
studentsCopy3=students.copy()
studentsCopy3
# Who is the oldest in the group?
students[students.ages==max(students.ages)].names
# Who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')] # parenthesis are important with '&' in Pandas!!!
# Who is not from Norway?
students[students.country!="Norway"]
# Who is from one of these?
DangeourousPlaces=["Peru", "USA", "Spain"]
students[students.country.isin(DangeourousPlaces)]
#isin is an element-wise function version of the python keyword in.
students[~students.country.isin(DangeourousPlaces)] # the opposite
# The education level of who is above 30 and from China?
students[(students.ages>30) & (students.country=='China')].education
# **Show me the data ordered by age (decreasing)?**
toSort=["ages"]
Order=[False]
students.sort_values(by=toSort,ascending=Order)
#Order false means its from biggest to smallest
#order true means from smallest to largest
# Show who is the oldest person with a Bachelor:
students[students.education=='Bach'].sort_values('ages',ascending=True).tail(1)
###Output
_____no_output_____
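###Markdown
Two more query patterns that often come up (added sketch): sorting by more than one column, and the _query_ method as a readable alternative to boolean masks.
###Code
# sort by education first, then by age within each education level (age descending):
students.sort_values(by=['education','ages'], ascending=[True,False])
# same filter as above, written with query():
students.query("ages > 30 and country == 'China'")
###Output
_____no_output_____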
###Markdown
Class exercises: In a new Jupyter notebook solve each exercise, and then upload them to GitHub. Name the notebook as 'ex_data_structures': A. Turn this into a Data Frame named "friends":
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
B. Answer the following:
###Code
# Who is the oldest person in this group of friends?
# How many people are 32?
# How many are not Peruvian? (use two different codes)
# Who is the person with the highest level of education?
# what is the sex of the oldest person in the group?
###Output
_____no_output_____
###Markdown
Homework If you have the query:
###Code
# where is the youngest male in the group from?
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part A: Data Structures in Python Programming languages use data structures to tell the computer how to organize the data we are working with. That is, data structures provided by a programming language are not the same in another one. However, in most cases, a name given to a data structure in one programming language should generally be the same in another one. It is worth keeping in mind that a particular data structure may serve for one purpose, but not for other ones. In everyday life, a book can be considered a data structure: we use it to store some kind of information. It has some advantages: it has a table of contents; it has numbers on the pages; you can take it with you; read it as long as you can see the words; and read it again as many times as you want. It has some disadvantages: you can lose it, and need to buy it again; it can deteriorate; get eaten by an insect; and so on. We are going to talk about 4 data structures in Python: 1. [List](part1) 2. [Tuple](part2) 3. [Dictionary](part3) 4. [Data Frame](part4) **Lists** and **tuples** are basic containers, while **dictionaries** (a.k.a **dicts**) could be considered less simple and with a different 'philosophy'. **Data frames** are complex structures not directly supported by base Python, but easily managed with an additional package. ____ List Lists in Python are containers of values as in **R**. The values can be of any kind (numbers or non-numbers), and even other containers (simple or complex). If we have a spreadsheet as a reference, a row is a 'natural' list. Different from R, you cannot give names to the list elements.
###Code
DetailStudent=["Fred Meyers",40,"False"]
###Output
_____no_output_____
###Markdown
The *object* 'DetailStudent' serves to store temporarily the list. To name a list, use combinations of letters and numbers (never start with a number) in a meaningful way. Typing the name of the object, now a list, will give you all the contents you saved in there:
###Code
DetailStudent
###Output
_____no_output_____
###Markdown
Python's lists are similar to vectors in R, but Python does not coerce the values (40 is still a number). Lists in Python are so flexible and simple, that it is common to have nested lists:
###Code
DetailStudentb=['Michael Nelson',60,'True']
Classroom=[DetailStudent,DetailStudentb] # list of lists
Classroom
###Output
_____no_output_____
###Markdown
You can access individual elements like this:
###Code
Classroom[1]
###Output
_____no_output_____
###Markdown
From the last result, you must always remember that Python positions start in **0**, see more examples of accessing:
###Code
DetailStudentb[0] # first element
DetailStudentb[:2] # before the index 2, that is position 0 and 1 / In R: DetailStudentb[1:2] (both limits needed)
DetailStudent[-1] # R does not work like this to get you the last element of a list...This will erase the first one
###Output
_____no_output_____
###Markdown
You can alter lists like in R (just remember positions start from 0 in Python):
###Code
DetailStudent[0]='Alfred Mayer'
DetailStudent
###Output
_____no_output_____
###Markdown
Deleting elements is easy, and we can do it:* By position* By valueLet's see. If we have these lists:
###Code
elementsA=[1,2,3,4]
elementsB=[1,2,3,4]
###Output
_____no_output_____
###Markdown
Then:
###Code
## DELETING BY POSITION
del elementsA[2] #delete third element
# then:
elementsA # alternative: elements[:2]+elements[3:]
# DELETING BY VALUE
elementsB.remove(2)
elementsB
###Output
_____no_output_____
###Markdown
Getting rid of your list:
###Code
newList=['a','b']
del newList
newList # becareful!... it is gone!
###Output
_____no_output_____
###Markdown
It is important to know how to get **unique values**:
###Code
weekdays=['M','T','W','Th','S','Su','Su']
weekdays
#then:
weekdays=list(set(weekdays))
weekdays
###Output
_____no_output_____
###Markdown
Doesn't Python have vectors? Vectors are NOT part of the basic Python, you need to use a mathematical module like **numpy**. When working with vectors, the operations of comparison ('>', '<', etc.) will work **element by element** as in R:
###Code
# For Python to work as R with vectors, you need to use the
# mathematical structure offered by numpy:
import numpy as np
vector1=np.array(['b','c','d'])
vector2=np.array(['a','b','d'])
vector1>vector2
###Output
_____no_output_____
###Markdown
If vectors have different sizes, comparison works if one has ONE element:
###Code
vector3=np.array(['a'])
vector1>vector3 # each element of vector1 compared to the only one in vector3
vector3
###Output
_____no_output_____
###Markdown
But, this confuses vectors:
###Code
vector4=np.array(['a','b'])
vector1>vector4
###Output
_____no_output_____
###Markdown
This is also valid for numbers:
###Code
# If these are our vectors:
numbers1=np.array([1,2,3])
numbers2=np.array([1,2,3])
numbers3=np.array([1])
numbers4=np.array([10,12])
###Output
_____no_output_____
###Markdown
Then, these work well:
###Code
# adding element by element:
numbers1+numbers2
# adding one value to all the elements of other vector:
numbers1+numbers3
# multiplication (element by element)!
numbers1*numbers2
# and this kind of multiplication:
numbers1*3
###Output
_____no_output_____
###Markdown
This will not work (it does not work in R either):
###Code
numbers1+numbers4
###Output
_____no_output_____
###Markdown
When dealing with vectors, the elements must share the same type. Otherwise, elements will be coerced into the same type:
###Code
numbers5=np.array([1,2,'3'])
numbers5
numbers6=np.array([1,2,3.0])
numbers6
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning) _____ Tuples Tuples are similar to lists. They can store any kind value, and even other structures:
###Code
DetailStudentaTuple=("Fred Meyers",40,"False")
###Output
_____no_output_____
###Markdown
To create tuples, you can use '()', the command *tuple()* or nothing:
###Code
DetailStudentbTuple='Michael Nelson',60,'True'
###Output
_____no_output_____
###Markdown
So, **why do we need *tuples*?** When you do not want that your object be altered:
###Code
DetailStudentbTuple[1]=50
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Dicts Dicts, on the surface, are very similar to lists in R:
###Code
# creating dict:
DetailStudentDict={'fullName':"Fred Meyers",
'age':40,
'female':False}
# seeing it:
DetailStudentDict
###Output
_____no_output_____
###Markdown
But you realize soon a difference:
###Code
DetailStudentDict[0]
###Output
_____no_output_____
###Markdown
Dicts _only_ use their **keys** to access the elements:
###Code
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Dicts do allow changing values:
###Code
DetailStudentDict['age']=41
# then:
DetailStudentDict['age']
###Output
_____no_output_____
###Markdown
Lists versus Tuples vs Dicts? __A) Make sure what you have:__You can easily know what structure you have like this:
###Code
type(DetailStudentDict)
type(DetailStudent)
type(DetailStudentaTuple)
###Output
_____no_output_____
###Markdown
__B) Make sure functions are shareable__They share many basic functions:
###Code
listTest=[1,2,3,3]
tupleTest=(1,2,3,4,4)
dictTest={'a':1,'b':2,'c':2}
len(listTest), len(tupleTest), len(dictTest)
###Output
_____no_output_____
###Markdown
Some may work slightly different:
###Code
# using set to keep unique values:
set(listTest)
set(tupleTest) # so far so good...
set(dictTest) # this MAY not be what you expected.
###Output
_____no_output_____
###Markdown
Notice the use of comparissons between lists and vectors:
###Code
numbers4=np.array([2])
numbers1<numbers4
###Output
_____no_output_____
###Markdown
spec/msi.ipynb | ###Markdown
Modular Strided IntervalsFix $N \in \{1, \ldots, 2^{23} - 1\}$.The LLVM type $\texttt{i}N$ represents $N$-bit tuples:$\texttt{i}N := \{0, 1\}^N$These tuples can be interpreted as elements of $\mathbb{Z}/{2^N}$ using the isomorphism $\phi_N$ together with an appropriate map of operations:$\phi_N \colon \texttt{i}N \rightarrow \mathbb{Z}/{2^N}, (b_0, \ldots, b_{N-1}) \mapsto \left(\sum_{k=0}^{N-1}b_k 2^k\right) + 2^N \mathbb{Z}$An abstraction of $\mathbb{Z}$ and therefore also of $\texttt{i}N$ can be obtained by a generalization of intervals over $\mathbb{Z}$, represented by the type $\mathrm{MSI}_N$ of _modular strided intervals (MSI)_:$\mathrm{MSI}_N := \{s[a, b]_N \mid a, b, s \in \mathbb{Z}/2^N\}$The semantics of an MSI is given by the concretization function $\gamma_N$:$\gamma_N \colon \mathrm{MSI}_N \rightarrow \mathcal{P}(\mathbb{Z}/{2^N}), s[a, b]_N \mapsto \{k + 2^N \mathbb{Z} \mid k \in \mathbb{Z}, a \leq k, k \leq \min \{l \in \mathbb{Z} \mid a \leq l, l \equiv b \mod 2^N\}, k \equiv a \mod s\}$
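For example (a hand-worked check against this definition, writing representatives as integers): $\gamma_4(3[2, 11]_4) = \{2, 5, 8, 11\}$, and with wraparound $\gamma_4(3[14, 4]_4) = \{14, 1, 4\}$, since the walk $14, 17, 20$ is reduced modulo $2^4$.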
###Code
from itertools import count, takewhile
from random import randint
from sympy import gcd, lcm
class MSI(object):
"""
Modular strided iterval
"""
def __init__(self, bit_width, begin, end, stride=1):
self.bit_width = bit_width
self.begin = begin
self.end = end
self.stride = stride
def __eq__(self, other):
return (self.bit_width == other.bit_width
and self.stride == other.stride
and self.begin == other.begin
and self.end == other.end)
def __repr__(self):
return f'{self.stride}[{self.begin}, {self.end}]_{{{self.bit_width}}}'
def __hash__(self):
return (self.begin+23) * (self.end+29) * (self.stride+31) % 16777216
def _tuple_repr(self):
return (self.bit_width, self.begin, self.end, self.stride)
###Output
_____no_output_____
###Markdown
Defining Functions and Predicates
###Code
# This predicate has no tests; it's an axiom.
def valid(i):
n, a, b, s = i._tuple_repr()
if n <= 0:
return False
if a < 0 or 2**n <= a:
return False
if b < 0 or 2**n <= b:
return False
if s < 0 or 2**n <= s:
return False
return True
def gamma(i):
n = i.bit_width
s = i.stride
a = i.begin
b = i.end if a <= i.end else i.end + 2**n
return {k % 2**n for k in takewhile(
lambda k: k <= b,
(a+l*s for l in count()) if s > 0 else [a]
)}
###Output
_____no_output_____
###Markdown
$\gamma_N$ is not injective, therefore normalization of MSIs is needed s.t. $\gamma_N$ restricted to $\{i \in \mathrm{MSI}_N \mid \mathrm{normal}(i)\}$ is injective. All other operations on MSIs assume that their operands are normal and are expected to return a normal MSI.Explanation of $\textrm{normal}$:Fix $s[a, b]_N \in \textrm{MSI}_N$.Case 1: Assume $s = 0$. Then $s[a, b]_N$ denotes the single value $a$, so it is normal only if $a = b$ (and conversely, $a = b$ forces $s = 0$). The remaining conditions (the end $b$ must be contained in the concretization, the representation must not be shiftable downwards by $s$ without changing the concretization, and a wrapped representation must not be replaceable by the swapped representation with stride $2^N - s$) are encoded in the predicate below.
###Code
def normal(i):
n, a, b, s = i._tuple_repr()
if s == 0 and a != b:
return False
if a == b and s != 0:
return False
if not b in gamma(i):
return False
a_ = a - s
if a_ != a and a_ >= 0 and gamma(i) == gamma(MSI(n, a_, (b-s) % 2**n, s)):
return False
if b < a and gamma(i) == gamma(MSI(n, b, a, 2**n - s)):
return False
return True
###Output
_____no_output_____
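###Markdown
A quick sanity check of `normal` (a minimal sketch with illustrative variables; the expected truth values in the comments are hand-derived from the definitions above, not recorded output):
###Code
# 2[0, 4]_{4} concretizes to {0, 2, 4} and satisfies all conditions of `normal`: expected True
ex_norm = normal(MSI(4, 0, 4, 2))
# 2[0, 5]_{4} has the same concretization, but its end 5 is not contained in it: expected False
ex_not_norm = normal(MSI(4, 0, 5, 2))
###Output
_____no_output_____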
###Markdown
Test sets of MSIs with their respective concretizations
###Code
test_MSIs_handpicked_gamma = [
# normalized
# no wraparound
# strid = 0
# begin = 0
(MSI(4, 0, 0, 0), {0}),
# begin > 0
(MSI(4, 3, 3, 0), {3}),
# strid = 1
# begin = 0
# end < 2**N-1
(MSI(4, 0, 2, 1), {0, 1, 2}),
# end = 2**N-1
(MSI(3, 0, 7, 1), {0, 1, 2, 3, 4, 5, 6, 7}),
# begin > 0
(MSI(4, 3, 4, 1), {3, 4}),
# stride > 1
# begin = 0
(MSI(4, 0, 4, 2), {0, 2, 4}),
# begin > 0
(MSI(3, 1, 7, 3), {1, 4, 7}),
(MSI(6, 6, 26, 10), {6, 16, 26}),
# wraparound
# stride = 1
(MSI(4, 14, 2, 1), {14, 15, 0, 1, 2}),
# stride > 1
(MSI(4, 11, 4, 3), {1, 4, 11, 14})]
test_MSIs_handpicked_gamma_unnormalized = [
# unnormalized
# no wraparound
# stride = 0
# begin = 0
(MSI(4, 0, 3, 0), {0}),
# begin > 0
(MSI(4, 3, 8, 0), {3}),
# stride = 1
# begin = 0
# end = begin
(MSI(4, 0, 0, 1), {0}),
# end != begin
(MSI(2, 2, 1, 1), {0, 1, 2, 3}),
# begin > 0
# end = begin
(MSI(4, 3, 3, 1), {3}),
# end != begin
(MSI(3, 5, 4, 1), {0, 1, 2, 3, 4, 5, 6, 7}),
# stride > 1
# begin = 0
(MSI(4, 0, 5, 2), {0, 2, 4}),
(MSI(4, 0, 3, 5), {0}),
# begin > 0
# end = begin - stride mod 2**N
(MSI(4, 11, 7, 4), {3, 7, 11, 15}),
# end != begin - stride mod 2**N
(MSI(6, 6, 35, 10), {6, 16, 26}),
(MSI(4, 3, 7, 5), {3}),
# wraparound
# stride = 0
(MSI(4, 5, 3, 0), {5}),
# stride = 1
(MSI(3, 5, 4, 1), {0, 1, 2, 3, 4, 5, 6, 7}),
(MSI(4, 15, 0, 1), {15, 0}),
# stride > 1
# end = begin - stride mod 2**N
(MSI(4, 10, 6, 4), {2, 6, 10, 14}),
(MSI(4, 12, 2, 6), {2, 12}),
# end != begin and != begin - stride mod 2**N
(MSI(4, 13, 2, 8), {13}),
(MSI(4, 11, 6, 3), {11, 14, 1, 4}),
(MSI(4, 10, 9, 4), {2, 6, 10, 14}),
(MSI(4, 12, 7, 6), {2, 12})
]
test_MSIs_handpicked = {}
for i, _ in test_MSIs_handpicked_gamma:
n = i.bit_width
if n not in test_MSIs_handpicked:
test_MSIs_handpicked[n] = [i]
else:
test_MSIs_handpicked[n].append(i)
print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in test_MSIs_handpicked.items()))
test_MSIs_handpicked_unnormalized = {}
for i, _ in test_MSIs_handpicked_gamma_unnormalized:
n = i.bit_width
if n not in test_MSIs_handpicked_unnormalized:
test_MSIs_handpicked_unnormalized[n] = [i]
else:
test_MSIs_handpicked_unnormalized[n].append(i)
print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in test_MSIs_handpicked_unnormalized.items()))
###Output
size: 4: 7, 3: 2, 6: 1
size: 4: 16, 2: 1, 3: 2, 6: 1
###Markdown
Tests for gamma
###Code
def test_gamma():
failed = False
for i, ks in test_MSIs_handpicked_gamma:
if not gamma(i) == ks:
failed = True
print(f'{i}: {gamma(i)}, {ks}')
if not failed:
print('succeeded')
def test_gamma_unnormalized():
failed = False
for i, ks in test_MSIs_handpicked_gamma_unnormalized:
if not gamma(i) == ks:
failed = True
print(f'{i}: {gamma(i)}, {ks}')
if not failed:
print('succeeded')
test_gamma()
test_gamma_unnormalized()
###Output
succeeded
succeeded
###Markdown
Normalization function
###Code
def normalize(i):
n, a, b, s = i._tuple_repr()
if s == 0:
b = a
else:
b_ = b if a <= b else b+2**n
b = (b_ - (b_-a) % s) % 2**n
if a == b:
s = 0
else:
if 2**n % s == 0 and (a-b) % 2**n == s:
a = a % s
b = (a-s) % 2**n
elif b == (a+s) % 2**n and b < a:
a, b = b, a
s = b-a
return MSI(n, a, b, s)
###Output
_____no_output_____
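###Markdown
A small usage sketch of `normalize` (illustrative variables; the expected normal forms in the comments are hand-derived from the definitions above, not recorded output):
###Code
# 2[0, 5]_{4} concretizes to {0, 2, 4}; the end is pulled back onto the stride grid: expected 2[0, 4]_{4}
norm_ex1 = normalize(MSI(4, 0, 5, 2))
# stride 0 denotes a singleton, so the end is forced to equal the begin: expected 0[3, 3]_{4}
norm_ex2 = normalize(MSI(4, 3, 8, 0))
###Output
_____no_output_____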
###Markdown
Test sets and utility functions for testing. Warning: `normal` is used in `unary_function_test` when the `unnormalized` parameter is `True`, but it is only tested further below. Therefore this parameter should not be set to `True` before `normal` has been tested.
###Code
def test_set(bit_widths, begins, ends, strides, only_normal=True, print_stats=False):
MSIs = {}
for n in bit_widths:
js = set()
bs = begins(n)
for b in bs:
es = ends(n)
for e in es:
ss = strides(n)
for s in ss:
if only_normal:
js.add(normalize(MSI(n, b, e, s)))
else:
js.add(MSI(n, b, e, s))
MSIs[n] = list(js)
if print_stats:
print('size: ' + ', '.join(f'{n}: {len(js)}' for n, js in MSIs.items()))
if not only_normal:
print('unnormalized: ' + ', '.join(f'{n}: {len(list(0 for j in js if not normal(j)))}' for n, js in MSIs.items()))
return MSIs
f = lambda n: list(range(2**n))
g = lambda n: list(range(2**n))
print('test_MSIs_4_exhaustive')
test_MSIs_4_exhaustive = test_set(range(1, 4+1), f, g, f, print_stats=True)
print('test_MSIs_4_exhaustive_unnormalized')
test_MSIs_4_exhaustive_unnormalized = test_set(range(1, 4+1), f, g, f, only_normal=False, print_stats=True)
f = lambda n: list(range(2**n))
g = lambda n: list(range(2**n))
print('test_MSIs_5_6_exhaustive')
test_MSIs_5_6_exhaustive = test_set(range(5, 6+1), f, g, f, print_stats=True)
print('test_MSIs_5_6_exhaustive_unnormalized')
test_MSIs_5_6_exhaustive_unnormalized = test_set(range(5, 6+1), f, g, f, only_normal=False, print_stats=True)
test_MSIs_6_exhaustive = {
**test_MSIs_4_exhaustive, **test_MSIs_5_6_exhaustive
}
test_MSIs_6_exhaustive_unnormalized = {
**test_MSIs_4_exhaustive_unnormalized, **test_MSIs_5_6_exhaustive_unnormalized
}
ks = [a+b for a in [0, 30] for b in [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 15]]
ls = [30, 31, 32, 33, 35, 36, 40, 45]
f = lambda _: ks
print('test_MSIs_6_partial')
test_MSIs_6_partial = test_set([6], f, g, f, print_stats=True)
print('\ntest_MSIs_6_partial_unnormalized')
test_MSIs_6_partial_unnormalized = test_set([6], f, f, f, only_normal=False, print_stats=True)
ks = [a+b for a in [0, 30, 60] for b in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 25]]
ls = [0, 2, 3, 5, 6, 10, 15]
f = lambda n: takewhile(lambda k: k < 2**n, ks)
g = lambda n: (((15 if 2**n < 30 else 45) + 15 + a) % 2**n for a in ls)
print('test_MSIs_8_partial')
test_MSIs_8_partial = test_set([8], f, g, f, print_stats=True)
print('\ntest_MSIs_8_partial_unnormalized')
test_MSIs_8_partial_unnormalized = test_set([8], f, g, f, only_normal=False, print_stats=True)
f = lambda n: set(randint(0, 2**n-1) for _ in range(8))
g = lambda n: set(randint(0, 2**(n-1)-1) for _ in range(8))
print('test_MSIs_random')
test_MSIs_random = test_set(range(5, 8+1), f, f, g, print_stats=True)
print('\ntest_MSIs_random_unnormalized')
test_MSIs_random_unnormalized = test_set(range(5, 8+1), f, f, g, only_normal=False, print_stats=True)
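# Descriptive summary of the test harnesses defined below (comments only, behaviour unchanged):
# - unary_function_test checks a predicate p(n, i, f(i)) for every test interval i.
# - bin_fun_test checks a predicate p(n, i, j, f(i, j)) for every pair of test intervals.
# - bin_op_test checks soundness of an abstract binary operation: every concrete result op(n, k, l)
#   with k in gamma(i) and l in gamma(j) must lie in gamma(op_MSI(i, j)); it also records the
#   argument pairs that produced the least precise (most over-approximating) results.
# - bin_rel_test checks that an abstract relation rel_MSI agrees exactly with the concrete
#   reference relation rel.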
def _unary_function_test(f, p, test_MSIs, test_count=0, fail_count=0, fail_lim=8):
for n, js in test_MSIs.items():
print(f' testing bit width: {n}')
for i in js:
test_count += 1
x = f(i)
if not p(n, i, x):
fail_count += 1
print(f' {i}: {x}')
if fail_count == fail_lim:
return test_count, fail_count
if test_count % 25000 == 0:
print(f'- tested {test_count} arguments')
return test_count, fail_count
def unary_function_test(f, p, big=False, unnormalized=False):
fail_lim = 16 if big else 8
test_count = fail_count = 0
print('testing MSIs with bit width up to 4 exhaustively')
MSIs = test_MSIs_4_exhaustive_unnormalized if unnormalized else test_MSIs_4_exhaustive
test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing random MSIs with bit width from 5 to 8')
MSIs = test_MSIs_random_unnormalized if unnormalized else test_MSIs_random
test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if big:
print('testing some MSIs with bit width 6')
MSIs = test_MSIs_6_partial_unnormalized if unnormalized else test_MSIs_6_partial
test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing some MSIs with bit width 8')
MSIs = test_MSIs_8_partial_unnormalized if unnormalized else test_MSIs_8_partial
test_count, fail_count = _unary_function_test(f, p, MSIs, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if fail_count == 0:
print(f'succeeded (tested {test_count} arguments in total)')
def _bin_fun_test(f, p, test_MSIs, test_count=0, fail_count=0, fail_lim=8):
for n, js in test_MSIs.items():
print(f' testing bit width: {n}')
for i in js:
for j in js:
test_count += 1
x = f(i, j)
if not p(n, i, j, x):
fail_count += 1
print(f' f {i} {j}: {x}')
if fail_count == fail_lim:
return test_count, fail_count
if test_count % 25000 == 0:
print(f'- tested {test_count} arguments')
return test_count, fail_count
def bin_fun_test(f, p, big=False, non_zero=False):
fail_lim = 16 if big else 8
test_count = fail_count = 0
print('testing MSIs with bit width up to 4 exhaustively')
test_count, fail_count = _bin_fun_test(f, p, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing random MSIs with bit width from 5 to 8')
test_count, fail_count = _bin_fun_test(f, p, test_MSIs_random, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if big:
print('testing some MSIs with bit width 6')
test_count, fail_count = _bin_fun_test(f, p, test_MSIs_6_partial, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing some MSIs with bit width 8')
test_count, fail_count = _bin_fun_test(f, p, test_MSIs_8_partial, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if fail_count == 0:
print(f'succeeded (tested {test_count} arguments in total)')
def _bin_op_test(op_MSI, op, test_MSIs, test_count=0, fail_count=0, fail_lim=8, bad_args={}, bad_lim=8, non_zero=False):
bad_precision = max(bad_args.values()) if len(bad_args) > 0 else 1
for n, js in test_MSIs.items():
print(f' testing bit width: {n}')
for i in js:
vals_i = gamma(i)
for j in js:
test_count += 1
vals_j = gamma(j)
if non_zero and 0 in vals_j:
vals_op = {op(n, k, l) for k in vals_i for l in vals_j if not l == 0}
else:
vals_op = {op(n, k, l) for k in vals_i for l in vals_j}
vals_op_MSI = gamma(op_MSI(i, j))
if not vals_op <= vals_op_MSI:
fail_count += 1
print(f' {i} op {j}: {op_MSI(i, j)}, {vals_op}, {vals_i}, {vals_j}')
if fail_count == fail_lim:
return test_count, fail_count, bad_args
elif not len(vals_op) == 0:
precision = len(vals_op) / (len(vals_op_MSI) * 2**n)
if precision < bad_precision:
if len(bad_args) == bad_lim:
bad_args.pop(list(bad_args.keys())[list(bad_args.values()).index(bad_precision)])
bad_args[(i, j)] = precision
bad_precision = max(bad_args.values())
if test_count % 25000 == 0:
print(f'- tested {test_count} arguments')
return test_count, fail_count, bad_args
def bin_op_test(op_MSI, op, big=False, non_zero=False):
fail_lim = bad_lim = 16 if big else 8
test_count = fail_count = 0
bad_args = {}
print('testing MSIs with bit width up to 4 exhaustively')
test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
if fail_count == fail_lim:
return
print('testing random MSIs with bit width from 5 to 8')
test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_random, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
if fail_count == fail_lim:
return
if big:
print('testing some MSIs with bit width 6')
test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_6_partial, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
if fail_count == fail_lim:
return
print('testing some MSIs with bit width 8')
test_count, fail_count, bad_args = _bin_op_test(op_MSI, op, test_MSIs_8_partial, test_count, fail_count, fail_lim, bad_args, bad_lim, non_zero=non_zero)
if fail_count == fail_lim:
return
if fail_count == 0:
print(f'succeeded (tested {test_count} arguments in total)')
print('arguments with least precise results:')
for (i, j), r in bad_args.items():
print(f'{i}, {j}: {r}')
def _bin_rel_test(rel_MSI, rel, test_MSIs, test_count=0, fail_count=0, fail_lim=8):
for n, js in test_MSIs.items():
print(f' testing bit width: {n}')
for i in js:
for j in js:
test_count += 1
if not (rel_MSI(i, j) == rel(i, j)):
fail_count += 1
print(f' {i} rel {j}: {rel_MSI(i, j)}')
if fail_count == fail_lim:
return test_count, fail_count
if test_count % 25000 == 0:
print(f'- tested {test_count} arguments')
return test_count, fail_count
def bin_rel_test(rel_MSI, rel, big=False):
fail_lim = 16 if big else 8
test_count = fail_count = 0
print('testing MSIs with bit width up to 4 exhaustively')
test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_4_exhaustive, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing random MSIs with bit width from 5 to 8')
test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_random, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if big:
print('testing some MSIs with bit width 6')
test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_6_partial, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
print('testing some MSIs with bit width 8')
test_count, fail_count = _bin_rel_test(rel_MSI, rel, test_MSIs_8_partial, test_count, fail_count, fail_lim)
if fail_count == fail_lim:
return
if fail_count == 0:
print(f'succeeded (tested {test_count} arguments in total)')
###Output
_____no_output_____
###Markdown
Test for normal
###Code
def test_normal():
failed = False
test_count = fail_count = 0
for n, js in test_MSIs_6_exhaustive.items():
equiv_classes = {}
for i in js:
a = frozenset(gamma(i))
if a in equiv_classes:
equiv_classes[a].add(i)
else:
equiv_classes[a] = {i}
for equiv_class in equiv_classes.values():
norm_forms = list(filter(normal, equiv_class))
test_count += 1
if len(norm_forms) != 1:
failed = True
fail_count += 1
if len(norm_forms) == 0:
print(f'no normal form for {equiv_class}')
else:
print(f'multiple normal forms {norm_forms}')
if fail_count > 8:
return
print(f'succeeded (tested {test_count} equivalence classes in total)')
test_normal()
###Output
succeeded (tested 18958 equivalence classes in total)
###Markdown
Helper functions
###Code
def bounds(i):
n, a, b, _ = i._tuple_repr()
if a <= b:
return a, b, False
else:
return a, b + 2**n, True
def contains(i, k):
n, a, b, s = i._tuple_repr()
if s == 0:
return a == k
elif a <= b:
return a <= k and k <= b and (k - a) % s == 0
else:
if k >= a:
return (k - a) % s == 0
elif k <= b:
return (k - b) % s == 0
else:
return False
def test_contains():
failed = False
test_count = fail_count = 0
for n, js in test_MSIs_6_exhaustive.items():
for i in js:
test_count += 1
a = gamma(i)
for k in range(2**n):
if k in a and not contains(i, k):
failed = True
fail_count += 1
print(f'{k} in gamma({i})')
if k not in a and contains(i, k):
failed = True
fail_count += 1
print(f'{k} not in gamma({i})')
if fail_count > 8:
return
print(f'succeeded (tested {test_count} arguments)')
test_contains()
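# leq_MSI is intended to decide gamma(i) <= gamma(j) symbolically, without enumerating the
# concretizations; the bin_rel_test call below compares it against exactly that reference,
# so it has to be exact rather than an over-approximation.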
def leq_MSI(i, j, debug=False):
n, a, b, s = i._tuple_repr()
m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
if s == 0: # i contains exactly 1 value
return contains(j, a)
elif t == 0: # j contains exactly 1 value
return False
elif b == (a+s) % 2**n: # i contains exactly 2 values
return contains(j, a) and contains(j, b)
elif s % t == 0:
if 2**n % t == 0 and (c-d) % 2**n == t: # j represents a residue class of Z/t (=> t | 2**n)
return (a-c) % t == 0
else:
b_ = (b-a) % 2**n
c_, d_ = (c-a) % 2**n, (d-a) % 2**n
if d_ < c_ and c_ <= b_: # this branch may not return, but continue below [a...d_...c_...b_...]
e_ = s * (d_ // s)
f_ = (b_ - s * ((b_-c_) // s)) % s**n
if (f_-e_) == s:
if e_ < s:
if contains(j, a) and c_ % t == 0:
return True
elif contains(j, b) and d_ % t == 0:
return True
if c_ <= d_:
return c_ == 0 and b_ <= d_
else:
return b_ <= d_ and (d_-b_) % t == 0
else:
return False
bin_rel_test(leq_MSI, lambda i, j: gamma(i) <= gamma(j))
lhs = MSI(3, 2, 0, 3)
rhs = MSI(3, 5, 3, 3)
res = leq_MSI(lhs, rhs)
print(f'{lhs} leq {rhs} = {res}')
print(f'{gamma(lhs)} leq {gamma(rhs)} = {gamma(lhs) <= gamma(rhs)}')
leq_MSI(lhs, rhs, debug=True)
def size(i):
n, a, b, s = i._tuple_repr()
if s == 0:
return 1
else:
return ((b-a) % 2**n) // s + 1
unary_function_test(size, lambda n, i, s: s == len(gamma(i)), big=True)
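# lub: least upper bound (join) of two MSIs of the same bit width. The result is an MSI whose
# concretization covers gamma(i) | gamma(j); bin_fun_test below checks this over-approximation
# and that lub is commutative.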
def lub(i, j, debug=False):
n, a, b, s = i._tuple_repr()
m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
if b == (a+s) % 2**n and s >= 2**(n-1):
if debug:
print('correction 1')
a, b = b, a
s = 2**n - s
if d == (c+t) % 2**n and t >= 2**(n-1):
if debug:
print('correction 2')
c, d = d, c
t = 2**n - t
b_ = (b-a) % 2**n
c_, d_ = (c-a) % 2**n, (d-a) % 2**n
if debug:
print(f'a: {a}, b: {b}, c: {c}, d: {d}')
print(f'b_: {b_}, c_: {c_}, d_: {d_}')
if (b_ < c_ and c_ < d_): # no overlapping regions
if debug:
print(f'case 0: b_: {b_}, c_: {c_}, d_: {d_}')
u1 = int(gcd(gcd(s, t), (c-b) % 2**n))
e1, f1 = a, d
u2 = int(gcd(gcd(s, t), (a-d) % 2**n))
e2, f2 = c, b
opt1 = normalize(MSI(n, e1, f1, u1))
opt2 = normalize(MSI(n, e2, f2, u2))
if debug:
print(f'opt2: {opt2}, opt1: {opt1}')
if (size(opt1) < size(opt2)):
return opt1
else:
return opt2
elif d_ < c_ and c_ <= b_: # two overlapping regions
if debug:
print(f'case 1: b_: {b_}, c_: {c_}, d_: {d_}')
u = int(gcd(gcd(s, t), gcd(c_ if c_ <= d_ else d_, 2**(n-1))))
e = a % u
f = (e - u) % 2**n
return normalize(MSI(n, e, f, u))
else: # one overlapping region
if debug:
print(f'case 2: b_: {b_}, c_: {c_}, d_: {d_}')
e = a if c_ <= d_ else c
f = b if d_ < b_ else d
u = int(gcd(gcd(s, t), (c_ if c_ <= d_ else d_)))
return normalize(MSI(n, e, f, u))
lhs, rhs = MSI(2, 0, 1, 1), MSI(2, 3, 3, 0)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = lub(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
print()
res = lub(rhs, lhs, debug=True)
print(f'{res}: {gamma(res)}')
bin_fun_test(lub, lambda n, i, j, x: gamma(i) | gamma(j) <= gamma(x) and lub(i, j) == lub(j, i))
lhs, rhs = MSI(3, 2, 0, 3), MSI(3, 4, 2, 3)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = lub(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
lub(MSI(6, 1, 37, 6), MSI(6, 31, 7, 6))
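# Signed/unsigned views of an MSI: the helpers below read an interval's bounds and stride under
# the unsigned or the two's-complement signed interpretation of its n-bit values.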
def as_signed_int(n, k):
return k if k < 2**(n-1) else k - 2**n
def umax_MSI(i):
n, a, b, s = i._tuple_repr()
if a <= b:
return b
else:
return 2**n - 1 - ((2**n - 1 - a) % s)
def umin_MSI(i):
n, a, b, s = i._tuple_repr()
if a <= b:
return a
else:
return b % s
def smax_MSI(i):
n, a, b, s = i._tuple_repr()
a, b = as_signed_int(n, a), as_signed_int(n, b)
if a <= b:
return b % 2**n
else:
return 2**(n-1) - 1 - ((2**(n-1) - 1 - a) % s)
def smin_MSI(i):
n, a, b, s = i._tuple_repr()
a, b = as_signed_int(n, a), as_signed_int(n, b)
if a <= b:
return a % 2**n
else:
b = b % 2**n
return (((b + 2**(n-1)) % 2**n % s) - 2**(n-1)) % 2**n
smin_MSI(MSI(2, 0, 3, 3))
def ustride(i):
n, a, b, s = i._tuple_repr()
if a <= b:
return s
else:
return int(gcd(s, a-b))
def sstride(i):
n, a, b, s = i._tuple_repr()
if as_signed_int(n, a) <= as_signed_int(n, b):
return s
else:
return int(gcd(s, as_signed_int(n, a)-as_signed_int(n, b)))
def pos_min(i):
    n = i.bit_width
    m = umin_MSI(i)
    if m < 2**(n-1):
        return m
    else:
        return None
def neg_max(i):
    n = i.bit_width
    m = umax_MSI(i)
    if m < 2**(n-1):
        return None
    else:
        return m
def sabsmin(i):
n, a, b, s = i._tuple_repr()
    a_, b_ = as_signed_int(n, a), as_signed_int(n, b)
if s == 0:
return a
elif b_ < 0:
return b
elif 0 < a_:
return a
else:
x = a % s
y = x - s
return x if x <= y else y
def absmax(i):
    # signed value of i with the largest absolute value
    n, a, b, _ = i._tuple_repr()
    a, b = as_signed_int(n, a), as_signed_int(n, b)
    return b if abs(a) <= abs(b) else a
unary_function_test(umax_MSI, lambda n, i, k: max(gamma(i)) == k, big=True)
unary_function_test(umin_MSI, lambda n, i, k: min(gamma(i)) == k, big=True)
unary_function_test(smax_MSI, lambda n, i, k: max(map(lambda k: as_signed_int(n, k), gamma(i))) % 2**n == k, big=True)
unary_function_test(smin_MSI, lambda n, i, k: min(map(lambda k: as_signed_int(n, k), gamma(i))) % 2**n == k, big=True)
def as_unsigned(i):
n, a, b, s = i._tuple_repr()
if a <= b:
return MSI(n, a, b, s)
else:
        t = int(gcd(s, (a-b) % 2**n))
c = a % t
d = (c-t) % 2**n
return MSI(n, c, d, t)
###Output
_____no_output_____
###Markdown
Implementation of Operations
###Code
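# Abstract transfer functions for arithmetic on MSIs. Each one takes two MSIs of the same bit
# width and returns an MSI over-approximating the concrete operation (mod 2**n) applied to all
# pairs of values; bin_op_test compares each against the corresponding concrete operation.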
def add(i, j):
n, a, b, s = i._tuple_repr()
m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
u = int(gcd(s, t))
b_ = b if a <= b else b + 2**n
d_ = d if c <= d else d + 2**n
e, f = a+c, b_+d_
if f-e < 2**n:
u_ = u
e_, f_ = e % 2**n, f % 2**n
else:
u_ = int(gcd(u, 2**n))
e_ = e % 2**n
f_ = (e_-u_) % 2**n
return normalize(MSI(n, e_, f_, u_))
bin_op_test(add, lambda n, a, b: (a+b) % 2**n)
def sub(i, j):
n, a, b, s = i._tuple_repr()
m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
u = int(gcd(s, t))
b_ = b if a <= b else b + 2**n
d_ = d if c <= d else d + 2**n
e, f = a-d_, b_-c
if f-e < 2**n:
u_ = u
e_, f_ = e % 2**n, f % 2**n
else:
u_ = int(gcd(u, 2**n))
e_ = e % 2**n
f_ = (e_-u_) % 2**n
return normalize(MSI(n, e_, f_, u_))
bin_op_test(sub, lambda n, a, b: (a-b) % 2**n)
def mul(i, j, debug=False):
n, a, b, s = i._tuple_repr()
m, c, d, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
m = 2**n
u = int(gcd(a, s)) * int(gcd(c, t))
b_ = b if a <= b else b + m
d_ = d if c <= d else d + m
e, f = a*c, b_*d_
if f-e < m:
u_ = u
e_, f_ = e % m, f % m
else:
u_ = int(gcd(u, m))
e_ = e % m
f_ = (e_-u_) % m
if debug:
print(f'u: {u}, e: {e}, f: {f}, u_: {u_}, e_: {e_}, f_: {f_}')
return normalize(MSI(n, e_, f_, u_))
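# urem: abstract unsigned remainder i % j, computed from the unsigned bounds and strides of the
# operands; a definitely-zero divisor yields the full range, otherwise zero is skipped as a divisor.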
def urem(i, j, debug=False):
n, _, _, s = i._tuple_repr()
m, _, _, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
a, b = umin_MSI(i), umax_MSI(i)
c, d = umin_MSI(j), umax_MSI(j)
s, t = ustride(i), ustride(j)
if c == 0:
if t == 0:
if debug:
print('case 1')
return MSI(n, 0, (-1) % 2**n, 1)
else:
c = t
if b < c:
if debug:
print('case 2')
return i
elif t == 0:
if a//c == b//c:
if debug:
print('case 3.1')
return normalize(MSI(n, a % c, b % c, s))
else:
if debug:
print('case 3.2')
u = int(gcd(s, c))
return normalize(MSI(n, a % u, c-1, u))
else:
if debug:
print('case 4')
u = int(gcd(gcd(c, t), s))
return normalize(MSI(n, a % u, min(b, d-1), u))
bin_op_test(urem, lambda n, a, b: (a % b) % 2**n, big=False, non_zero=True)
def udiv(i, j, debug=False):
n, _, _, _ = i._tuple_repr()
m, _, _, t = j._tuple_repr()
    assert n == m, 'bit widths must be equal'
a, b = umin_MSI(i), umax_MSI(i)
c, d = umin_MSI(j), umax_MSI(j)
s = ustride(i)
m = 2**n
if c == 0:
if t == 0:
return MSI(n, 0, (-1) % 2**n, 1)
else:
c = ustride(j)
s_ = int(gcd(a, s))
if t == 0:
u = s_ // c
u = u if u*c == s_ else 1
return normalize(MSI(n, a//c, b//c, u))
else:
e, f = a//d, b//c
return normalize(MSI(n, e, f, 1))
lhs, rhs = MSI(3, 1, 5, 1), MSI(3, 2, 0, 3)
print(f'{lhs}, {rhs}: {gamma(lhs)}, {gamma(rhs)}')
res = udiv(lhs, rhs, debug=True)
print(f'{res}: {gamma(res)}')
bin_op_test(udiv, lambda n, a, b: (a // b), big=False, non_zero=True)
def rem(k, n):
assert not n == 0, 'remainder by 0'
if k > 0:
return k % abs(n)
else:
return -(abs(k) % abs(n))
def div(k, n):
assert not n == 0, 'division by 0'
if k > 0:
return k // n
else:
return -(abs(k) // n)
def smin(n, k, l):
k_, l_ = as_signed_int(n, k), as_signed_int(n, l)
if k_ <= l_:
return k_
else:
return l_
def smax(n, k, l):
k_, l_ = as_signed_int(n, k), as_signed_int(n, l)
if k_ >= l_:
return k_
else:
return l_
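# srem: abstract signed remainder. The operands are read as signed values (smin_MSI/smax_MSI,
# sstride) and the result is mapped back to the unsigned representation mod 2**n; bin_op_test
# checks it against the concrete signed remainder rem().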
def srem(i, j, debug=False):
n, m = i.bit_width, j.bit_width
    assert n == m, 'bit widths must be equal'
a, b = as_signed_int(n, smin_MSI(i)), as_signed_int(n, smax_MSI(i))
c, d = as_signed_int(n, smin_MSI(j)), as_signed_int(n, smax_MSI(j))
s, t = sstride(i), sstride(j)
if debug:
print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
if d < 0:
if debug:
print('all negative')
c, d = -d, -c
elif c < 0:
if debug:
print('some negative')
t_ = (d+c) % t
c, d = min(-c % t, d % t), max(-c, d)
t = gcd(t, t_)
if debug:
print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
if c == 0: # remainder by bound not possible
if t == 0: # definite remainder by 0
if debug:
print('case 1')
return MSI(n, 0, (-1) % 2**n, 1)
        else: # correct bound to avoid remainder by 0
if debug:
print('avoid 0')
c = c+t
# renormalize
if c == d:
if debug:
print('renormalize')
t = 0
if debug:
print(f'a: {a}, b: {b}, c: {c}, d: {d}, s: {s}, t: {t}')
absMaxI = max(abs(a), abs(b))
if absMaxI < c: # remainder has no effect
if debug:
print('case 2')
return i
elif t == 0: # remainder by constant
if div(a, c) == div(b, c): # E x. x*c <= a <= b < (x+1)*c
if debug:
print('case 3')
return normalize(MSI(n, rem(a, c) % 2**n, rem(b, c) % 2**n, s))
if debug:
print(f'case 5')
u = int(gcd(gcd(c, t), s))
e = a % u if 0 < a else max(a, 1-d + (a+d-1) % u)
f = min(b, d-1) if 0 < b else (e-1) % u + 1 - u
if debug:
print(f'u: {u}, e: {e}, f: {f}, {a % u if 0 < a else 1-d + (a-d+1) % u}, {d-1 if 0 < b else (e-1) % u + 1 - u}')
return normalize(MSI(n, e % 2**n, f % 2**n, u))
srem(MSI(3, 5, 6, 1), MSI(3, 2, 2, 0), debug=True)
def srem_cases(i, j):
p = []
n, m = i.bit_width, j.bit_width
    assert n == m, 'bit widths must be equal'
a, b = as_signed_int(n, smin_MSI(i)), as_signed_int(n, smax_MSI(i))
c, d = as_signed_int(n, smin_MSI(j)), as_signed_int(n, smax_MSI(j))
s, t = sstride(i), sstride(j)
if append(p, d < 0) and d < 0:
c, d = -d, -c
elif append(p, c < 0) and c < 0:
t_ = (d+c) % t
append(p, -c % t <= d % t)
append(p, -c <= d)
c, d = min(-c % t, d % t), max(-c, d)
t = gcd(t, t_)
if append(p, c == 0) and c == 0: # remainder by bound not possible
if append(p, t == 0) and t == 0: # definite remainder by 0
return MSI(n, 0, (-1) % 2**n, 1), p
        else: # correct bound to avoid remainder by 0
c = c+t
# renormalize
if append(p, c == d) and c == d:
t = 0
append(p, a >= 0)
append(p, a >= b)
append(p, abs(a) >= abs(b))
absMaxI = max(abs(a), abs(b))
if append(p, absMaxI < c) and absMaxI < c: # remainder has no effect
return i, p
elif append(p, t == 0) and t == 0: # remainder by constant
if append(p, div(a, c) == div(b, c)) and div(a, c) == div(b, c): # E x. x*c <= a <= b < (x+1)*c
return normalize(MSI(n, rem(a, c) % 2**n, rem(b, c) % 2**n, s)), p
u = int(gcd(gcd(c, t), s))
if append(p, 0 < a) and 0 < a:
e = a % u
else:
append(p, a >= 1-d + (a+d-1) % u)
e = max(a, 1-d + (a+d-1) % u)
if append(p, 0 < b) and 0 < b:
append(p, b >= d-1)
f = min(b, d-1)
else:
f = (e-1) % u + 1 - u
return normalize(MSI(n, e % 2**n, f % 2**n, u)), p
srem_cases(MSI(4, 2, 5, 3), MSI(4, 15, 3, 2))
def append(xs, x):
xs.append(x)
return True
def add_case(cases, path, ex):
if cases is None:
if len(path) == 0:
return (True, True, ex)
else:
return (False, False, {path[0]: add_case(None, path[1:], ex), not path[0]: None})
else:
fin0, a0, cs0 = cases
if fin0:
return cases
else:
b = path[0]
fin1, a1, cs1 = add_case(cs0[b], path[1:], ex)
fin = cs0[not b] is not None and cs0[not b][0] and fin1
return (fin, a0, {b: (fin1, a1, cs1), not b: cs0[not b]})
def gen_test_cases(f):
cases = None
for n in range(1, 4+1):
for i in test_MSIs_6_exhaustive[n]:
for j in test_MSIs_6_exhaustive[n]:
_, path = f(i, j)
cases = add_case(cases, path, (i, j))
if cases[0]:
return cases
return cases
def get_cases(cases):
if cases is None:
return []
elif cases[1]:
return [cases[2]]
else:
return get_cases(cases[2][True]) + get_cases(cases[2][False])
def find_unreachable_pathes(cases):
    if cases is None:
        return [[]]
elif cases[0]:
return []
r = []
for b in [True, False]:
if cases[2][b] is None:
r += [[b]]
else:
r += [[b]+p for p in find_unreachable_pathes(cases[2][b])]
return r
cases = gen_test_cases(srem_cases)
cs = get_cases(cases)
len(cs)
cs[0]
srem(MSI(2, 2, 2, 0), MSI(2, 2, 3, 1))
for lhs, rhs in cs:
n, a, b, s = lhs._tuple_repr()
_, c, d, t = rhs._tuple_repr()
ref = srem(lhs, rhs)
_, e, f, u = ref._tuple_repr()
print(f'lhs = {{{n}, {a}, {b}, {s}}}; rhs = {{{n}, {c}, {d}, {t}}}; ref = {{{n}, {e}, {f}, {u}}};')
print(f'res_p = lhs.srem({n}, rhs);');
print(f'res = *(static_cast<StridedInterval *>(res_p.get()));')
print(f'if (res != ref) {{')
print(f' errs() << "[testSrem] failed with operands " << lhs << ", " << rhs << ": got " << res << ", expected " << ref << "\\n";')
print(f'}}')
n = 4
lhs, rhs = MSI(n, 10, 13, 3), MSI(n, 10, 13, 3)
print(f'{lhs}, {rhs}: {set(map(lambda k: as_signed_int(n, k), gamma(lhs)))}, {set(map(lambda k: as_signed_int(n, k), gamma(rhs)))}')
res = srem(lhs, rhs, debug=True)
print(f'{res}: {set(map(lambda k: as_signed_int(n, k), gamma(res)))}')
bin_op_test(srem, lambda n, a, b: rem(as_signed_int(n, a), as_signed_int(n, b)) % 2**n, big=False, non_zero=True)
###Output
testing MSIs with bit width up to 4 exhaustively
testing bit width: 1
testing bit width: 2
testing bit width: 3
testing bit width: 4
- tested 25000 arguments
- tested 50000 arguments
|
deter_pt.ipynb | ###Markdown
Month Report
###Code
from nlg.generation import Generation
from nlg.realization import Realization
from inpe_deter.content import content
from datetime import datetime
from db.model import DeforestationDeterINPE, CoronaVirus
structuring_path = 'inpe_deter/data/pt-br/month_grammar/structuring.json'
lexicalization_path = 'inpe_deter/data/pt-br/month_grammar/lexicalization.json'
reference_path = 'inpe_deter/data/pt-br/month_grammar/references.json'
lexicon_path = 'nlg/lexicons/pt-br/'
gen = Generation(structuring_path, lexicalization_path, reference_path, lexicon_path)
messages = [
{
'intent': 'CAUSE',
'attributes': {
'area': 835, 'cause': 'DESMATAMENTO_CR', 'year': 2019, 'month': 6, 'location': 'deter-amz'
},
'str_msg': 'CAUSE(area="835",cause="DESMATAMENTO_CR",location="deter-amz",month="6",year="2019")',
'delex_msg': 'CAUSE(area=AREA,cause=CAUSE,location=LOCATION,month=MONTH,year=YEAR)'
},
{
'intent': 'TOTAL_DEFORESTATION',
'attributes': {'area': 12, 'year': 2019, 'month': 6, 'location': 'deter-amz', 'state': 'PA', 'city': 'Novo Progresso,Itaituba', 'UC': 'FLORESTA NACIONAL DO JAMANXIM'},
'str_msg': 'TOTAL_DEFORESTATION(area="12",city="Novo Progresso,Itaituba",location="deter-amz",month="6",state="PA",UC="FLORESTA NACIONAL DO JAMANXIM",year="2019")',
'delex_msg': 'TOTAL_DEFORESTATION(area=AREA,city=CITY,location=LOCATION,month=MONTH,state=STATE,UC=UC,year=YEAR)'
},
{
'intent': 'TOTAL_DEFORESTATION',
'attributes': {'area': 454, 'year': 2019, 'month': 6, 'location': 'deter-amz', 'state': 'PA'},
'str_msg': 'TOTAL_DEFORESTATION(area="454",location="deter-amz",month="6",state="PA",year="2019")',
'delex_msg': 'TOTAL_DEFORESTATION(area=AREA,location=LOCATION,month=MONTH,state=STATE,year=YEAR)'
},
{
'intent': 'TOTAL_DEFORESTATION',
'attributes': {'area': 914, 'year': 2019, 'month': 6, 'location': 'deter-amz'},
'str_msg': 'TOTAL_DEFORESTATION(area="914",location="deter-amz",month="6",year="2019")',
'delex_msg': 'TOTAL_DEFORESTATION(area=AREA,location=LOCATION,month=MONTH,year=YEAR)'
},
{
'intent': 'TOTAL_DEFORESTATION',
'attributes': {'area': 95, 'year': 2019, 'month': 6, 'location': 'deter-amz', 'state': 'PA', 'city': 'Altamira'},
'str_msg': 'TOTAL_DEFORESTATION(area="95",city="Altamira",location="deter-amz",month="6",state="PA",year="2019")',
'delex_msg': 'TOTAL_DEFORESTATION(area=AREA,city=CITY,location=LOCATION,month=MONTH,state=STATE,year=YEAR)'
}
]
entry, template, paragraphs = gen.generate(messages, strategy='random')
text = []
for p in paragraphs:
text.append(' '.join(p))
text = '\n\n'.join(text)
print(text)
for date in DeforestationDeterINPE.objects().distinct('date')[:5]:
month, year = date.month, date.year
messages, date = content(month, year)
try:
entry, template, paragraphs = gen.generate(messages, strategy='random')
text = []
for p in paragraphs:
text.append(' '.join(p))
text = '\n\n'.join(text)
print("Portuguese:")
print(text)
except:
print('ERROR')
print(10 * '*', '\n')
###Output
Portuguese:
Um total de 914 km² de desmatamento na Amazônia Legal foi detectado no mês de junho de 2019 pelo INPE. Com 835 km², a principal causa de devastação foi o desmatamento com solo exposto, que deixa a terra sem nenhuma cobertura florestal.
O estado que mais teve desmatamento foi Pará (454 km²). O município que teve mais desmatamento no mês foi Altamira (Pará), com 95 km² de área desmatada.
Com 12 km², a FLORESTA NACIONAL DO JAMANXIM / Pará foi a Unidade de Conservação mais devastada no mês.
**********
Portuguese:
Um total de 914 km² de desmatamento na Amazônia legal foi detectado no mês de junho de 2019 pelo INPE. Com 835 km², a principal causa de devastação foi o desmatamento com solo exposto, aquele que deixa a terra sem nenhuma vegetação.
Pará foi o estado mais atingido pelo desmatamento, com 454 km². Altamira foi o município mais afetado (95 km²).
A Unidade de Conservação onde teve mais desmatamento foi a FLORESTA NACIONAL DO JAMANXIM / Pará (12 km²).
**********
Portuguese:
De acordo com os dados de monitoramento do INPE, foram desmatados 914 km² da Amazônia Legal no mês de junho de 2019. Com 835 km², a principal causa de devastação foi o desmatamento com solo exposto, que deixa a terra sem nenhuma cobertura florestal.
O estado de Pará foi o mais desmatado no mês de junho, com 454 km² de floresta destruídos, enquanto Altamira / Pará foi o município com maior desmatamento (95 km²) no mesmo período.
A FLORESTA NACIONAL DO JAMANXIM / Pará foi a Unidade de Conservação mais devastada no mês de junho, com um total de 12 km² desmatados.
**********
Portuguese:
Segundo o INPE, 914 km² foram desmatados na Amazônia legal, em junho de 2019. A principal causa foi o desmatamento de corte raso, que deixa o solo sem nenhuma vegetação, somando 835 km².
Pará foi o estado mais atingido pelo desmatamento, com 454 km². O município de Altamira / Pará foi o mais afetado, com 95 km² desmatados.
A Área Protegida onde mais aconteceu desmatamento foi a FLORESTA NACIONAL DO JAMANXIM / Pará (12 km²).
**********
Portuguese:
O INPE informou que foram desmatados 914 km², na Amazônia Legal, em junho de 2019. Com um total de 835 km², a principal causa de destruição da Amazônia Legal no mês foi o desmatamento com solo exposto, que deixa a terra sem vegetação.
O estado mais atingido foi Pará (454 km²) e o município com mais desmatamento no mês foi Altamira / Pará (95 km²).
A Unidade de Conservação onde teve mais desmatamento foi a FLORESTA NACIONAL DO JAMANXIM / Pará (12 km²).
**********
###Markdown
Daily Report
###Code
from datetime import datetime, timedelta
import inpe_deter.daily_content as daily_content
structuring_path = 'inpe_deter/data/pt-br/daily_grammar/structuring.json'
lexicalization_path = 'inpe_deter/data/pt-br/daily_grammar/lexicalization.json'
reference_path = 'inpe_deter/data/pt-br/daily_grammar/references.json'
lexicon_path = 'nlg/lexicons/pt-br/'
gen = Generation(structuring_path, lexicalization_path, reference_path, lexicon_path)
messages = [
{
'intent': 'CAUSE',
'attributes': {'year': 2020, 'month': 10, 'location': 'deter-amz', 'state': 'AM', 'city': 'Labrea', 'cause': 'DESMATAMENTO_CR'},
'str_msg': 'CAUSE(cause="DESMATAMENTO_CR",city="Labrea",location="deter-amz",month="10",state="AM",year="2020")',
'delex_msg': 'CAUSE(cause=CAUSE,city=CITY,location=LOCATION,month=MONTH,state=STATE,year=YEAR)'
},
{
'intent': 'DAILY_ALERT',
'attributes': {'year': 2020, 'month': 10, 'day': 17, 'location': 'deter-amz', 'state': 'AM', 'city': 'Labrea', 'area': 6.075527808385738, 'daily_accumulation': 7},
'str_msg': 'DAILY_ALERT(area="6.075527808385738",city="Labrea",daily_accumulation="7",day="17",location="deter-amz",month="10",state="AM",year="2020")',
'delex_msg': 'DAILY_ALERT(area=AREA,city=CITY,daily_accumulation=DAILY_ACCUMULATION,day=DAY,location=LOCATION,month=MONTH,state=STATE,year=YEAR)'
},
{
'intent': 'TOTAL_DEFORESTATION',
'attributes': {'year': 2020, 'month': 10, 'location': 'deter-amz', 'state': 'AM', 'city': 'Labrea', 'area': 36.66000131810721},
'str_msg': 'TOTAL_DEFORESTATION(area="36.66000131810721",city="Labrea",location="deter-amz",month="10",state="AM",year="2020")',
'delex_msg': 'TOTAL_DEFORESTATION(area=AREA,city=CITY,location=LOCATION,month=MONTH,state=STATE,year=YEAR)'
}
]
entry, template, paragraphs = gen.generate(messages, strategy='random')
text = []
for p in paragraphs:
text.append(' '.join(p))
print('\n'.join(text))
date = datetime.now() - timedelta(days=60)
alerts = []
for last_day in sorted(DeforestationDeterINPE.objects(date__gte=date).distinct('date'))[:5]:
cities, ucs = {}, {}
cases = DeforestationDeterINPE.objects(date=last_day)
for case in cases:
if case.cause in ['DESMATAMENTO_CR', 'DESMATAMENTO_VEG', 'MINERACAO']:
if case.UC:
if (last_day, case.state, case.city, case.UC) not in ucs:
ucs[(last_day, case.state, case.city, case.UC)] = 0
ucs[(last_day, case.state, case.city, case.UC)] += case.uc_area
else:
if (last_day, case.state, case.city, None) not in cities:
cities[(last_day, case.state, case.city, None)] = 0
cities[(last_day, case.state, case.city, None)] += case.city_area
cities = sorted(cities.items(), key=lambda x:x[1], reverse=True)[:3]
ucs = sorted(ucs.items(), key=lambda x:x[1], reverse=True)[:3]
for city in cities:
alerts.append(city[0])
for uc in ucs:
alerts.append(uc[0])
for alert in alerts[:5]:
date, state, city, uc = alert
messages, _ = daily_content.content(date, state, city, uc)
print('\n'.join([msg['str_msg'] for msg in messages]))
print()
try:
entry, template, paragraphs = gen.generate(messages, strategy='random')
text = []
for p in paragraphs:
text.append(' '.join(p))
print('\n'.join(text))
except:
print('ERROR')
print(10 * '*', '\n')
###Output
CAUSE(cause="DESMATAMENTO_CR",city="Porto Velho",location="deter-amz",month="6",state="RO",year="2020")
DAILY_ALERT(area="6.7931018036451825",city="Porto Velho",daily_accumulation="9",day="30",location="deter-amz",month="6",state="RO",year="2020")
TOTAL_DEFORESTATION(area="46.0882973437658",city="Porto Velho",location="deter-amz",month="6",state="RO",year="2020")
No dia 30 de junho de 2020, o INPE registrou alertas de desmatamento somando 6,79 km² em Porto Velho / Rondônia, que acumula 9 dias com alertas no mês. O principal tipo de desmatamento foi o desmatamento com solo exposto, que deixa o solo sem vegetação. No total, 46,09 km² foram desmatados em Porto Velho no mês de junho.
**********
CAUSE(cause="DESMATAMENTO_CR",city="Sao Felix do Xingu",location="deter-amz",month="6",state="PA",year="2020")
DAILY_ALERT(area="6.161107067840259",city="Sao Felix do Xingu",daily_accumulation="11",day="30",location="deter-amz",month="6",state="PA",year="2020")
TOTAL_DEFORESTATION(area="67.97554469342512",city="Sao Felix do Xingu",location="deter-amz",month="6",state="PA",year="2020")
No dia 30 de junho de 2020, o Instituto Nacional de Pesquisas Espaciais (INPE) registrou alertas de desmatamento somando 6,16 km² em Sao Felix do Xingu / Pará, que acumula 11 dias com alertas no mês. O desmatamento com solo exposto, que deixa o solo sem vegetação, foi a principal causa de desmatamento. Sao Felix do Xingu acumula 67,98 km² em junho.
**********
CAUSE(cause="DESMATAMENTO_CR",city="Colniza",location="deter-amz",month="6",state="MT",year="2020")
DAILY_ALERT(area="5.3270460286338235",city="Colniza",daily_accumulation="8",day="30",location="deter-amz",month="6",state="MT",year="2020")
TOTAL_DEFORESTATION(area="20.680526573003284",city="Colniza",location="deter-amz",month="6",state="MT",year="2020")
Segundo o INPE, Colniza / Mato Grosso teve alertas de desmatamento no dia 30 de junho de 2020 que somaram 5,33 km². No mês de junho já foram desmatados 20,68 km² de floresta em Colniza. A principal causa do alerta diário gerado pelo Instituto para Colniza é o desmatamento com solo exposto, que acaba com toda vegetação do local.
**********
CAUSE(cause="MINERACAO",city="Novo Progresso",location="deter-amz",month="6",state="PA",uc="FLORESTA NACIONAL DO JAMANXIM",year="2020")
DAILY_ALERT(area="6.928084182841216",city="Novo Progresso",daily_accumulation="9",day="30",location="deter-amz",month="6",state="PA",uc="FLORESTA NACIONAL DO JAMANXIM",year="2020")
TOTAL_DEFORESTATION(area="25.703230927311303",city="Novo Progresso",location="deter-amz",month="6",state="PA",uc="FLORESTA NACIONAL DO JAMANXIM",year="2020")
No dia 30 de junho de 2020, o INPE divulgou alertas de desmatamento que somam 6,93 km² na FLORESTA NACIONAL DO JAMANXIM / Pará, acumulando 9 dias com alertas no mês. O alerta gerado indica degradação por causa de mineração, devido à ação de garimpos na floresta. Em junho, a FLORESTA NACIONAL DO JAMANXIM acumula 25,70 km² de área desmatada.
**********
CAUSE(cause="DESMATAMENTO_CR",city="Altamira",location="deter-amz",month="6",state="PA",uc="RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO",year="2020")
DAILY_ALERT(area="1.652996917021529",city="Altamira",daily_accumulation="7",day="30",location="deter-amz",month="6",state="PA",uc="RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO",year="2020")
TOTAL_DEFORESTATION(area="5.205300899521609",city="Altamira",location="deter-amz",month="6",state="PA",uc="RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO",year="2020")
Em 30 de junho de 2020, o INPE registrou alertas de desmatamento na RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO / Pará que somaram 1,65 km², acumulando 7 dias com alertas no mês. O principal motivo do alerta diário gerado pelo INPE para a RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO foi o desmatamento de solo exposto, que deixa a floresta sem nenhuma vegetação. A RESERVA BIOLÓGICA NASCENTES DA SERRA DO CACHIMBO soma 5,21 km² de desmatamento no mês analisado.
**********
|
examples/tutorials/Part 07 - Federated Learning with Federated Dataset.ipynb | ###Markdown
Part 7 - Federated Learning with FederatedDatasetHere we introduce a new tool for using federated datasets. We have created a `FederatedDataset` class which is intended to be used like the PyTorch Dataset class, and is given to a federated data loader `FederatedDataLoader` which will iterate on it in a federated fashion.Authors:- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)- Théo Ryffel - GitHub: [@LaRiffle](https://github.com/LaRiffle) We use the sandbox that we discovered last lesson
###Code
import torch as th
import syft as sy
sy.create_sandbox(globals(), verbose=False)
###Output
Setting up Sandbox...
Done!
###Markdown
Then search for a dataset
###Code
boston_data = grid.search("#boston", "#data", verbose=False, return_counter=False)
boston_target = grid.search("#boston", "#target", verbose=False, return_counter=False)
###Output
_____no_output_____
###Markdown
We load a model and an optimizer
###Code
n_features = boston_data['alice'][0].shape[1]
n_targets = 1
model = th.nn.Linear(n_features, n_targets)
###Output
_____no_output_____
###Markdown
Here we cast the data fetched in a `FederatedDataset`. See the workers which hold part of the data.
###Code
# Cast the result in BaseDatasets
datasets = []
for worker in boston_data.keys():
dataset = sy.BaseDataset(boston_data[worker][0], boston_target[worker][0])
datasets.append(dataset)
# Build the FederatedDataset object
dataset = sy.FederatedDataset(datasets)
print(dataset.workers)
optimizers = {}
for worker in dataset.workers:
optimizers[worker] = th.optim.Adam(params=model.parameters(),lr=1e-2)
###Output
['bob', 'theo', 'jason', 'alice', 'andy', 'jon']
###Markdown
We put it in a `FederatedDataLoader` and specify options
###Code
train_loader = sy.FederatedDataLoader(dataset, batch_size=32, shuffle=False, drop_last=False)
###Output
_____no_output_____
###Markdown
And finally we iterate over epochs. You can see how similar this is compared to pure and local PyTorch training!
###Code
epochs = 50
for epoch in range(1, epochs + 1):
loss_accum = 0
for batch_idx, (data, target) in enumerate(train_loader):
model.send(data.location)
optimizer = optimizers[data.location.id]
optimizer.zero_grad()
pred = model(data)
loss = ((pred.view(-1) - target)**2).mean()
loss.backward()
optimizer.step()
model.get()
loss = loss.get()
loss_accum += float(loss)
if batch_idx % 8 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tBatch loss: {:.6f}'.format(
epoch, batch_idx, len(train_loader),
100. * batch_idx / len(train_loader), loss.item()))
print('Total loss', loss_accum)
###Output
Train Epoch: 1 [0/16 (0%)] Batch loss: 3156.205322
Train Epoch: 1 [8/16 (50%)] Batch loss: 59.617584
Train Epoch: 1 [16/16 (100%)] Batch loss: 879.110596
Total loss 25813.378814697266
Train Epoch: 2 [0/16 (0%)] Batch loss: 1049.295288
Train Epoch: 2 [8/16 (50%)] Batch loss: 142.419449
Train Epoch: 2 [16/16 (100%)] Batch loss: 385.578247
Total loss 12938.65673828125
Train Epoch: 3 [0/16 (0%)] Batch loss: 1212.881592
Train Epoch: 3 [8/16 (50%)] Batch loss: 61.300438
Train Epoch: 3 [16/16 (100%)] Batch loss: 174.486832
Total loss 9758.002326965332
Train Epoch: 4 [0/16 (0%)] Batch loss: 1113.664429
Train Epoch: 4 [8/16 (50%)] Batch loss: 72.734505
Train Epoch: 4 [16/16 (100%)] Batch loss: 156.222260
Total loss 8611.730628967285
Train Epoch: 5 [0/16 (0%)] Batch loss: 704.376953
Train Epoch: 5 [8/16 (50%)] Batch loss: 108.787155
Train Epoch: 5 [16/16 (100%)] Batch loss: 59.056713
Total loss 5583.505939483643
Train Epoch: 6 [0/16 (0%)] Batch loss: 588.390381
Train Epoch: 6 [8/16 (50%)] Batch loss: 88.086395
Train Epoch: 6 [16/16 (100%)] Batch loss: 50.938488
Total loss 3429.2552757263184
Train Epoch: 7 [0/16 (0%)] Batch loss: 493.895111
Train Epoch: 7 [8/16 (50%)] Batch loss: 85.962540
Train Epoch: 7 [16/16 (100%)] Batch loss: 62.055855
Total loss 2460.794744491577
Train Epoch: 8 [0/16 (0%)] Batch loss: 275.886078
Train Epoch: 8 [8/16 (50%)] Batch loss: 96.523705
Train Epoch: 8 [16/16 (100%)] Batch loss: 71.164154
Total loss 1674.9585437774658
Train Epoch: 9 [0/16 (0%)] Batch loss: 124.773972
Train Epoch: 9 [8/16 (50%)] Batch loss: 82.355904
Train Epoch: 9 [16/16 (100%)] Batch loss: 106.273682
Total loss 1359.2087535858154
Train Epoch: 10 [0/16 (0%)] Batch loss: 53.476452
Train Epoch: 10 [8/16 (50%)] Batch loss: 65.665939
Train Epoch: 10 [16/16 (100%)] Batch loss: 105.537766
Total loss 1321.780421257019
Train Epoch: 11 [0/16 (0%)] Batch loss: 23.041601
Train Epoch: 11 [8/16 (50%)] Batch loss: 58.526306
Train Epoch: 11 [16/16 (100%)] Batch loss: 82.434570
Total loss 1344.7198657989502
Train Epoch: 12 [0/16 (0%)] Batch loss: 33.982090
Train Epoch: 12 [8/16 (50%)] Batch loss: 51.923664
Train Epoch: 12 [16/16 (100%)] Batch loss: 71.666611
Total loss 1420.0852355957031
Train Epoch: 13 [0/16 (0%)] Batch loss: 53.072071
Train Epoch: 13 [8/16 (50%)] Batch loss: 46.088745
Train Epoch: 13 [16/16 (100%)] Batch loss: 62.153725
Total loss 1414.037546157837
Train Epoch: 14 [0/16 (0%)] Batch loss: 61.432991
Train Epoch: 14 [8/16 (50%)] Batch loss: 44.375298
Train Epoch: 14 [16/16 (100%)] Batch loss: 51.755058
Total loss 1325.2543201446533
Train Epoch: 15 [0/16 (0%)] Batch loss: 57.004494
Train Epoch: 15 [8/16 (50%)] Batch loss: 45.408813
Train Epoch: 15 [16/16 (100%)] Batch loss: 42.930199
Total loss 1207.832085609436
Train Epoch: 16 [0/16 (0%)] Batch loss: 41.909943
Train Epoch: 16 [8/16 (50%)] Batch loss: 47.311516
Train Epoch: 16 [16/16 (100%)] Batch loss: 34.088875
Total loss 1097.0017757415771
Train Epoch: 17 [0/16 (0%)] Batch loss: 28.952778
Train Epoch: 17 [8/16 (50%)] Batch loss: 50.566902
Train Epoch: 17 [16/16 (100%)] Batch loss: 26.027718
Total loss 1030.4123096466064
Train Epoch: 18 [0/16 (0%)] Batch loss: 25.858299
Train Epoch: 18 [8/16 (50%)] Batch loss: 53.949982
Train Epoch: 18 [16/16 (100%)] Batch loss: 19.568899
Total loss 1007.2871789932251
Train Epoch: 19 [0/16 (0%)] Batch loss: 32.356831
Train Epoch: 19 [8/16 (50%)] Batch loss: 55.524818
Train Epoch: 19 [16/16 (100%)] Batch loss: 15.204272
Total loss 1010.2519159317017
Train Epoch: 20 [0/16 (0%)] Batch loss: 44.068043
Train Epoch: 20 [8/16 (50%)] Batch loss: 55.615528
Train Epoch: 20 [16/16 (100%)] Batch loss: 12.976721
Total loss 1019.6085443496704
Train Epoch: 21 [0/16 (0%)] Batch loss: 54.451321
Train Epoch: 21 [8/16 (50%)] Batch loss: 54.670746
Train Epoch: 21 [16/16 (100%)] Batch loss: 12.432034
Total loss 1015.7627086639404
Train Epoch: 22 [0/16 (0%)] Batch loss: 59.559566
Train Epoch: 22 [8/16 (50%)] Batch loss: 52.910492
Train Epoch: 22 [16/16 (100%)] Batch loss: 13.123858
Total loss 995.332706451416
Train Epoch: 23 [0/16 (0%)] Batch loss: 58.577301
Train Epoch: 23 [8/16 (50%)] Batch loss: 51.006618
Train Epoch: 23 [16/16 (100%)] Batch loss: 14.529527
Total loss 965.210844039917
Train Epoch: 24 [0/16 (0%)] Batch loss: 52.942257
Train Epoch: 24 [8/16 (50%)] Batch loss: 49.282093
Train Epoch: 24 [16/16 (100%)] Batch loss: 16.257212
Total loss 933.2088041305542
Train Epoch: 25 [0/16 (0%)] Batch loss: 45.461979
Train Epoch: 25 [8/16 (50%)] Batch loss: 47.733994
Train Epoch: 25 [16/16 (100%)] Batch loss: 17.965858
Total loss 906.1298294067383
Train Epoch: 26 [0/16 (0%)] Batch loss: 38.472519
Train Epoch: 26 [8/16 (50%)] Batch loss: 46.439308
Train Epoch: 26 [16/16 (100%)] Batch loss: 19.301504
Total loss 886.8441505432129
Train Epoch: 27 [0/16 (0%)] Batch loss: 33.182323
Train Epoch: 27 [8/16 (50%)] Batch loss: 45.453724
Train Epoch: 27 [16/16 (100%)] Batch loss: 20.096941
Total loss 874.4864692687988
Train Epoch: 28 [0/16 (0%)] Batch loss: 29.860424
Train Epoch: 28 [8/16 (50%)] Batch loss: 44.784172
Train Epoch: 28 [16/16 (100%)] Batch loss: 20.336018
Total loss 866.9493885040283
Train Epoch: 29 [0/16 (0%)] Batch loss: 28.156925
Train Epoch: 29 [8/16 (50%)] Batch loss: 44.447277
Train Epoch: 29 [16/16 (100%)] Batch loss: 20.098642
Total loss 862.1956396102905
Train Epoch: 30 [0/16 (0%)] Batch loss: 27.574831
Train Epoch: 30 [8/16 (50%)] Batch loss: 44.402184
Train Epoch: 30 [16/16 (100%)] Batch loss: 19.542740
Total loss 858.7245779037476
Train Epoch: 31 [0/16 (0%)] Batch loss: 27.749989
Train Epoch: 31 [8/16 (50%)] Batch loss: 44.542336
Train Epoch: 31 [16/16 (100%)] Batch loss: 18.831099
Total loss 855.648473739624
Train Epoch: 32 [0/16 (0%)] Batch loss: 28.452106
Train Epoch: 32 [8/16 (50%)] Batch loss: 44.748314
Train Epoch: 32 [16/16 (100%)] Batch loss: 18.099174
Total loss 852.4422521591187
Train Epoch: 33 [0/16 (0%)] Batch loss: 29.483768
Train Epoch: 33 [8/16 (50%)] Batch loss: 44.906120
Train Epoch: 33 [16/16 (100%)] Batch loss: 17.449026
Total loss 848.7872800827026
Train Epoch: 34 [0/16 (0%)] Batch loss: 30.614393
Train Epoch: 34 [8/16 (50%)] Batch loss: 44.936432
Train Epoch: 34 [16/16 (100%)] Batch loss: 16.937519
Total loss 844.6193780899048
Train Epoch: 35 [0/16 (0%)] Batch loss: 31.597649
Train Epoch: 35 [8/16 (50%)] Batch loss: 44.816074
Train Epoch: 35 [16/16 (100%)] Batch loss: 16.579081
Total loss 840.1143188476562
Train Epoch: 36 [0/16 (0%)] Batch loss: 32.248840
Train Epoch: 36 [8/16 (50%)] Batch loss: 44.563957
Train Epoch: 36 [16/16 (100%)] Batch loss: 16.359095
Total loss 835.5920009613037
Train Epoch: 37 [0/16 (0%)] Batch loss: 32.505077
Train Epoch: 37 [8/16 (50%)] Batch loss: 44.221420
Train Epoch: 37 [16/16 (100%)] Batch loss: 16.246435
Total loss 831.391242980957
Train Epoch: 38 [0/16 (0%)] Batch loss: 32.420982
Train Epoch: 38 [8/16 (50%)] Batch loss: 43.835026
Train Epoch: 38 [16/16 (100%)] Batch loss: 16.205231
Total loss 827.7498416900635
Train Epoch: 39 [0/16 (0%)] Batch loss: 32.119923
Train Epoch: 39 [8/16 (50%)] Batch loss: 43.442734
Train Epoch: 39 [16/16 (100%)] Batch loss: 16.202524
Total loss 824.7718410491943
Train Epoch: 40 [0/16 (0%)] Batch loss: 31.735329
Train Epoch: 40 [8/16 (50%)] Batch loss: 43.071121
Train Epoch: 40 [16/16 (100%)] Batch loss: 16.211815
Total loss 822.4543037414551
Train Epoch: 41 [0/16 (0%)] Batch loss: 31.369652
Train Epoch: 41 [8/16 (50%)] Batch loss: 42.737549
Train Epoch: 41 [16/16 (100%)] Batch loss: 16.214901
Total loss 820.7176504135132
Train Epoch: 42 [0/16 (0%)] Batch loss: 31.079966
Train Epoch: 42 [8/16 (50%)] Batch loss: 42.451820
Train Epoch: 42 [16/16 (100%)] Batch loss: 16.201923
Total loss 819.4225702285767
Train Epoch: 43 [0/16 (0%)] Batch loss: 30.882053
Train Epoch: 43 [8/16 (50%)] Batch loss: 42.218243
Train Epoch: 43 [16/16 (100%)] Batch loss: 16.169653
Total loss 818.3853406906128
Train Epoch: 44 [0/16 (0%)] Batch loss: 30.762981
Train Epoch: 44 [8/16 (50%)] Batch loss: 42.036167
Train Epoch: 44 [16/16 (100%)] Batch loss: 16.119072
Total loss 817.4064502716064
Train Epoch: 45 [0/16 (0%)] Batch loss: 30.695518
Train Epoch: 45 [8/16 (50%)] Batch loss: 41.900177
Train Epoch: 45 [16/16 (100%)] Batch loss: 16.052885
Total loss 816.3154420852661
Train Epoch: 46 [0/16 (0%)] Batch loss: 30.650360
|
Semana 5/Clase funciones.ipynb | ###Markdown
Functions. Functions are basically a series of statements that can be referenced under a single name. Functions are extremely useful because they let you store code that you want to use repeatedly, without having to write it over and over again. We can distinguish 2 broad types of functions: Python's predefined functions (built-in functions); we have already used several of them, such as ```print()``` or ```len()```. User-defined functions (what we will see in this class). Let's consider the kittens example
###Code
gatitos = 0
assert gatitos >= 0
if gatitos == 1:
print("Que lindo gatito!")
elif gatitos >1:
print("Que lindos gatitos!")
else:
print("Deberia adoptar un gatito")
###Output
Deberia adoptar un gatito
###Markdown
What would happen if I want to run our program again because our number of kittens changed? I would have to define the variable ```gatitos``` again and again. What if I want to add more conditions to my program? What if I have several friends who also want to check how many kittens they have? Functions help us solve all of these problems: the repetition problems that come from trying a different number of kittens, or from wanting to check several times (as in the case of my friends) how many kittens each person has. They even help if we want this control flow to be an input for another program (this last point is called modularity). Next we will look at user-defined functions. Writing a function: Let's imagine we want to add two numbers
###Code
a = 1
b = 2
c = a + b
c
###Output
_____no_output_____
###Markdown
As a function, this would look like this:```pythondef suma_numeros(a, b): ''' Suma dos números Insumo (input): a: número b: número Producto (output): resultado: un número ''' resultado = a + b return resultado```Let's look at each part of this function:```def ``` is the keyword used to def-ine a function. It is followed by the name of the function, which in this case is ``` suma_numeros```. - After the function name come, in parentheses, the function's parameters. The parameters are the _inputs_ of the function. - Then comes the ``` : ```. The ``` def```, the parameters and the colon together are called the function header. - Next comes the function's documentation: here we describe what our function does, what it needs as input (and, above all, of which types), and what it produces and of which type. - This is followed by the body of the function, the instructions for how we want the parameters to be used inside our function, or what we want to turn them into. Note that here we are only giving the instructions: when we run the function definition, these instructions are associated with the function's name, but no code is executed (and no result is produced).- Finally, you will see a statement that starts with ``` return```. This keyword precedes whatever we want the function to give back to us. Seen as a diagram, our parameters or inputs `a` and `b` are the input `x` that we give to our function `suma_numeros`, which will produce an output `y` (called resultado).The following diagram shows the inputs a and b going into the function and the result y coming out. Defining the function, we have:
###Code
def suma_numeros(a, b):
'''
Suma dos números
Insumo (input):
a: número
b: número
Producto (output):
resultado: un número
'''
resultado = a + b
return resultado
###Output
_____no_output_____
###Markdown
And now, to use it, we do the following:
###Code
suma_numeros(1,2)
a = 1
b = 2
suma_numeros(a,b)
10 + suma_numeros(1,2)
c = suma_numeros(1,2)
c
a = 1
b = 2
suma_numeros(a + 5, b + 7)
a = 1
b = 2
suma_numeros(a ,suma_numeros(a,b))
###Output
_____no_output_____
###Markdown
We have made several calls to the function (function calls), and we have seen that we can do it in different ways! One important point: imagine that we define a function like the one above, but instead of return we use print.
###Code
def print_suma_numeros(a, b):
'''
Suma dos números
Insumo (input):
a: número
b: número
Producto (output):
resultado: un número
'''
resultado = a + b
print(resultado)
###Output
_____no_output_____
###Markdown
When we call the function, we see that it returns the following:
###Code
print_suma_numeros(1, 2)
###Output
3
###Markdown
Now, even though the result looks the "same", we will see that there are differences between using print and using return.
###Code
nuevo_c = print_suma_numeros(1, 2) ## Aquí asignamos al resultado de usar la función con print, una variable.
print(type(c)) ## Vemos qué tipo retorna el llamado a la función original
print(type(nuevo_c)) ##Vemos qué retorna el tipo de la variable
###Output
3
<class 'int'>
<class 'NoneType'>
###Markdown
We see that when we use print, the type of the result is ```None```. We have to be careful about using print!!! Especially if we want to use the result of a function as part of a larger program, since we could get an error, as the short sketch right below shows.
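A minimal sketch of that error, reusing `suma_numeros` and `print_suma_numeros` as defined above (the commented-out line is the one that fails):

```python
resultado_return = suma_numeros(1, 2)        # returns 3, prints nothing
resultado_print = print_suma_numeros(1, 2)   # prints 3, but returns None

resultado_return + 10                        # works: 13
# resultado_print + 10                       # TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```

With `return`, the value can keep flowing through the rest of the program; with `print`, it is only displayed on screen. Back to our initial kittens example: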
###Code
def contar_gatitos(gatitos):
'''
saluda gatitos dependiendo de cuantos tengas
Input: un número entero
Output: nada, solo saluda a mi(s) gatitos
'''
assert gatitos >= 0
if gatitos == 1:
print("Que lindo gatito!")
elif gatitos >1:
print("Que lindos gatitos!")
else:
print("Deberia adoptar un gatito")
contar_gatitos(0)
###Output
Deberia adoptar un gatito
###Markdown
Functions without parameters. We can also define functions without parameters, for example:
###Code
def ropa_limpia():
print("Hola, esta función te pregunta si aún tienes ropa limpia")
ropa_limpia()
###Output
Hola, esta función te pregunta si aún tienes ropa limpia
###Markdown
Difference between parameters and arguments: Before moving on, we need to distinguish between two closely related concepts: parameters and arguments. **Parameters** are the names of the variables at the time the function is defined; in the case of `suma_numeros` they are `a` and `b`. **Arguments** are the values that the parameters take when we call a function, for example when `a = 1` and `b = 2`. In short, a function is defined with parameters and called with arguments. Positional arguments. Positional arguments are passed to the function call without being named. As the name suggests, they depend entirely on the order in which the parameters were placed when we created our function. The order in which we pass our arguments **matters**. Let's imagine a function that raises a given number to a power:
###Code
def eleva_potencia(a,b):
'''
Eleva número "a" a la "b" potencia.
Insumo (input):
a: número
b: número
Producto (output):
resultado: un número
'''
resultado = a**b
return resultado
eleva_potencia(2,3)
eleva_potencia(3,2)
###Output
_____no_output_____
###Markdown
Keyword arguments. Keyword arguments are passed to the function call with an indication of which parameters we want to set; in this case the order does not matter. In our eleva_potencia example:
###Code
eleva_potencia(a = 2, b = 3)
eleva_potencia(b = 3, a = 2)
###Output
_____no_output_____
###Markdown
Something very important to keep in mind is that when we mix positional and keyword arguments in a call, the positional arguments are specified **first**, and the keyword arguments come after them.
###Code
def eleva_potencia_suma(a,b,c):
'''
Eleva número "a" a la "b" potencia.
Insumo (input):
a: número
b: número
Producto (output):
resultado: un número
'''
resultado = a**b
return resultado + c
eleva_potencia_suma(1, 2, c = 5)
eleva_potencia_suma(1, b = 2, 5)
eleva_potencia_suma(1, 2, b = 5)
###Output
_____no_output_____ |
07-census-soma.ipynb | ###Markdown
Census Data* **Inspo:** https://www.nytimes.com/2019/05/01/upshot/all-white-neighborhoods-are-dwindling-as-america-grows-more-diverse.html* **Datasets:** - `1980.csv` - table for 1980 - `1980.txt` - codebook - same for `1990`, `2000`, `2010`* **Source:** Social Explorer, which you can access through https://library.columbia.edu* **Topics:** - LOL @ the US government - renaming columns - a real fun way to import data - a real fun way to rename columns - census data
###Code
import pandas as pd
%matplotlib inline
df = pd.read_csv("1980.csv", encoding='latin-1')
df.head(2)
df.columns = df.columns.str.replace("SE_", "")
df.head(2)
df.rename(columns={
'T012_001': 'Total Population',
'T012_002': 'White'
})
# sep=': ' means "our separator is a colon and some spcae"
# skiprows=41 means "skip the first 41 rows, we don't care about them"
# nrows=13 means "give us only 12 rows, ignor ethe rest"
# names=['code', 'name'] means "ignore any headers and use these instead"
codes = pd.read_csv("1980.txt", sep=': ', skiprows=41, nrows=13, names=['code', 'name'])
codes
# Remove blank space from both sides of our name
codes['name'] = codes.name.str.strip()
# Zip the 'code' and 'name' columns into a {code: name} dictionary we can use with .rename
codes_dict = dict(zip(codes.code, codes.name))
codes_dict
df = df.rename(columns=codes_dict)
df.head()
# Make a new column based on what percent of people are white
df['pct_white'] = df['Total Population: White'] / df['Total Population']
df.head()
###Output
_____no_output_____
###Markdown
Now let's make our graph!!!!
###Code
df.pct_white.hist(bins=50)
import numpy as np
weights = np.ones_like(df.pct_white.dropna()) / len(df.pct_white.dropna())
df.pct_white.dropna().hist(bins=50, weights=weights)
###Output
_____no_output_____ |
trax/models/reformer/image_generation.ipynb | ###Markdown
Copyright 2020 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
 Reformer: Image Generation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/image_generation.ipynb) This notebook was designed to run on TPU. To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
###Code
# Install JAX. This custom build raises the TPU timeout threshold, because the
# default limit of 2 minutes is too short for sampling very long sequences.
!gsutil cp gs://trax-ml/reformer/jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl .
!gsutil cp gs://trax-ml/reformer/jax-0.1.59-cp36-none-manylinux2010_x86_64.whl .
!pip install --upgrade -q ./jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl
!pip install --upgrade -q ./jax-0.1.59-cp36-none-manylinux2010_x86_64.whl
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q gin git+https://github.com/google/[email protected]
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.models.beam_search import Search
from trax.supervised import inputs
import numpy as np
import jax.numpy as jnp
from scipy.special import softmax
%matplotlib inline
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Load example data and model
###Code
# Normally we train on the full imagenet64 training set, which is quite large so
# we won't be loading it from this notebook. Instead, let's just load a few PNG
# images to use in our data pipeline.
DATA = []
for i in range(8):
img = plt.imread(GFile('gs://trax-ml/reformer/img{}.png'.format(i), 'rb'))
# Convert from RGBA floating-point to RGB integer representation.
img = np.asarray(img[:, :, :3] * 255, dtype=np.int32)
DATA.append(img)
# We can examine one of the images to make sure we've loaded it correctly.
plt.figure(figsize=(1.5, 1.5))
plt.axis('off')
plt.imshow(DATA[0])
# We'll be using a pre-trained 12-layer Reformer model.
# First, load the config (which sets all needed hyperparameters).
!gsutil cp gs://trax-ml/reformer/imgnet64/config.gin ./config.gin
gin.parse_config_file('./config.gin')
# Now we construct a ReformerLM instance and load the pre-trained weights.
# The 'predict' mode configures the model to accept single tokens at a time,
# instead of feeding in a complete image all at once.
model_infer = trax.models.ReformerLM(mode='predict')
model_infer.init_from_file(
'gs://trax-ml/reformer/imgnet64/model.pkl', weights_only=True)
###Output
_____no_output_____
###Markdown
Sample from the model Now we're ready to sample from the pre-trained Reformer model. Unlike during training, sampling processes the images one pixel and channel value at a time. The TPU colab runtime has 8 cores so we can sample 8 images in parallel.
###Code
sampling_decoder = Search(
trax.models.ReformerLM,
model_infer.weights,
temperature=1.0,
max_decode_len=32*64*3,
)
###Output
_____no_output_____
###Markdown
Sampling is an inherently serial process and will take up to 9 minutes to run. A good chunk of that time will be spent on JIT-compiling the code, though, so the code cell below will finish faster when re-run for a second time.
###Code
flat_prompt = []
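# Use the top half (first 32 of 64 rows) of each image as the conditioning prompt;
# the model will then sample the remaining bottom half.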
for i, img in enumerate(DATA[:trax.fastmath.device_count()]):
img = img.reshape((-1, 64, 3))[:32, :, :]
flat_prompt.append(img.reshape((-1,)))
prompt = np.stack(flat_prompt, 0)
print("Prompt:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(prompt[i].reshape((-1, 64, 3)), aspect='equal')
plt.show()
seqs, scores = sampling_decoder.decode(targets_prefix=prompt, batch_size=8)
print("Sampled completions:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(seqs[i, -1].reshape((-1, 64, 3)), aspect='equal')
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
img = jnp.concatenate([prompt[i], seqs[i, -1]], -1)
plt.imshow(img.reshape((-1, 64, 3)), aspect='equal')
###Output
_____no_output_____
###Markdown
Copyright 2020 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
 Reformer: Image Generation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/image_generation.ipynb) This notebook was designed to run on TPU. To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
###Code
# Install JAX. This custom build raises the TPU timeout threshold, because the
# default limit of 2 minutes is too short for sampling very long sequences.
!gsutil cp gs://trax-ml/reformer/jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl .
!gsutil cp gs://trax-ml/reformer/jax-0.1.59-cp36-none-manylinux2010_x86_64.whl .
!pip install --upgrade -q ./jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl
!pip install --upgrade -q ./jax-0.1.59-cp36-none-manylinux2010_x86_64.whl
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q gin git+https://github.com/google/[email protected]
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.models.beam_search import Search
from trax.supervised import inputs
import numpy as np
import jax.numpy as jnp
from scipy.special import softmax
%matplotlib inline
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Load example data and model
###Code
# Normally we train on the full imagenet64 training set, which is quite large so
# we won't be loading it from this notebook. Instead, let's just load a few PNG
# images to use in our data pipeline.
DATA = []
for i in range(8):
img = plt.imread(GFile('gs://trax-ml/reformer/img{}.png'.format(i), 'rb'))
# Convert from RGBA floating-point to RGB integer representation.
img = np.asarray(img[:, :, :3] * 255, dtype=np.int32)
DATA.append(img)
# We can examine one of the images to make sure we've loaded it correctly.
plt.figure(figsize=(1.5, 1.5))
plt.axis('off')
plt.imshow(DATA[0])
# We'll be using a pre-trained 12-layer Reformer model.
# First, load the config (which sets all needed hyperparameters).
!gsutil cp gs://trax-ml/reformer/imgnet64/config.gin ./config.gin
gin.parse_config_file('./config.gin')
# Now we construct a ReformerLM instance and load the pre-trained weights.
# The 'predict' mode configures the model to accept single tokens at a time,
# instead of feeding in a complete image all at once.
model_infer = trax.models.ReformerLM(mode='predict')
model_infer.init_from_file(
'gs://trax-ml/reformer/imgnet64/model.pkl', weights_only=True)
###Output
_____no_output_____
###Markdown
Sample from the model Now we're ready to sample from the pre-trained Reformer model. Unlike during training, sampling processes the images one pixel and channel value at a time. The TPU colab runtime has 8 cores so we can sample 8 images in parallel.
###Code
sampling_decoder = Search(
trax.models.ReformerLM,
model_infer.weights,
temperature=1.0,
max_decode_len=32*64*3,
)
###Output
_____no_output_____
###Markdown
Sampling is an inherently serial process and will take up to 9 minutes to run. A good chunk of that time will be spent on JIT-compiling the code, though, so the code cell below will finish faster when re-run for a second time.
###Code
flat_prompt = []
for i, img in enumerate(DATA[:trax.math.device_count()]):
img = img.reshape((-1, 64, 3))[:32, :, :]
flat_prompt.append(img.reshape((-1,)))
prompt = np.stack(flat_prompt, 0)
print("Prompt:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(prompt[i].reshape((-1, 64, 3)), aspect='equal')
plt.show()
seqs, scores = sampling_decoder.decode(targets_prefix=prompt, batch_size=8)
print("Sampled completions:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(seqs[i, -1].reshape((-1, 64, 3)), aspect='equal')
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
img = jnp.concatenate([prompt[i], seqs[i, -1]], -1)
plt.imshow(img.reshape((-1, 64, 3)), aspect='equal')
###Output
_____no_output_____
###Markdown
Copyright 2020 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
 Reformer: Image Generation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/image_generation.ipynb) This notebook was designed to run on TPU. To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
###Code
# Install JAX. This custom build raises the TPU timeout threshold, because the
# default limit of 2 minutes is too short for sampling very long sequences.
!gsutil cp gs://trax-ml/reformer/jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl .
!gsutil cp gs://trax-ml/reformer/jax-0.1.59-cp36-none-manylinux2010_x86_64.whl .
!pip install --upgrade -q ./jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl
!pip install --upgrade -q ./jax-0.1.59-cp36-none-manylinux2010_x86_64.whl
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q gin git+https://github.com/google/[email protected]
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.models.beam_search import Search
from trax.supervised import inputs
import numpy as onp
import jax.numpy as np
from scipy.special import softmax
%matplotlib inline
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Load example data and model
###Code
# Normally we train on the full imagenet64 training set, which is quite large so
# we won't be loading it from this notebook. Instead, let's just load a few PNG
# images to use in our data pipeline.
DATA = []
for i in range(8):
img = plt.imread(GFile('gs://trax-ml/reformer/img{}.png'.format(i), 'rb'))
# Convert from RGBA floating-point to RGB integer representation.
img = onp.asarray(img[:, :, :3] * 255, dtype=onp.int32)
DATA.append(img)
# We can examine one of the images to make sure we've loaded it correctly.
plt.figure(figsize=(1.5, 1.5))
plt.axis('off')
plt.imshow(DATA[0])
# We'll be using a pre-trained 12-layer Reformer model.
# First, load the config (which sets all needed hyperparameters).
!gsutil cp gs://trax-ml/reformer/imgnet64/config.gin ./config.gin
gin.parse_config_file('./config.gin')
# Now we construct a ReformerLM instance and load the pre-trained weights.
# The 'predict' mode configures the model to accept single tokens at a time,
# instead of feeding in a complete image all at once.
model_infer = trax.models.ReformerLM(mode='predict')
model_infer.init_from_file(
'gs://trax-ml/reformer/imgnet64/model.pkl', weights_only=True)
###Output
_____no_output_____
###Markdown
Sample from the model Now we're ready to sample from the pre-trained Reformer model. Unlike during training, sampling processes the images one pixel and channel value at a time. The TPU colab runtime has 8 cores so we can sample 8 images in parallel.
###Code
sampling_decoder = Search(
trax.models.ReformerLM,
model_infer.weights,
temperature=1.0,
max_decode_len=32*64*3,
)
###Output
_____no_output_____
###Markdown
Sampling is an inherently serial process and will take up to 9 minutes to run. A good chunk of that time will be spent on JIT-compiling the code, though, so the code cell below will finish faster when re-run for a second time.
###Code
flat_prompt = []
for i, img in enumerate(DATA[:trax.math.device_count()]):
img = img.reshape((-1, 64, 3))[:32, :, :]
flat_prompt.append(img.reshape((-1,)))
prompt = onp.stack(flat_prompt, 0)
print("Prompt:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(prompt[i].reshape((-1, 64, 3)), aspect='equal')
plt.show()
seqs, scores = sampling_decoder.decode(targets_prefix=prompt, batch_size=8)
print("Sampled completions:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(seqs[i, -1].reshape((-1, 64, 3)), aspect='equal')
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
img = np.concatenate([prompt[i], seqs[i, -1]], -1)
plt.imshow(img.reshape((-1, 64, 3)), aspect='equal')
###Output
_____no_output_____
###Markdown
Copyright 2020 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
 Reformer: Image Generation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/image_generation.ipynb) This notebook was designed to run on TPU. To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
###Code
# Install JAX. This custom build raises the TPU timeout threshold, because the
# default limit of 2 minutes is too short for sampling very long sequences.
!gsutil cp gs://trax-ml/reformer/jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl .
!gsutil cp gs://trax-ml/reformer/jax-0.1.59-cp36-none-manylinux2010_x86_64.whl .
!pip install --upgrade -q ./jaxlib-0.1.39-cp36-none-manylinux2010_x86_64.whl
!pip install --upgrade -q ./jax-0.1.59-cp36-none-manylinux2010_x86_64.whl
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
!pip install --upgrade -q gin git+https://github.com/google/[email protected]
from tensorflow.compat.v1.io.gfile import GFile
import gin
import os
import jax
import trax
from trax.models.beam_search import Search
from trax.supervised import inputs
import numpy as np
import jax.numpy as jnp
from scipy.special import softmax
%matplotlib inline
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Load example data and model
###Code
# Normally we train on the full imagenet64 training set, which is quite large so
# we won't be loading it from this notebook. Instead, let's just load a few PNG
# images to use in our data pipeline.
DATA = []
for i in range(8):
img = plt.imread(GFile('gs://trax-ml/reformer/img{}.png'.format(i), 'rb'))
# Convert from RGBA floating-point to RGB integer representation.
img = np.asarray(img[:, :, :3] * 255, dtype=np.int32)
DATA.append(img)
# We can examine one of the images to make sure we've loaded it correctly.
plt.figure(figsize=(1.5, 1.5))
plt.axis('off')
plt.imshow(DATA[0])
# We'll be using a pre-trained 12-layer Reformer model.
# First, load the config (which sets all needed hyperparameters).
!gsutil cp gs://trax-ml/reformer/imgnet64/config.gin ./config.gin
gin.parse_config_file('./config.gin')
# Now we construct a ReformerLM instance and load the pre-trained weights.
# The 'predict' mode configures the model to accept single tokens at a time,
# instead of feeding in a complete image all at once.
model_infer = trax.models.ReformerLM(mode='predict')
model_infer.init_from_file(
'gs://trax-ml/reformer/imgnet64/model.pkl', weights_only=True)
###Output
_____no_output_____
###Markdown
Sample from the model Now we're ready to sample from the pre-trained Reformer model. Unlike during training, sampling processes the images one pixel and channel value at a time. The TPU colab runtime has 8 cores so we can sample 8 images in parallel.
###Code
sampling_decoder = Search(
trax.models.ReformerLM,
model_infer.weights,
temperature=1.0,
max_decode_len=32*64*3,
)
###Output
_____no_output_____
###Markdown
Sampling is an inherently serial process and will take up to 9 minutes to run. A good chunk of that time will be spent on JIT-compiling the code, though, so the code cell below will finish faster when re-run for a second time.
###Code
flat_prompt = []
for i, img in enumerate(DATA[:trax.fastmath.device_count()]):
img = img.reshape((-1, 64, 3))[:32, :, :]
flat_prompt.append(img.reshape((-1,)))
prompt = np.stack(flat_prompt, 0)
print("Prompt:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(prompt[i].reshape((-1, 64, 3)), aspect='equal')
plt.show()
seqs, scores = sampling_decoder.decode(targets_prefix=prompt, batch_size=8)
print("Sampled completions:")
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
plt.imshow(seqs[i, -1].reshape((-1, 64, 3)), aspect='equal')
plt.figure(figsize=(10, 10*8))
for i in range(prompt.shape[0]):
plt.subplot(1, 8, i+1)
plt.axis('off')
img = jnp.concatenate([prompt[i], seqs[i, -1]], -1)
plt.imshow(img.reshape((-1, 64, 3)), aspect='equal')
###Output
_____no_output_____ |
chapter16.ipynb | ###Markdown
Chapter 16. Multitapers
###Code
import numpy as np
import scipy.io
from matplotlib import pyplot as plt
#import basic functions from numpy that we'll need
from numpy import pi, sin, cos, exp, sqrt, log, log10, random, angle, real, imag
from numpy import zeros, ceil, floor, absolute, linspace
from numpy.fft import fft, ifft
from scipy import signal as sig
from scipy.signal.windows import dpss
from matplotlib.pyplot import *
###Output
_____no_output_____
###Markdown
Import EEG data
###Code
data = scipy.io.loadmat('sampleEEGdata')
EEGdata = data["EEG"][0,0]["data"]
EEGpnts = data["EEG"][0,0]["pnts"][0,0] #number of points in EEG data
EEGtimes = data["EEG"][0,0]["times"][0]
EEGsrate = float(data["EEG"][0,0]["srate"][0]) #make float for division purposes later
EEGtrials = data["EEG"][0,0]["trials"][0,0]
EEGnbchan = data["EEG"][0,0]["nbchan"][0,0]
EEGchanlocslabels=data["EEG"][0,0]["chanlocs"][0]["labels"]
###Output
_____no_output_____
###Markdown
Figure 16.1
###Code
# settings
channel2plot = 'O1';
timewin = 400. # in ms
timewinidx = int(np.round(timewin/(1000/EEGsrate)))
# compute tapers
# use dpss function from scipy.signal.windows
tapers = dpss(M=timewinidx,NW=5,Kmax=5)
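# tapers has shape (Kmax, M) = (5, timewinidx): five orthogonal Slepian (DPSS) windows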
# extract a bit of EEG data
d = sig.detrend(np.squeeze(EEGdata[EEGchanlocslabels==channel2plot,199:199+timewinidx,9]))
# plot eeg data from snippet
subplot(521)
plot(d)
axis('off')
for ii in range(5):
subplot(5,2,(2*ii)+2)
plot(tapers[ii,:])
axis('off')
tight_layout()
# multiply data by each taper and plot
for ii in range(5):
subplot(5,2,(2*(ii))+1)
plot(tapers[ii,:]*d)
axis('off')
# compute fft of the product (taper.*data) and plot
f=np.zeros([5,timewinidx]) * 1j
for ii in range(5):
subplot(5,2,(2*(ii))+2)
f[ii,:]=fft(tapers[ii,:]*d)
plot(absolute(f[ii,:int(timewinidx/2)])**2)
axis('off')
tight_layout()
# average over multitaper estimates and plot
subplot(5,2,2)
plot(np.mean(absolute(f[:,:timewinidx//2])**2,axis=0))
axis('off')
# compute hanning window for short-time FFT comparison
subplot(523)
hann = .5*(1-cos(2*pi*np.arange(timewinidx)/(timewinidx-1)));
plot(hann)
axis('off')
# apply hanning window to data
subplot(525)
plot(hann*d)
axis('off')
# compute short-time FFT
subplot(526)
ff=fft(hann*d);
plot((absolute(ff[:timewinidx//2])**2))
axis('off')
tight_layout()
###Output
_____no_output_____
###Markdown
Figure 16.2
###Code
channel2plot = 'P7'
frequency2plot = 15 # in Hz
timepoint2plot = 200 # ms
nw_product = 3 # determines the frequency smoothing, given a specified time window
times2save = np.arange(-300,1000+50,50)
baseline_range = np.array([-200, -0])
timewin = 400 # in ms
#define closest() to replace the use of Matlab's dsearchn() function
def closest(X, p):
disp = X - p
return np.argmin((disp*disp))
# convert time points to indices
times2saveidx = [closest(EEGtimes,x) for x in times2save]
timewinidx = np.round(timewin/(1000/EEGsrate)).astype(int)
# find baselinetimepoints
baseidx = zeros(baseline_range.shape);
baseidx[0] = np.argmin(abs(times2save-baseline_range[0]))
baseidx[1] = np.argmin(abs(times2save-baseline_range[1]))
# define tapers
# note that in practice, you'll want to set the temporal resolution to be a function of frequency
tapers = dpss(timewinidx,nw_product,6)
#define frequencies for FFT
f = np.linspace(0,EEGsrate/2,(timewinidx//2)+1)
# find logical channel index
chanidx = EEGchanlocslabels ==channel2plot
#initialize output matrix
multitaper_tf = np.zeros([int(floor(timewinidx/2)+1),len(times2save)])
# loop through time bins
for ti in range(len(times2saveidx)):
#init power vector (over tapers)
taperpow = np.zeros(int(floor(timewinidx/2)+1)) * 1j
#loop through tapers
for tapi in range(1, tapers.shape[0]-1):
        #window and taper data, and get power spectrum
data = np.squeeze(EEGdata[chanidx,
int(times2saveidx[ti]-floor(timewinidx/2)):int(times2saveidx[ti]+ceil(timewinidx/2)),
:])
#multiply by taper (Broadcasted)
data *= np.reshape(tapers[tapi,:],[1,tapers.shape[1]]).T
#compute power
power = fft(data,n=timewinidx,axis=0)/timewinidx
#take the real part
power = power[:int(floor(timewinidx/2)+1),:]
taperpow = taperpow + np.mean(power * np.conj(power),axis=1)
#finally get power from closest frequency
multitaper_tf[:,ti] = real(taperpow/tapi)
#db-correct
db_multitaper_tf = 10*log10( multitaper_tf
/np.tile(np.mean(multitaper_tf[:,int(baseidx[0]):int(baseidx[1])],axis=1),[len(times2save),1]).T)
subplot(121)
freq2plotidx=np.argmin(absolute(f-frequency2plot))
_=plot(times2save,np.mean(log10(multitaper_tf[freq2plotidx-2:freq2plotidx+2,:]),axis=0))
title( 'Sensor ' + channel2plot + ', ' + str(frequency2plot) + ' Hz' )
xlim([times2save[0],times2save[-1]])
subplot(122)
time2plotidx =np.argmin(absolute(times2save-timepoint2plot));
plot(f,log10(multitaper_tf[:,time2plotidx]))
title( 'Sensor ' + channel2plot + ', ' + str(timepoint2plot) + ' ms' )
setp(gca(),'xlim',[f[0], 40])
tight_layout()
contourf(times2save,f,db_multitaper_tf,40,cmap=cm.jet)
xlabel('Time (ms)'), ylabel('Frequency (Hz)')
clim([-2,2])
_=title( 'Power via multitaper from channel ' + channel2plot )
###Output
_____no_output_____
###Markdown
Chapter 16. Multitapers
###Code
import numpy as np
import scipy.io
from matplotlib import pyplot as plt
#import basic functions from numpy that we'll need
from numpy import pi, sin, cos, exp, sqrt, log, log10, random, angle, real, imag
from numpy import zeros, ceil, floor, absolute, linspace
from numpy.fft import fft, ifft
from scipy import signal as sig
from scipy.signal.windows import dpss
from matplotlib.pyplot import *
###Output
_____no_output_____
###Markdown
Import EEG data
###Code
data = scipy.io.loadmat('sampleEEGdata')
EEGdata = data["EEG"][0,0]["data"]
EEGpnts = data["EEG"][0,0]["pnts"][0,0] #number of points in EEG data
EEGtimes = data["EEG"][0,0]["times"][0]
EEGsrate = float(data["EEG"][0,0]["srate"][0]) #make float for division purposes later
EEGtrials = data["EEG"][0,0]["trials"][0,0]
EEGnbchan = data["EEG"][0,0]["nbchan"][0,0]
EEGchanlocslabels=data["EEG"][0,0]["chanlocs"][0]["labels"]
###Output
_____no_output_____
###Markdown
Figure 16.1
###Code
channel2plot = 'O1';
timewin = 400. # in ms
timewinidx = int(np.round(timewin/(1000/EEGsrate)))
#use dpss from scipy.signal.windows
tapers = dpss(M=timewinidx,NW=5,Kmax=5)
#extract a bit of EEG data
d = sig.detrend(np.squeeze(EEGdata[EEGchanlocslabels==channel2plot,199:199+timewinidx,9]))
#plot eeg data from snippet
subplot(521)
plot(d)
axis('off')
for ii in range(5):
subplot(5,2,(2*ii)+2)
plot(tapers[ii,:])
axis('off')
tight_layout()
# plot taper*data
for ii in range(5):
subplot(5,2,(2*(ii))+1)
plot(tapers[ii,:]*d) #multiply data by tapers
axis('off')
# plot fft of taper.*data
f=np.zeros([5,timewinidx]) * 1j
for ii in range(5):
subplot(5,2,(2*(ii))+2)
f[ii,:]=fft(tapers[ii,:]*d)
plot(absolute(f[ii,:int(timewinidx/2)])**2)
axis('off')
tight_layout()
subplot(5,2,2)
plot(np.mean(absolute(f[:,:timewinidx//2])**2,axis=0))
axis('off')
subplot(523)
hann = .5*(1-cos(2*pi*np.arange(timewinidx)/(timewinidx-1)));
plot(hann)
axis('off')
subplot(525)
plot(hann*d)
axis('off')
subplot(526)
ff=fft(hann*d);
plot((absolute(ff[:timewinidx//2])**2))
axis('off')
tight_layout()
###Output
_____no_output_____
###Markdown
Figure 16.2
###Code
channel2plot = 'P7'
frequency2plot = 15 # in Hz
timepoint2plot = 200 # ms
nw_product = 3 # determines the frequency smoothing, given a specified time window
times2save = np.arange(-300,1000+50,50)
baseline_range = np.array([-200, -0])
timewin = 400 # in ms
#define closest() to replace the use of Matlab's dsearchn() function
def closest(X, p):
disp = X - p
return np.argmin((disp*disp))
# convert time points to indices
times2saveidx = [closest(EEGtimes,x) for x in times2save]
timewinidx = np.round(timewin/(1000/EEGsrate)).astype(int)
# find baselinetimepoints
baseidx = zeros(baseline_range.shape);
baseidx[0] = np.argmin(abs(times2save-baseline_range[0]))
baseidx[1] = np.argmin(abs(times2save-baseline_range[1]))
# define tapers
# note that in practice, you'll want to set the temporal resolution to be a function of frequency
tapers = dpss(timewinidx,nw_product,6)
#define frequencies for FFT
f = np.linspace(0,EEGsrate/2,(timewinidx//2)+1)
# find logical channel index
chanidx = EEGchanlocslabels ==channel2plot
#initialize output matrix
multitaper_tf = np.zeros([int(floor(timewinidx/2)+1),len(times2save)])
for ti in range(len(times2saveidx)):
#init power vector (over tapers)
taperpow = np.zeros(int(floor(timewinidx/2)+1)) * 1j
#loop through tapers
for tapi in range(1, tapers.shape[0]-1):
        #window and taper data, and get power spectrum
data = np.squeeze(EEGdata[chanidx,
int(times2saveidx[ti]-floor(timewinidx/2)):int(times2saveidx[ti]+ceil(timewinidx/2)),
:])
#multiply by taper (Broadcasted)
data *= np.reshape(tapers[tapi,:],[1,tapers.shape[1]]).T
#compute power
power = fft(data,n=timewinidx,axis=0)/timewinidx
#take the real part
power = power[:int(floor(timewinidx/2)+1),:]
taperpow = taperpow + np.mean(power * np.conj(power),axis=1)
#finally get power from closest frequency
multitaper_tf[:,ti] = real(taperpow/tapi)
#db-correct
db_multitaper_tf = 10*log10( multitaper_tf
/np.tile(np.mean(multitaper_tf[:,int(baseidx[0]):int(baseidx[1])],axis=1),[len(times2save),1]).T)
subplot(121)
freq2plotidx=np.argmin(absolute(f-frequency2plot))
_=plot(times2save,np.mean(log10(multitaper_tf[freq2plotidx-2:freq2plotidx+2,:]),axis=0))
title( 'Sensor ' + channel2plot + ', ' + str(frequency2plot) + ' Hz' )
xlim([times2save[0],times2save[-1]])
subplot(122)
time2plotidx =np.argmin(absolute(times2save-timepoint2plot));
plot(f,log10(multitaper_tf[:,time2plotidx]))
title( 'Sensor ' + channel2plot + ', ' + str(timepoint2plot) + ' ms' )
setp(gca(),'xlim',[f[0], 40])
contourf(times2save,f,db_multitaper_tf,40,cmap=cm.jet)
xlabel('Time (ms)'), ylabel('Frequency (Hz)')
clim([-2,2])
_=title( 'Power via multitaper from channel ' + channel2plot )
###Output
_____no_output_____
###Markdown
Import necessary libraries and functions.
###Code
import numpy as np, cmath, scipy as sp
import scipy.io
from matplotlib import pyplot as plt
#import basic functions from numpy that we'll need
from numpy import pi, sin, cos, exp, sqrt, log, log10, random, angle, real, imag
from numpy import zeros, ceil, floor, absolute, linspace
from numpy.fft import fft, ifft
from scipy import signal as sig
from scipy.signal import hilbert
from matplotlib.pyplot import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import optional libraries
###Code
import seaborn as sns
sns.set_palette('muted')
sns.set_style('darkgrid')
###Output
_____no_output_____
###Markdown
I couldn't find a Scipy implementation of dpss that is equivalent to MATLAB's, so we will need to load an external library Nitime (neuroimaging time analysis) [found here](http://nipy.org/nitime/api/generated/nitime.algorithms.spectral.html).
###Code
from nitime.algorithms.spectral import dpss_windows as dpss
###Output
_____no_output_____
###Markdown
Import EEG data
###Code
data = scipy.io.loadmat('sampleEEGdata')
EEGdata = data["EEG"][0,0]["data"]
EEGpnts = data["EEG"][0,0]["pnts"][0,0] #number of points in EEG data
EEGtimes = data["EEG"][0,0]["times"][0]
EEGsrate = float(data["EEG"][0,0]["srate"][0]) #make float for division purposes later
EEGtrials = data["EEG"][0,0]["trials"][0,0]
EEGnbchan = data["EEG"][0,0]["nbchan"][0,0]
EEGchanlocslabels=data["EEG"][0,0]["chanlocs"][0]["labels"]
###Output
_____no_output_____
###Markdown
Figure 16.1
###Code
channel2plot = 'O1';
timewin = 400. # in ms
timewinidx = int(np.round(timewin/(1000/EEGsrate)))
#use dpss taken from nitime library
tapers ,_ = dpss(N=timewinidx,NW=5,Kmax=5)
#extract a bit of EEG data
d = sig.detrend(np.squeeze(EEGdata[EEGchanlocslabels==channel2plot,199:199+timewinidx,9]))
#plot eeg data from snippet
subplot(521)
plot(d)
axis('off')
for ii in xrange(5):
subplot(5,2,(2*ii)+2)
plot(tapers[ii,:])
axis('off')
tight_layout()
# plot taper*data
for ii in xrange(5):
subplot(5,2,(2*(ii))+1)
plot(tapers[ii,:]*d) #multiply data by tapers
axis('off')
# plot fft of taper.*data
f=np.zeros([5,timewinidx]) * 1j
for ii in xrange(5):
subplot(5,2,(2*(ii))+2)
f[ii,:]=fft(tapers[ii,:]*d)
plot(absolute(f[ii,:int(timewinidx/2)])**2)
axis('off')
tight_layout()
subplot(5,2,2)
plot(np.mean(absolute(f[:,:timewinidx/2])**2,axis=0))
axis('off')
subplot(523)
hann = .5*(1-cos(2*pi*np.arange(timewinidx)/(timewinidx-1)));
plot(hann)
axis('off')
subplot(525)
plot(hann*d)
axis('off')
subplot(526)
ff=fft(hann*d);
plot((absolute(ff[:timewinidx/2])**2))
axis('off')
tight_layout()
###Output
_____no_output_____
###Markdown
Figure 16.2
###Code
channel2plot = 'P7'
frequency2plot = 15 # in Hz
timepoint2plot = 200 # ms
nw_product = 3 # determines the frequency smoothing, given a specified time window
times2save = np.arange(-300,1000+50,50)
baseline_range = np.array([-200, -0])
timewin = 400 # in ms
#define closest() to replace the use of Matlab's dsearchn() function
def closest(X, p):
disp = X - p
return np.argmin((disp*disp))
# convert time points to indices
times2saveidx = [closest(EEGtimes,x) for x in times2save]
timewinidx = np.round(timewin/(1000/EEGsrate)).astype(int)
# find baselinetimepoints
baseidx = zeros(baseline_range.shape);
baseidx[0] = np.argmin(abs(times2save-baseline_range[0]))
baseidx[1] = np.argmin(abs(times2save-baseline_range[1]))
# define tapers
# note that in practice, you'll want to set the temporal resolution to be a function of frequency
tapers,_ = dpss(timewinidx,nw_product,6)
#define frequencies for FFT
f = np.linspace(0,EEGsrate/2,floor(timewinidx/2)+1)
# find logical channel index
chanidx = EEGchanlocslabels ==channel2plot
#initialize output matrix
multitaper_tf = np.zeros([int(floor(timewinidx/2)+1),len(times2save)])
for ti in xrange(len(times2saveidx)):
#init power vector (over tapers)
taperpow = np.zeros(int(floor(timewinidx/2)+1)) * 1j
#loop through tapers
for tapi in xrange(tapers.shape[0]-1):
        #window and taper data, and get power spectrum
data = np.squeeze(EEGdata[chanidx,
int(times2saveidx[ti]-floor(timewinidx/2)):int(times2saveidx[ti]+ceil(timewinidx/2)),
:])
#multiply by taper (Broadcasted)
data *= np.reshape(tapers[tapi,:],[1,tapers.shape[1]]).T
#compute power
power = fft(data,n=timewinidx,axis=0)/timewinidx
#take the real part
power = power[:int(floor(timewinidx/2)+1),:]
taperpow = taperpow + np.mean(power * np.conj(power),axis=1)
#finally get power from closest frequency
multitaper_tf[:,ti] = real(taperpow/tapi)
#db-correct
db_multitaper_tf = 10*log10( multitaper_tf
/np.tile(np.mean(multitaper_tf[:,int(baseidx[0]):int(baseidx[1])],axis=1),[len(times2save),1]).T)
subplot(121)
freq2plotidx=np.argmin(absolute(f-frequency2plot))
_=plot(times2save,np.mean(log10(multitaper_tf[freq2plotidx-2:freq2plotidx+2,:]),axis=0))
title( 'Sensor ' + channel2plot + ', ' + str(frequency2plot) + ' Hz' )
xlim([times2save[0],times2save[-1]])
subplot(122)
time2plotidx =np.argmin(absolute(times2save-timepoint2plot));
plot(f,log10(multitaper_tf[:,time2plotidx]))
title( 'Sensor ' + channel2plot + ', ' + str(timepoint2plot) + ' ms' )
setp(gca(),'xlim',[f[0], 40])
contourf(times2save,f,db_multitaper_tf,40,cmap=cm.jet)
xlabel('Time (ms)'), ylabel('Frequency (Hz)')
clim([-2,2])
_=title( 'Power via multitaper from channel ' + channel2plot )
###Output
_____no_output_____ |
examples/tutorial/BERTCRFCascadeNER.ipynb | ###Markdown
 BERTCRFCascadeNER. Available Chinese pre-trained weights: [`bert-base`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip), [`roberta-wwm-ext-base`](https://drive.google.com/uc?export=download&id=1jMAKIJmPn7kADgD3yQZhpsqM-IRM1qZt), [`roberta-wwm-ext-large`](https://drive.google.com/uc?export=download&id=1dtad0FFzG11CBsawu8hvwwzU2R0FDI94), [`macbert-base`](https://drive.google.com/uc?export=download&id=1aV69OhYzIwj_hn-kO1RiBa-m8QAusQ5b), [`macbert-large`](https://drive.google.com/uc?export=download&id=1lWYxnk1EqTA2Q20_IShxBrCPc5VSDCkT)
###Code
import uf
print(uf.__version__)
model = uf.BERTCRFCascadeNER('../../demo/bert_config.json', '../../demo/vocab.txt')
print(model)
X = ['天亮以前说再见', '笑着泪流满面', '去迎接应该你的', '更好的明天']
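# y: one dict per sentence, mapping an entity-type tag (e.g. 'nn', 'v', 'adj') to the entity strings of that type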
y = [{'nn': ['天亮']}, {}, {'v': ['迎接']}, {'adj': ['更好'], 'n': ['明天']}]
###Output
_____no_output_____
###Markdown
 Training
###Code
model.fit(X, y, total_steps=20)
###Output
WARNING:tensorflow:From /Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/util/dispatch.py:1096: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
###Markdown
 Inference
###Code
model.predict(X)
###Output
INFO:tensorflow:Time usage 0m-7.83s, 0.13 steps/sec, 0.51 examples/sec
###Markdown
 Scoring
###Code
model.score(X, y)
###Output
INFO:tensorflow:Time usage 0m-4.72s, 0.21 steps/sec, 0.85 examples/sec
|
Algorithms & Best Practices/Supervised Learning Algorithms/6. Logistic Regression Single Variable.ipynb | ###Markdown
 LOGISTIC REGRESSION (Classification algorithm) Single Feature/Variable Logistic Regression is a classification algorithm and helps to classify data. It gives ***DISCRETE VALUES AS OUTPUT***. Don't be confused by the term regression in its name: in ML, regression means a function which gives a continuous value as output, but this one is a classification algorithm. Hypothesis function: In logistic regression the hypothesis function is $h_{\theta}(x) = 1/(1+e^{-\theta x})$, and its values always lie between 0 and 1 (a quick numeric check of this is included right after the imports in the next cell). I will frame a simple dataset for testing our function.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
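# Quick check of the hypothesis (sigmoid) introduced above, using illustrative
# parameters theta_0 = 0 and theta_1 = 1: its output always lies strictly between 0 and 1.
def h_theta(x, theta_0=0.0, theta_1=1.0):
    return 1/(1 + math.exp(-(theta_0 + theta_1*x)))
print([round(h_theta(v), 3) for v in (-10, -1, 0, 1, 10)])  # -> [0.0, 0.269, 0.5, 0.731, 1.0]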
x1 = np.arange(1,11)
y1 = np.array([0,0,0,0,1,0,1,1,1,1])
plt.scatter(x1,y1)
###Output
_____no_output_____
###Markdown
 So this is our data for the classification problem. Now let us write an algorithm to fit a logistic regression model.
###Code
def Gradient_Descent(x, y, learning_rate, iterations):
theta_0,theta_1 = 0.001,0.001
m = x.shape[0]
def mean_error(a, b):
sum_mean = 0
for i in range(m):
sum_mean+= a[i] - b[i]
return sum_mean/m
cost_func = []
def cal_cost_func(t_0,t_1,h_xi):
sum = 0
for i in range(m):
sum += y[i]*math.log(h_xi[i]) + (1-y[i])*math.log(1-h_xi[i])
return -sum/m
def perform_cal(theta_0,theta_1, m):
h_xi = np.ones((m))
for i in range(m):
h_xi[i] = (1/(1 + math.pow(math.e,-(theta_0 + theta_1*x[i]))))
cost_func_curr = (cal_cost_func(theta_0,theta_1, h_xi))
cost_func.append(cost_func_curr)
temp_0 = theta_0 - learning_rate*mean_error( h_xi, y)
temp_1 = theta_1 - learning_rate*mean_error(h_xi*x, y*x)
return temp_0 , temp_1
for i in range(iterations):
theta_0, theta_1 = perform_cal(theta_0, theta_1, m)
return theta_0,theta_1, cost_func
###Output
_____no_output_____
###Markdown
 Let us plot the cost function vs. iterations graph, and also the line predicted by the algorithm for our dataset.
###Code
itr = 1000
t_0,t_1, cost_func = Gradient_Descent(x1, y1, 0.3, itr)
print(t_0,t_1)
fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2)
fig.set_figwidth(15)
ax1.plot(np.arange(itr), cost_func)
ax1.set(xlabel='Iterations', ylabel='Cost Function')
ax1.set_title('Cost Function vs Iterations')
ax2.plot(x1, t_0+t_1*x1)
ax2.axhline(y=0, color='g')
ax2.axvline(x=(-t_0/t_1), color='r', linestyle='-') #plotting a vertical line
print('x = ', (-t_0/t_1))
###Output
-6.393068781956088 1.170504377336995
x = 5.461806812291387
###Markdown
 So the cost function decreases as the number of iterations increases, hence we are on a good track. The second plot shows the line predicted by the logistic regression algorithm. ***Interpretation of the model:*** 1) We see that this line crosses y = 0 at x = 5.46, which means that when passed a number less than 5.46 the algorithm gives 0 as output; this is quite obvious if we look at the dataset we made. 2) For values greater than x = 5.46, it gives 1 as output. Below is a function to check all the numbers from 1 to 10, to be classified as 0 or 1.
###Code
def predict(x,t0,t1):
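    # decision boundary: t0 + t1*x = 0, i.e. x = -t0/t1 (about 5.46 with the parameters fitted above)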
y_pred = t0 + t1*x
if y_pred<0:
return 0
return 1
for i in range(1,11):
print(predict(i, t_0,t_1), end=' ')
###Output
0 0 0 0 0 1 1 1 1 1
###Markdown
 Now that's a correct prediction. Notice that the point at x=6, y=0 in our dataset doesn't affect the predicted model a lot, as we can assume it appears there by chance. ***Matrix form calculations are very powerful. When writing functions like these it is very helpful to use numpy ndarrays (N-dimensional arrays), as it makes the code easier to understand, more time-efficient, and shorter.***
###Code
def Gradient_Descent_Matrices_Magic(x, y, l_t, itr):
theta = np.array([0.001,0.001]).reshape(1,2)
m = x.shape[0]
y,x = y.reshape(m,1), x.reshape(m,1)
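    # prepend a column of ones so that theta[0,0] acts as the intercept term theta_0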
x = np.concatenate((np.ones((m,1)),x), axis=1)
for i in range(itr):
h_xi = (1/(1 + np.exp(-np.dot(x,np.transpose(theta)))))
theta = theta - (l_t/m) * np.dot(np.transpose(h_xi- y) , x)
return theta
theta_m= Gradient_Descent_Matrices_Magic(x1,y1,0.3,1000)
for i in range(1,11):
print(predict(i, theta_m[0,0], theta_m[0,1]), end=' ')
###Output
0 0 0 0 0 1 1 1 1 1 |
Notebooks/1804_SyGMA_metabolites_v3_SMILES_FINAL.ipynb | ###Markdown
 SyGMA processing for metabolites. Info: Jupyter notebook made by LF and Pryia in April 2018. Description: Process a list of structures (SMILES) and generate Phase I and II metabolites.
###Code
import pandas as pd
import numpy as np
import sygma
from rdkit import Chem
#Open dataframes
df = pd.read_csv("Saliva_InChI.txt", sep='\t', header=0, encoding='utf-8')
df2 = pd.read_csv("drugbank_approved_structure_links.csv", sep=',', header=0, encoding='latin-1')
df3 = pd.read_csv("drugbank_illicit_structure_links.csv", sep=',', header=0, encoding='latin-1')
df4 = pd.read_csv("drugbank_nutraceutical_structure_links.csv", sep=',', header=0, encoding='latin-1')
df5 = pd.read_csv("FOODB_compounds_3.txt", sep='\t', header=0, encoding='utf-8')
df6 = pd.read_csv("Exposome_explorer_biomarkers.csv", sep=',', header=0)
df7 = pd.read_csv("toxins_edited_v2.txt", sep='\t', header=0, encoding='latin-1')
df8 = pd.read_csv("PTID_Pesticide_3D-Plant2Cells_compound.txt", sep='\t', header=0, encoding='latin-1')
df6.head(5)
#Keep just a column per dataset
df = df[['SMILES']]
df2 = df2[['SMILES']]
df3 = df3[['SMILES']]
df4 = df4[['SMILES']]
df5 = df5[['SMILES']]
df6 = df6[['SMILES']]
df7 = df7[['SMILES']]
df8 = df8[['SMILES']]
#Concatenate tables
frames = [df, df2, df3, df4, df5, df6, df7, df8]
Table = pd.concat(frames)
Table.head(25)
Table.shape
len(Table.SMILES.unique())
#Remove empty rows
Table_clean = Table[Table.SMILES.str.contains('nan') == False]
# Remove stereochemistry from SMILES
Table_clean['SMILES'].replace(regex=True,inplace=True,to_replace=r'@@',value=r'')
Table_clean['SMILES'].replace(regex=True,inplace=True,to_replace=r'@',value=r'')
Table_clean['SMILES'].replace(regex=True,inplace=True,to_replace=r'/',value=r'')
Table_clean['SMILES'].replace(regex=True,inplace=True,to_replace=r'\\',value=r'')
#Filter by SMILES length
Table_clean = Table_clean[Table_clean['SMILES'].map(len) > 8]
Table_clean = Table_clean[Table_clean['SMILES'].map(len) < 125]
#Sort by SMILES length
Table_clean.sort_values('SMILES',inplace=True, ascending=False)
Table_clean.index = Table_clean['SMILES'].str.len()
Table_clean = Table_clean.sort_index(ascending=False).reset_index(drop=True)
Table_clean.shape
#Write the file out
Table_clean.to_csv('Combined_DB_DrugDB_FoodDB_Pesticide_Exposome_Sygma_SMILES.tsv', sep = '\t', index = False)
#Make full list from SMILES column
List_smiles = []
List_smiles = Table_clean['SMILES'].tolist()
List_smiles
#Remove duplicates
List_smiles_unique = []
for item in List_smiles:
if item not in List_smiles_unique:
List_smiles_unique.append(item)
print(len(List_smiles_unique))
###Output
11288
###Markdown
 Running SyGMa. Each step in a scenario lists the ruleset and the number of reaction cycles to be applied.
###Code
# Define SyGMA scenario.
scenario = sygma.Scenario([
[sygma.ruleset['phase1'], 1],
[sygma.ruleset['phase2'], 1]])
# Create a list of RDkit object from smiles
list_mol = [Chem.MolFromSmiles(x) for x in List_smiles_unique]
list_mol
# Remove None
list_mol_clean = [x for x in list_mol if x!=None]
print(len(list_mol))
print(len(list_mol_clean))
df = pd.DataFrame(data=None)
df2 = pd.DataFrame(data=None)
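# For each parent molecule: run the SyGMa scenario, score the predicted metabolites,
# convert them to SMILES, and keep only the 20 highest-scoring metabolites per parent.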
for x in list_mol_clean:
print(x)
metabolic_tree = scenario.run(x)
metabolic_tree.calc_scores()
metabolites = metabolic_tree.to_smiles()
df = pd.DataFrame(metabolites[1:],columns=metabolites[0])
df['parent'] = (metabolites[0][0])
df.columns.values[0] = 'metabolite'
df.columns.values[1] = 'score'
#Keep only top20
df2 = df2.append(df[:20], ignore_index=True)
df2.shape
#Number of unique parent
len(df2.parent.unique())
df2.head(5)
#Write the file out
df2.to_csv('Combined_DB_DrugDB_FoodDB_Pesticide_Exposome_Sygma_1_1_metabolites.tsv', sep = '\t', index = False)
###Output
_____no_output_____ |
australian-open-mens-final-2019-data-exploration.ipynb | ###Markdown
 Australian Open 2019 - Mens Final 2019 Djokovic vs. Nadal. Nowadays, in most sports either tracking or event data is available for sports data scientists to analyse leagues, teams, games or players. For example, in soccer event-based data is available for all major leagues from professional data providers like [Opta](https://www.optasports.com/), [Statsbomb](https://statsbomb.com/) or [Wyscout](https://wyscout.com/). Generating event data for football matches is incredibly time consuming as there are lots of different events to annotate. For tennis there is no event data available that covers the actual rallies played in games on either the ATP or WTA tour. There are official statistics of all matches played on the ATP and WTA Tour. There is also Jeff Sackmann's [github repository](https://github.com/JeffSackmann), which is a great way to start. He also has a [match charting project](http://www.tennisabstract.com/blog/2015/09/23/the-match-charting-project-quick-start-guide/) where point-by-point data is collected. But when I think about tennis, it is the movement of the ball and players, the tactics and the actual rallies and strokes. This is what I want to be able to see and analyse. As a proof of concept that collecting detailed positional, temporal and stroke information in tennis is possible, and as a tribute to Novak Djokovic and Rafael Nadal, two of the greatest tennis players of all time, I manually annotated each rally and stroke of their [Australian Open final 2019](https://www.atptour.com/en/scores/2019/580/MS001/match-stats?isLive=False). Fortunately for me it only went over three sets. The results are a couple of csv files that can be easily analyzed using pandas or R. I prepared this notebook as a starting point for everyone who wants to work with rally-by-rally tennis data and I hope you enjoy it.
###Code
import os
import numpy as np
import pandas as pd
# Plotly and cufflinks for data analysis
from plotly.offline import init_notebook_mode, iplot
import cufflinks as cf
init_notebook_mode(connected=True)
cf.set_config_file(theme='ggplot')
cf.go_offline()
# Matplotlib for drawing the court
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
%matplotlib inline
INPUT_DIR = '../input'
def load_data():
return [pd.read_csv(os.path.join(INPUT_DIR, file_), index_col=0) for file_ in os.listdir(INPUT_DIR)]
serves, rallies, points, events = load_data()
###Output
_____no_output_____
###Markdown
 The Data. The data consists of all points played in the match. It is built hierarchically from **events**, to **rallies**, to actual **points**. - **Events**: Each time a player hit the ball, the stroke type, position of the player, and position of the opponent were recorded. - **Points**: a list of all points played in the final with information about the server, receiver, point type, number of strokes, time of rally, and the new score of the game. - **Rallies**: A list of all rallies with server, returner, etc. - **Serves**: For each successful serve (i.e. no fault), the position of the serve in the service box was recorded (whenever possible). I have already done the hard part of data cleaning, and the dataset is hopefully easy to understand and ready to use. The x,y coordinates for the players were mapped from pixel coordinates to real world coordinates employing a [Direct Linear Transform (DLT)](https://en.wikipedia.org/wiki/Direct_linear_transformation). It is not perfect because the camera perspective changed slightly during the match, but it worked reasonably well. Events
###Code
# events
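# each row is one ball contact: who hit it (hitter/receiver), both players' positions, and the recognized stroke type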
events.head()
###Output
_____no_output_____
###Markdown
 This dataframe contains all labeled events from the tennis match. Each row relates to one of the players hitting the ball. The player who hit the ball is called the hitter and the other player the receiver. A flag isserve describes whether the shot was a serve or not. serve tells us whether the stroke happened in a rally after a first or second serve. Type and stroke relate to the recognized stroke type, i.e. forehand topspin. Points
###Code
# points
points.head()
###Output
_____no_output_____
###Markdown
 The dataframe contains data for all 142 points played in the match. Each point has a related rallyid and records the player who started the rally with his serve, the return player, the winner of the point, the number of strokes in the rally and the total time in seconds elapsed in the rally. The x,y coordinate describes the position on the court where the rally ended. For a winner it is where the ball bounced the first time, for a net ball it is the position in the net, and for an out ball the position outside the court where the ball bounced the first time. Rallies
###Code
# rallies
rallies.head()
###Output
_____no_output_____
###Markdown
 This dataframe is similar to the data in points.csv but includes all rallies, in particular all first serves that led to a second serve, which are encoded as rallies with only one stroke and an undefined winner. Serves
###Code
# serves
serves.head()
###Output
_____no_output_____
###Markdown
 This dataframe contains the positions in the service box where successful serves bounced. Failed serves are not included. For some serves the actual position of the bounce could not be tracked due to changes in the perspective of the broadcast video. Due to a shift in perspective through the game most of the positions are shifted a little bit to the left. The distribution of serves of both players can still be seen reasonably well. General statistics. It is straightforward to aggregate the rally results into general statistics of the match. Here are some examples. Points won
###Code
points[['rallyid','winner']].groupby('winner').count()
###Output
_____no_output_____
###Markdown
Points won by serve
###Code
points.groupby(['winner','serve']).size().reset_index(name='counts')
###Output
_____no_output_____
###Markdown
 Points won by result. Each rally contains an encoding of the reason why a rally ended and the winner of the rally. The events `double_fault`, `net`, and `out` refer to a fault by the opponent of the winning player.
###Code
points.groupby('reason')['winner'].value_counts().unstack().iplot(kind='bar')
###Output
_____no_output_____
###Markdown
 Simple Plots. As this notebook is intended as a starting point for your own investigations, here are some sample plots that might be useful. Tennis court dimensions. Here is a function to draw the tennis court and its coordinate system. (The code is shown in the notebook.)
###Code
#### Tennis data
height_court = 10.97
width_court = 11.89*2
service_box = 6.4
double_field = 1.37
baseline_serviceline = 5.5
breite_einzel = 8.23
serviceline_net = 6.4
def draw_court(hide_axes=False):
"""Sets up field
Returns matplotlib fig and axes objects.
"""
fig = plt.figure(figsize=(height_court/2, width_court/2))
#fig = plt.figure(figsize=(9, 9))
fig.patch.set_facecolor('#5080B0')
axes = fig.add_subplot(1, 1, 1, facecolor='#5080B0')
if hide_axes:
axes.xaxis.set_visible(False)
axes.yaxis.set_visible(False)
axes.axis('off')
axes = draw_patches(axes)
return fig, axes
def draw_patches(axes):
plt.xlim([-2,height_court+2])
plt.ylim([-6.5,width_court+6.5])
#net
axes.add_line(plt.Line2D([height_court, 0],[width_court/2, width_court/2],
c='w'))
# court outline
y = 0
dy = width_court
x = 0#height_court-double_field
dx = height_court
axes.add_patch(plt.Rectangle((x, y), dx, dy,
edgecolor="white", facecolor="#5581A6", alpha=1))
# serving rect
y = baseline_serviceline
dy = serviceline_net*2
x = 0 + double_field
dx = breite_einzel
axes.add_patch(plt.Rectangle((x, y), dx, dy,
edgecolor="white", facecolor="none", alpha=1))
#?
#net
axes.add_line(plt.Line2D([height_court/2, height_court/2], [width_court/2 - service_box, width_court/2 + service_box],
c='w'))
axes.add_line(plt.Line2D([height_court/2, height_court/2], [0, 0 + 0.45],
c='w'))
axes.add_line(plt.Line2D([height_court/2, height_court/2], [width_court, width_court - 0.45],
c='w'))
axes.add_line(plt.Line2D([1.37, 1.37], [0, width_court],
c='w'))
axes.add_line(plt.Line2D( [height_court - 1.37, height_court - 1.37], [0, width_court],
c='w'))
return axes
fig, ax = draw_court();
###Output
_____no_output_____
###Markdown
 Tennis court with players. Of course a court alone is not enough; here is how you can add players to the court.
###Code
def draw_players(axes):
colors = {'djokovic': 'gray',
'nadal': '#00529F'}
size = 2
color='white'
edge=colors['djokovic']
axes.add_artist(Ellipse((6,
-0.2),
size,size,
edgecolor=edge,
linewidth=2,
facecolor=color,
alpha=1,
zorder=20))
axes.text(6-0.4,-0.2-0.2,'Dj',fontsize=14, color='black', zorder=30)
edge=colors['nadal']
axes.add_artist(Ellipse((1.75,
25),
size,size,
edgecolor=edge,
linewidth=2,
facecolor=color,
alpha=1,
zorder=20))
axes.text(1.75-0.4,25-0.15,'Na',fontsize=14, color='black', zorder=30)
return axes
fig, ax = draw_court(hide_axes=True);
ax = draw_players(ax)
###Output
_____no_output_____ |
Time_Series_RNN/sine_pred.ipynb | ###Markdown
 Simple RNN. In this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step! > * First, we'll create our data * Then, define an RNN in PyTorch * Finally, we'll train our network and see how it performs. Import resources and create data
###Code
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 20
# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension
x = data[:-1] # all but the last piece of data
y = data[1:] # all but the first
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x') # x
plt.plot(time_steps[1:], y, 'b.', label='target, y') # y
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
 --- Define the RNN. Next, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters: * **input_size** - the size of the input * **hidden_dim** - the number of features in the RNN output and in the hidden state * **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN * **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim). Take a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.html#rnn) to read more about recurrent layers.
###Code
class RNN(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(RNN, self).__init__()
self.hidden_dim=hidden_dim
# define an RNN with specified parameters
# batch_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# last, fully-connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
# x (batch_size, seq_length, input_size)
# hidden (n_layers, batch_size, hidden_dim)
# r_out (batch_size, time_step, hidden_size)
batch_size = x.size(0)
# get RNN outputs
r_out, hidden = self.rnn(x, hidden)
# shape output to be (batch_size*seq_length, hidden_dim)
r_out = r_out.view(-1, self.hidden_dim)
# get final output
output = self.fc(r_out)
return output, hidden
###Output
_____no_output_____
###Markdown
 Check the input and output dimensions. As a check that your model is working as expected, test out how it responds to input data.
###Code
# test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
###Output
Input size: torch.Size([1, 20, 1])
Output size: torch.Size([20, 1])
Hidden state size: torch.Size([2, 1, 10])
###Markdown
--- Training the RNNNext, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
###Code
# decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
###Output
RNN(
(rnn): RNN(1, 32, batch_first=True)
(fc): Linear(in_features=32, out_features=1, bias=True)
)
###Markdown
Loss and OptimizationThis is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?>* The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error.* It's typical to use an Adam optimizer for recurrent models.
###Code
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
Defining the training functionThis function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often. Hidden StatePay close attention to the hidden state, here:* Before looping over a batch of training data, the hidden state is initialized* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps
###Code
# train the RNN
def train(rnn, n_steps, print_every):
# initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# convert data into Tensors
x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension
y_tensor = torch.Tensor(y)
# outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# calculate the loss
loss = criterion(prediction, y_tensor)
# zero gradients
optimizer.zero_grad()
# perform backprop and update weights
loss.backward()
optimizer.step()
# display loss and predictions
if batch_i%print_every == 0:
print('Loss: ', loss.item())
plt.plot(time_steps[1:], x, 'r.') # input
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every)
###Output
Loss: 0.6113325357437134
|
12_Motif-III/Motif_3.ipynb | ###Markdown
Motif discovery analysis - III Motif analysis for ChIP-Seq data In this lab, we are going to learn an important downstream analysis for ChIP-Seq data: how to find motifs enriched in ChIP-Seq peaks. Why might we want to do motif analysis for ChIP-Seq data? There are several reasons: (1). Motif analysis can be used to validate ChIP-Seq experimental data. If you are doing a ChIP-Seq experiment for a transcription factor with known binding motifs, you would expect to identify those motifs enriched in the ChIP-Seq peaks. For example, it is known that transcription factor Foxa2 binds to motif "GTAAACA". Then motif analysis of the Foxa2 ChIP-Seq experiment should identify "GTAAACA" as one of the enriched motifs in the peaks. Otherwise, the quality of the ChIP-Seq experiment is questionable and probably needs further investigation. Therefore, researchers can use motif analysis results to validate their ChIP-Seq experiments. If you are interested, see the following reference for an example: Xu, Chenhuan, et al. "Genome-wide roles of Foxa2 in directing liver specification." Journal of molecular cell biology (2012) (2). Motif analysis can also be used to identify novel binding motifs for transcription factors. If you are studying a transcription factor that has an unknown binding motif, you can use motif analysis to identify novel binding motifs. Those novel binding motifs can give useful information about the function of the transcription factor. For example, Bing Ren's group identified a novel binding motif for insulator protein CTCF. By analyzing the new motif, they identified some new functions of this CTCF protein. For reference, you can read this paper: Kim, Tae Hoon, et al. "Analysis of the vertebrate insulator protein CTCF-binding sites in the human genome." Cell 128.6 (2007): 1231-1245. (3). Motif analysis can be used to identify cofactors from the ChIP-Seq experiment. It is common to identify multiple different motifs from a given ChIP-Seq experiment. Some of the motifs may belong to transcription factors that are not studied in the ChIP-Seq experiment and those transcription factors are potentially cofactors. Here is a reference for this kind of analysis: Ding, Jun, et al. "Systematic discovery of cofactor motifs from ChIP-seq data by SIOMICS." Methods 79 (2015): 47-51. In this lab, we are going to learn how to use HOMER to run motif analysis on some ChIP-Seq data. HOMER is a toolkit for motif discovery based on sequencing data and it is freely available at http://homer.salk.edu/homer/ngs/peakMotifs.html. HOMER contains several perl scripts (perl is a programming language similar to python). We already installed HOMER on CoCalc so you can use it directly for the following analysis. The data we are going to use is from a published Foxa2 ChIP-Seq experiment. The winged helix protein FOXA2 is a highly conserved, regionally-expressed transcription factor that regulates networks of genes controlling complex metabolic functions. The raw reads were aligned to the reference genome and Foxa2 binding peaks were identified. You can find the ChIP-Seq peak data (GSE25836_Human_Liver_FOXA2_GLITR_1p5_FDR.bed) in BED format in the folder "data_for_motif_analysis". We are going to use this file for the following motif analysis. We will use the findMotifsGenome.pl script in HOMER to find enriched motifs in Foxa2 ChIP-Seq peaks. The basic syntax is as follows: **NOTE:** These commands are to be run in the CoCalc terminal
###Code
findMotifsGenome.pl <peak/BED file> <genome> <output directory> -size # [options]
1.<peak/BED file> is the input ChIP-Seq peaks in BED file format.
2.<genome> is the reference genome. We will use human reference genome hg18 for the analysis.
3.<output directory> is the output folder.
4.-size Selecting the size of the region for motif finding. If you wish to find motifs using your peaks using their exact sizes, use the option "-size given"). However, for Transcription Factor peaks, most of the motifs are found +/- 50-75 bp from the peak center, making it better to use a fixed size rather than depend on your peak size.
5.[options] are some other options.
###Output
_____no_output_____
###Markdown
Here is an example using findMotifsGenome.pl to identify motifs in peaks.bed.
###Code
findMotifsGenome.pl peaks.bed hg18 MotifOutput/ -size 200 -mask -preparsedDir parsed_genome -len 8
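# As a hedged sketch only: assuming HOMER and the hg18 genome are installed as described above, the same
# command applied to the Foxa2 peak file mentioned earlier might look like this (the output directory
# name "Foxa2MotifOutput/" is an arbitrary choice, not part of the original instructions):
findMotifsGenome.pl data_for_motif_analysis/GSE25836_Human_Liver_FOXA2_GLITR_1p5_FDR.bed hg18 Foxa2MotifOutput/ -size 200 -mask -preparsedDir parsed_genome -len 8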
###Output
_____no_output_____ |
evaluations/simulation/5-query.simulation-alpha-1.0.ipynb | ###Markdown
1. Parameters
###Code
simulation_dir = 'simulations/unset'
metadata_file = 'input/metadata.tsv.gz'
# Parameters
read_coverage = 30
mincov = 10
simulation_dir = "simulations/alpha-1.0-cov-30"
iterations = 3
sub_alpha = 1.0
from pathlib import Path
import imp
fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib'])
gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description)
simulation_dir_path = Path(simulation_dir)
case_name = str(simulation_dir_path.name)
index_reads_path = simulation_dir_path / 'index-reads'
index_assemblies_path = simulation_dir_path / 'index-assemblies'
output_api_reads_path = simulation_dir_path / 'query-reads-api.tsv'
output_api_assemblies_path = simulation_dir_path / 'query-assemblies-api.tsv'
output_cli_reads_path = simulation_dir_path / 'query-reads-cli.tsv'
output_cli_assemblies_path = simulation_dir_path / 'query-assemblies-cli.tsv'
###Output
_____no_output_____
###Markdown
2. Benchmark command-line
###Code
import pandas as pd
import genomics_data_index.api as gdi
def benchmark_cli_index(name: str, index_path: Path) -> pd.DataFrame:
db = gdi.GenomicsDataIndex.connect(index_path)
mutations_df = db.mutations_summary(reference_name='reference').sort_values('Count', ascending=False)
top_mutation = mutations_df.iloc[0].name
if 'chrom' not in top_mutation:
raise Exception(f'Does not exist a single mutation for index {index_path}')
else:
print(f'top_mutation={top_mutation}')
benchmark_commands = {
'query hasa': f'gdi --project-dir {index_path} --ncores 1 query "hasa:{top_mutation}"',
'query isa': f'gdi --project-dir {index_path} --ncores 1 query "isa:SH13-007"',
'query --summary': f'gdi --project-dir {index_path} --ncores 1 query --summary',
'query --features-summary': f'gdi --project-dir {index_path} --ncores 1 query --features-summary mutations',
'query isin': f'gdi --project-dir {index_path} --ncores 1 query --reference-name reference "isin_100_substitutions:SH13-007"',
'list samples': f'gdi --project-dir {index_path} --ncores 1 list samples',
}
number_samples = db.count_samples()
number_features_no_unknown = db.count_mutations(reference_genome='reference', include_unknown=False)
number_features_all = db.count_mutations(reference_genome='reference', include_unknown=True)
iterations = 10
benchmarker = gdi_benchmark.QueryBenchmarkHandler()
return benchmarker.benchmark_cli(name=name, kind_commands=benchmark_commands,
number_samples=number_samples,
number_features_no_unknown=number_features_no_unknown,
number_features_all=number_features_all,
iterations=iterations)
###Output
_____no_output_____
###Markdown
2.1. Benchmark reads
###Code
reads_cli_df = benchmark_cli_index(name=f'{case_name} (reads)', index_path=index_reads_path)
reads_cli_df.head(3)
reads_cli_df.to_csv(output_cli_reads_path, sep='\t', index=False)
###Output
_____no_output_____
###Markdown
2.2. Benchmark assemblies
###Code
assemblies_cli_df = benchmark_cli_index(name=f'{case_name} (assemblies)', index_path=index_assemblies_path)
assemblies_cli_df.head(3)
assemblies_cli_df.to_csv(output_cli_assemblies_path, sep='\t', index=False)
###Output
_____no_output_____
###Markdown
3. Test query API 3.1. Load (example) metadataThe simulated data is based off of real sample names and a real tree. So I can load up real metadata and attach it to a query (though the mutations and reference genome are all simulated).
###Code
import pandas as pd
metadata_df = pd.read_csv(metadata_file, sep='\t').rename({'Sample Name': 'Sample Name Orig'}, axis='columns')
metadata_df.head(2)
###Output
_____no_output_____
###Markdown
3.2. Define benchmark cases
###Code
from typing import List
import genomics_data_index.api as gdi
def benchmark_api_index(name: str, index_path: Path) -> pd.DataFrame:
db = gdi.GenomicsDataIndex.connect(index_path)
q_no_join = db.samples_query(reference_name='reference', universe='mutations')
q_join = db.samples_query(reference_name='reference', universe='mutations').join(metadata_df, sample_names_column='Sample Name Orig')
mutations_df = db.mutations_summary(reference_name='reference').sort_values('Count', ascending=False)
top_mutations = mutations_df.iloc[[0,1]].index.tolist()
if len(top_mutations) != 2:
raise Exception(f'Does not exist two mutations for index {index_path}')
else:
mutation1 = top_mutations[0]
mutation2 = top_mutations[1]
print(f'mutation1={mutation1}, mutation2={mutation2}')
q = q_join.hasa(mutation1)
r = q_join.hasa(mutation2)
number_samples = db.count_samples()
number_features_no_unknown = db.count_mutations(reference_genome='reference', include_unknown=False)
number_features_all = db.count_mutations(reference_genome='reference', include_unknown=True)
repeat = 10
benchmark_cases = {
'db.samples_query': lambda: db.samples_query(reference_name='reference', universe='mutations'),
'q.join': lambda: q_no_join.join(metadata_df, sample_names_column='Sample Name Orig'),
'q.features_summary': lambda: q_join.features_summary(),
'q.features_comparison': lambda: q_join.features_comparison(sample_categories='outbreak_number', categories_kind='dataframe', kind='mutations', unit='proportion'),
'q.hasa': lambda: q_join.hasa(mutation1),
'q.isa': lambda: q_join.isa("SH13-007"),
'q AND r': lambda: q & r,
'q.toframe': lambda: q_join.toframe(),
'q.summary': lambda: q_join.summary(),
'q.isin (distance)': lambda: q_join.isin("SH13-007", kind='distance', distance=100, units='substitutions'),
'q.isin (mrca)': lambda: q_join.isin(["SH13-007", "SH12-001"], kind='mrca'),
}
benchmarker = gdi_benchmark.QueryBenchmarkHandler()
return benchmarker.benchmark_api(name=name, kind_functions=benchmark_cases,
number_samples=number_samples,
number_features_no_unknown=number_features_no_unknown,
number_features_all=number_features_all,
repeat=repeat)
###Output
_____no_output_____
###Markdown
3.3. Benchmark reads index
###Code
reads_df = benchmark_api_index(name=f'{case_name} (reads)', index_path=index_reads_path)
reads_df.head(5)
reads_df.to_csv(output_api_reads_path, sep='\t', index=False)
###Output
_____no_output_____
###Markdown
3.4. Benchmark assemblies index
###Code
assemblies_df = benchmark_api_index(name=f'{case_name} (assemblies)', index_path=index_assemblies_path)
assemblies_df.head(5)
assemblies_df.to_csv(output_api_assemblies_path, sep='\t', index=False)
###Output
_____no_output_____ |
Pyth_fordata/freecodecamp.ipynb | ###Markdown
Refresher for python on free code camp
###Code
# a program that asks the user to enter their name and welcomes them
name=input("Please enter your name: ")
print("Hello", name,". Welcome to todays session")
# A program to compute for the gross salary
hours=input("Please enter the no of hours worked: ")
rate=input("Please enter the rate per hour worked: ")
gross_salary=int(hours)*float(rate)
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
?round # getting help for rounding off
x=0
y=10
if 0==x:
if y==10:
print("Yes")
if 0==x:
if y==10:
print("Yes")
temp = "5 degrees"
cel = 0
try:
fahr = float(temp)
cel = (fahr - 32.0) * 5.0 / 9.0
except:
print(cel)
hours=input("Please enter the no of hours worked: ")
rate=input("Please enter the rate per hour worked: ")
if int(hours)<=40:
gross_salary=int(hours)*float(rate)
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
else:
gross_salary=40*float(rate)+(((int(hours)-40)*(float(rate)*1.5)))
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
# Try and except
hours=input("Please enter the no of hours: ")
rate=input("Please enter the rate per hour : ")
try:
if int(hours)<=40:
gross_salary=int(hours)*float(rate)
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
else:
gross_salary=40*float(rate)+(((int(hours)-40)*(float(rate)*1.5)))
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
except:
print("ERROR,Enter a numeric instead of characters")
def Computepay(hours,rate):
try:
if int(hours)<=40:
gross_salary=int(hours)*float(rate)
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
else:
gross_salary=40*float(rate)+(((int(hours)-40)*(float(rate)*1.5)))
gross_salary=round(gross_salary,2)
print("Your gross salary is: ",gross_salary)
except:
print("ERROR,Enter a numeric instead of characters")
Computepay(10,10)
Computepay(45,"ten")
def Computegrade(score):
if score==0.9:
return "A"
elif score==0.8:
return "B"
elif score==0.7:
return "C"
else:
return "D"
Computegrade(0.9)
#Computegrade(0.5)
#While loop
n=10
while n>0:
print(n)
n=n-1
#print("Gatua we won")
#For loop
n=10
for i in range(n+1):
print(i)
friends=["winnnie","Beatrice","Linda","Joy","Ivy"]
for name in friends:
print("Welcome to our 2nd meeting: ", name)
print("I have successfully invited your friends")
# Use of Break to exit loops
n=5
while n<0:
print(n)
if n==3:
continue
print("Gatua we are done")
n = 0
while True:
if n == 3:
break
print(n)
n = n + 1
# Continue
# the largest number in a list
largest_no=0
lst1=[3,87,45,6,1,9,87,76,3,4,67,567,89,78]
for element in lst1:
if element>largest_no:
largest_no=element
print("the largest number is: ",largest_no)
smallest = None
print("Before:", smallest)
for itervar in [24, 41, 12, 9, 74, 15]:
if smallest is None or itervar < smallest:
smallest = itervar
# break -- it causes the loop not to work as expected
print("Loop:", itervar, smallest)
print("Smallest:", smallest)
# Counting loop
#initialise a counter
total=0
interest="Win"
lst2=["Win","Ivy","Win","bridgit","Mutai","Nyaga","Vetasia","Kym","Win","Nyoroka","Mary","Gatua"]
for i in lst2:
if i==interest:
total+=1
print(total)
def counter(n): # function that takes in a value and counts its occurrences in the list below
total=0
interest=n
lst2=["Win","Ivy","Win",1,2,10,4,5,6,7,8,9,2,10,10,"bridgit","Mutai","Nyaga","Vetasia","Kym","Win","Nyoroka","Mary","Gatua"]
for i in lst2:
if i==interest:
total+=1
return total
counter(10)
#summing, counting and averaging in a loop
count=0
total=0
lst3=lst1[1:]
lst3
for i in lst3:
count+=1
total=total+i
print(count)
print(total, total/count)
lst3
#Searching for values in a list
found=False
for i in lst3:
if i==3:
found=True
print(found,i)
break
word="banana"
count=0
for i in word:
if i =="a":
count+=1
print(count)
# ask the user to input a number then get average.
num=0
total=0.0
while True:
val1=input("please enter a value:")
if val1=="done":
break
try:
val2=float(val1)
except:
print("invalid input")
continue
num=num+1
total=total+val2
print(num,total,total/num)
max(3,4+5)
dir(__builtins__) # accesses the built-in operators and syntax
help(ord)
ord("a")
help(round)
round(5.7685, 3)
def bigger(x):
return x ** x
bigger(12)
round(45.35)
8=x
round(45)
print("3\n4\n5")
print("\\n is the newline character in Python")
def perimeter(a,b,c):
'''number,number,number>>float
this function takes in numbers and or integers
sums them up and returns the sum total'''
per=a+b+c
return per
perimeter(2,3,4)
def semiperimeter(a,b,c):
return perimeter(a,b,c)/2
semiperimeter(3,4,5)
# working with strings
word = "bananana"
i = word.find("na")
i
# slicing
word[:] # takes the entire word
word[1:4] # word sub 1 through 4 except 4
# using in as a logical operator
i="a"
if i in word:
print("a found")
# string comparison
# compares in lexicographical manner
if word=="bananana":
print("aLL IS WELL")
elif word<"banana":
print("your word is less than word")
else:
print("the word is all equal")
# help for str
#?str
print("helloWinnie ".lower()) # converting all uppercase to lowercase
# removing white spaces in the string
print(" hello Winnie ".lstrip()) # left strip
?str.find
# Reading files
fhand=open("./refresher.py","r")
count=0
for line in fhand:
count=count+1
print("these line counts are:",count)
###Output
these line counts are: 37
###Markdown
fhand=open("./refresher.py","r")
count=0
fhand1=fhand.read()
for line in fhand:
    count=count+1
print(fhand1)
len(fhand1)
fhand1[:]
###Code
# Reading files
fhand=open("./refresher.py","r")
for line in fhand:
line=line.rstrip() # removes newline characters
if line.startswith("print"):
print(line)
# Reading files
fhand=open("./refresher.py","r")
for line in fhand:
line=line.rstrip() # removes newline characters
if not line.startswith("print"):
continue
print(line)
# Reading files
fhand=open("./refresher.py","r")
for line in fhand:
line=line.rstrip() # removes newline characters
if not "print" in line:
continue
print(line)
filename=input("please enter a file name")
fhand2=open(filename)
count=0
for i in fhand2:
count+=1
print(count)
#try and except block to catch the error
filename=input("please enter a file name")
try:
fhand3=open(filename)
except:
print("the file does not exist and could not be opened",filename)
quit()
count=0
for i in fhand2:
count+=1
print(count)
import os
path=input("please enter the path to the file")
fname=input("please enter a file name")
mypath_file=os.path.join(path,fname) # concatinating the path and file name
try:
fhand4=open(mypath_file)
except:
print("Nosuch file name, check the path")
quit()
for line in fhand4:
line=line.rstrip()
print(line)
fhand4.close()
import os # operating system module imported, which allows me to use the paths
#path=input("please enter the path to the file")
fname=input("please enter a file name")
#mypath_file=os.path.join(path,fname) # concatinating the path and file name
try:
fhand4=open(fname)
except:
print("Nosuch file name, check the path")
quit()
for line in fhand4:
line=line.rstrip()
print(line)
fhand4.close()
# check whether the path exists
import os
path=input("please enter the path to the file")
tag=False
if os.path.exists(path):
tag=True
print("The path Exists".upper())
else:
print("Check the path entred, not existent".upper())
fname=input("please enter a file name")
mypath_file=os.path.join(path,fname) # concatinating the path and file name
try:
fhand5=open(mypath_file)
except:
print("Nosuch file name, check the path")
quit()
for line in fhand5:
line=line.rstrip()
print(line)
fhand5.close()
#split converts a string to a list
word="World Winfred Gatua"
lst1=word.split() # splits a string to a list that we can loop through
lst1
for i in range(len(lst1)):
name=lst1[i]
print("Hello",name)
print("Successfully done with greeting")
# alternatively
#split converts a string to a list
word="World Winfred Gatua"
lst1=word.split() # splits a string to a list that we can loop through
lst1
for i in (lst1):
print("Hello",i)
print("Successfully done with greeting")
?str.split # help on split usage
?str.splitlines
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
# unordered entries
dict1=dict() #initialising dictionaries
print(dict1) # prints out an empty dictionary
#alternatively
dict2={} #initialising an empty dictionary
print(dict2) # prints an empty dictionary
dict3={"win":45,"willy":34,"name":"Gatua"} # initialising a dictionary with contents
print(dict3)
# populating dict1 wit keys and value pairs
dict1['win']=3
dict1['string']=4
dict1
counts = { 'quincy' : 1 , 'mrugesh' : 42, 'beau': 100, '0': 10}
print(counts.get('kris',0))
?counts.get # getting help on getting values associated with the key
# counting using dict
counts=dict()
names=["win","gatua","win","mary","jackson","Lawrence","mary","win","gatua"]
for name in names:
if name not in counts:
counts[name]=1
else:
counts[name]=counts[name]+1
print(counts)
# alternatively
counts=dict()
names=["win","gatua","win","mary","jackson","Lawrence","mary","win","gatua"]
for name in names:
counts[name]=counts.get(name,0)+1
print(counts)
handle1=open("../../nextflow_h3abionet/nf-tut-2020/files/data/11_f2.bim")
line1=[] # initialising an empty list
linedict=dict() # initialising an empty dictionary
for line in handle1:
line=line.rstrip() # removing the newline characters
line=line.split("\t") # splitting the line based on tabs
#line1.append(line)
#print(line)
'''
for item in line1:
if item not in linedict:
linedict[item]=1
else:
linedict[item]=linedict[item]+1
print(linedict)
'''
'''Reading in text from a file and processing it by counting the individual
entries in the form of a dictionary for storage of the data, finally
printing out the text and their respective counts '''
handle1=open("../../nextflow_h3abionet/nf-tut-2020/files/data/11_f2.bim")
line1=[] # initialising an empty list
linedict=dict() # initialising an empty dictionary
cont=handle1.readlines() # gives the output as list
''' for line in cont:
line=line.rstrip() # removing the newline characters
#line=line.split("\t") # splitting the line based on tabs
line1.append(line)
#print(line1)
for item in line1:
if item not in linedict:
linedict[item]=1
else:
linedict[item]=linedict[item]+1
#print(linedict)
'''
#print(cont)
for item in cont:
if item not in linedict:
linedict[item]=1
else:
linedict[item]=linedict[item]+1
#print(linedict)
# from the formed dictionary then get the most repeated word
bigword=None
bigcount=None
for ky,val in linedict.items():
if bigcount is None or val>bigcount:
bigword=ky
bigcount=val
print(bigword,bigcount)
###Output
11:134399125
3
###Markdown
Counting in dictionaries and looping through dictionaries
###Code
#initialising a dictionary
contdict={'win':23,'gat':45,'Mary':34}
#getting keys only
contdict.keys()
#getting values only
contdict.values()
#getting both items i.e key and value pairs
contdict.items()
#Using a for loop to iterate through the items
for keys,values in contdict.items():
print(keys,values)
###Output
win 23
gat 45
Mary 34
###Markdown
Text processing using dictionaries, determining the max values
###Code
#ask the reader to enter a string
string1=input("Please enter a sentence that you love most here")
#split the string into a list then form a dictionary from that
string_split=string1.split()
#print(string_split)
#Initialise an empty dictionary and populate it later
str_dict=dict()
for word in string_split:
#print(word)
str_dict[word]=str_dict.get(word,0)+1
#print(str_dict)
# determine which word is repeated the most here
'''bigword=None
bigcount=None
for key,value in str_dict.items():
if bigcount is None or value>bigcount:
bigword=key
bigcount=value
print(bigword,bigcount)
'''
#Alternatively
for key,value in str_dict.items():
if value is max(str_dict.values()):
print(key,value)
import os
# Processing from a file entred by user
file=input("please enter a file name here")
#initialise an empty dictionary
counts=dict()
#Reading in a file and catching errors
try:
handle=open(file)
except:
print("""The file you have entred does not exist would you mind
checking the path and file name""".upper())
quit()
for line in handle:
lin=line.split()
#print(lin)
for word in lin:
counts[word]=counts.get(word,0)+1
#print(counts)
for key,value in counts.items():
if value is max(counts.values()):
print(key,value)
handle.close()
# Prompting the user to enter the pathto the file
import os
# Processing from a file and path entred by user
path=input("Please enter the path to your file")
file=input("please enter a file name here")
myfilepath=os.path.join(path,file)
# validating if the file indeed exists
tag=False
if os.path.exists(myfilepath):
tag=True
print("The filepath exists all together".upper())
else:
print("The filepath does not exist".upper())
#initialise an empty dictionary
counts=dict()
#Reading in a file and catching errors
try:
handle=open(myfilepath)
except:
print("""The file you have entred does not exist would you mind
checking the path and file name""".upper())
quit()
for line in handle:
lin=line.split()
#print(lin)
for word in lin:
counts[word]=counts.get(word,0)+1
#print(counts)
for key,value in counts.items():
if value is max(counts.values()):
print(key,value)
handle.close() # closing the file to make it available and not held in memory
path= input("Please enter a path")
path=os.path.abspath(path)#gives absolute path
path
#os.path.dirname(path)
path= input("Please enter a path")
tag=False
if os.path.exists(path):
tag=True
print("Path exists\n".upper(), os.path.abspath(path) )
abspath=os.path.abspath(path) #gives absolute path
else:
print("Path does not exist".upper())
#os.path.abspath(path) #gives absolute path
print(abspath)
###Output
_____no_output_____ |
DL_TF20/Part 2 - MNIST Exercise.ipynb | ###Markdown
Data Preprocess (MNIST)
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading the dataLoad the example dataset (MNIST) provided by TensorFlow
###Code
from tensorflow.keras import datasets
###Output
_____no_output_____
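###Markdown
As a hedged sketch of one possible answer (the variable names `train_x`, `train_y`, `test_x`, `test_y` are assumptions), MNIST can be loaded through `datasets.mnist` and its shapes checked:
###Code
# Illustrative sketch: load MNIST and inspect the array shapes
(train_x, train_y), (test_x, test_y) = datasets.mnist.load_data()
print(train_x.shape, train_y.shape)   # expected: (60000, 28, 28) (60000,)
print(test_x.shape, test_y.shape)     # expected: (10000, 28, 28) (10000,)
###Output
_____no_output_____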
###Markdown
- Check the data shape Looking into the Image Dataset: take a single image out of the loaded dataset and check it, including visualization - Pull out one data sample - Visualize and check About channels: [Batch Size, Height, Width, Channel]; the channel must be 1 for GrayScale and 3 for RGB - Check the data shape again - Increase the number of data dimensions (numpy) - Increase the number of data dimensions with the TensorFlow package (tensorflow) - The method from the official TensorFlow site: tf.newaxis *Note: when visualizing an image with matplotlib, a gray scale image has no 3rd dimension, so it must be adjusted to 2 dimensions before plotting - new_train_x[0] -> new_train_x[0, :, :, 0] - Visualize again Looking into the Label Dataset: open one Label and compare it with the Image to check that it was stored correctly and in what format it is stored - Pull out one label - Visualize the Label OneHot Encoding: convert the Label into a form the computer can understand ![image.png](attachment:image.png)
###Code
# 5
[0,0,0,0,0,1,0,0,0,0]
# 9
[0,0,0,0,0,0,0,0,0,1]
###Output
_____no_output_____
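###Markdown
A hedged sketch of the dimension-handling steps described above, reusing `train_x` from the loading sketch; the name `new_train_x` follows the description and is otherwise arbitrary:
###Code
# Illustrative sketch: add a channel dimension and visualize one grayscale image
image = train_x[0]
print(image.shape)                           # (28, 28)

new_train_x = np.expand_dims(train_x, -1)    # numpy way -> (60000, 28, 28, 1)
new_train_x_tf = train_x[..., tf.newaxis]    # tensorflow way -> (60000, 28, 28, 1)
print(new_train_x.shape, new_train_x_tf.shape)

# matplotlib expects a 2-D array for grayscale, so drop the channel axis when plotting
plt.imshow(new_train_x[0, :, :, 0], 'gray')
plt.show()
###Output
_____no_output_____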
###Markdown
- tensorflow.keras.utils.to_categorical
###Code
from tensorflow.keras.utils import to_categorical
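###Output
_____no_output_____
###Markdown
A small hedged example of `to_categorical`, using the label values shown above (5 and 9) with 10 classes:
###Code
# Illustrative sketch: one-hot encode the example labels with to_categorical
print(to_categorical(5, num_classes=10))   # -> [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
print(to_categorical(9, num_classes=10))   # -> [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]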
###Output
_____no_output_____ |
jupyter-notebooks/spectrogram/doppler_smearing.ipynb | ###Markdown
Doppler smearing
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from astropy import units as u
import scipy
import setigen as stg
# Sometimes it can be necessary to re-run this command for plots to show automatically
%matplotlib inline
###Output
_____no_output_____
###Markdown
When a cosine signal is Doppler drifted, the frequency response smears out the signal power across frequency bins, as a function of the drift rate. For low drift rates, there isn't a significant change, but when a signal spans multiple frequency bins in a single time bin, observed signal power decreases accordingly. The data resolution can therefore be a strong factor in how much Doppler smearing occurs. To see this effect for "real", you can use `setigen.voltage` to simulate a drifting cosine signal at high drift rates. However, generating raw voltage data, even using a GPU, can be computationally expensive. In this notebook, we explore methods of simulating this effect directly in time-frequency space. Modeling the linear chirp frequency response in time-frequency space Manually specifying widths and intensitiesThe simplest way to approximate these effects is to manually include them. We define the "unit drift rate" for a given data resolution to be the drift rate given by `df / dt`. Setigen's Frame object provides the property `frame.unit_drift_rate = frame.df / frame.dt` for convenience. Then, a signal traveling at this drift rate will appear to shift by one pixel in frequency per one pixel shift in time, i.e. a pixel by pixel slope of 1. So for signals with a drift rate higher than the unit drift rate by what we'll call the "drift factor", we can approximate the width and intensity attenuation of the signal. Namely, if we use a "box" spectral profile, we extend the signal width to be `drift_factor` pixels and diminish the signal level by `drift_factor`. This makes sense intuitively; if our signal crosses `n` frequency bins over one time step, the signal power should be spread equally in those `n` bins. Note that this approximation is better the larger `drift_factor` is, and only even applicable when `drift_factor >= 1`. Doing this with a simple signal drifting 4 times the unit drift rate:
###Code
drift_factor = 4
fr = stg.Frame(shape=(16, 128))
print(f'The unit drift rate is {fr.unit_drift_rate:.2f} Hz/s.')
drift_rate = drift_factor * fr.unit_drift_rate
print(f'The signal drift rate is {drift_rate:.2f} Hz/s.')
fr.add_constant_signal(f_start=fr.get_frequency(16),
drift_rate=drift_rate,
level=1 / drift_factor,
width=fr.df * drift_factor,
f_profile_type='box')
fr.plot()
plt.show()
###Output
The unit drift rate is 0.15 Hz/s.
The signal drift rate is 0.61 Hz/s.
###Markdown
Adding noise, and comparing the drifting signal with a non-drifting signal of identical starting intensity:
###Code
drift_factor = 4
snr = 100
fr = stg.Frame(shape=(16, 128))
drift_rate = drift_factor * fr.unit_drift_rate
fr.add_noise_from_obs()
fr.add_constant_signal(f_start=fr.get_frequency(16),
drift_rate=drift_rate,
level=fr.get_intensity(snr=snr) / drift_factor,
width=fr.df * drift_factor,
f_profile_type='box')
fr.add_constant_signal(f_start=fr.get_frequency(100),
drift_rate=0,
level=fr.get_intensity(snr=snr),
width=fr.df,
f_profile_type='box')
fr.plot()
plt.show()
###Output
_____no_output_____
###Markdown
This kind of process is straightforward when signals are simple, linear, and assumed to be very narrow (pixel-width). Nevertheless, it is certainly unsophisticated (which might be fine for some applications!).If we assume perfect cosine signals, we can do a bit better: mathematically correct chirp spectrums. Analytically described chirp spectrumIt turns out that the frequency response for a drifting signal can only be analytically expressed for a linear chirp. We follow the derivation here: https://en.wikipedia.org/wiki/Chirp_spectrumLinear_chirp to write a custom `f_profile`:
###Code
def analytic_chirp(drift_rate, df, dt):
"""
Analytic chirp spectrum, based on https://en.wikipedia.org/wiki/Chirp_spectrum#Linear_chirp.
"""
dF = drift_rate * dt
dW = 2 * np.pi * dF
const = np.pi * df / abs(dW)
def f_profile(f, f_center):
dw = 2 * np.pi * (f - f_center)
X1 = (dw) / np.sqrt(np.pi * abs(dW) / dt)
X2 = (dW - dw) / np.sqrt(np.pi * abs(dW) / dt)
S1, C1 = scipy.special.fresnel(X1)
S2, C2 = scipy.special.fresnel(X2)
return const * ((C1 + C2)**2 + (S1 + S2)**2)
return f_profile
###Output
_____no_output_____
###Markdown
There are some important deviations we made from the derivation to facilitate signal injection. Firstly, we care about sample frequency over angular frequency, so we have to be sure to stay in the correct units. Then, the constant in the expression for power doesn't give us useful values for intensity, so we additionally multiply by `frame.df / frame.dt` to get to the right units. Finally, we translate the frequency variable by `dW / 2` so that `f_start` is truly where our drifting signal starts.One note about this is that generally in `setigen`, `f_start` in the path refers to the *center* of the signal at the starting time bin. In this case, the edge of the smeared frequency response will actually be at `f_start`, not the center. This frequency can be easily shifted by preference, but doing it this way actually matches the result when simulating a drifting cosine signal in voltage space with `setigen.voltage`. Trying this out in a synthetic frame:
###Code
fr = stg.Frame(shape=(16, 256))
drift_rate = 4 * fr.unit_drift_rate
fr.add_signal(stg.constant_path(f_start=fr.get_frequency(128),
drift_rate=drift_rate),
stg.constant_t_profile(level=1),
analytic_chirp(drift_rate, fr.df, fr.dt),
stg.constant_bp_profile(level=1))
fr.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Looks good! Let's try a higher drift rate to see the detailed spectrum in adjacent time bins:
###Code
fr = stg.Frame(shape=(16, 1024))
drift_rate = 16 * fr.unit_drift_rate
fr.add_signal(stg.constant_path(f_start=fr.get_frequency(256),
drift_rate=drift_rate),
stg.constant_t_profile(level=1),
analytic_chirp(drift_rate, fr.df, fr.dt),
stg.constant_bp_profile(level=1))
plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)
fr.plot()
# Zoom in on individual spectra
plt.subplot(1, 2, 2)
for i in range(16):
plt.plot(fr.data[i][240:540])
# plt.plot(fr.data[1][240:600])
plt.xlabel('Pixel')
plt.ylabel('Intensity')
plt.axhline(fr.unit_drift_rate / drift_rate, ls='--', c='k')
plt.show()
###Output
_____no_output_____
###Markdown
We readily see the squiggle structure in the spectrum, and sharp drop-offs at the edges. As the signal goes to the next time bin, a similar spectra is repeated, starting at the end of the previous one. Also, notice that the average intensity of these spectral responses matches `frame.unit_drift_rate / drift_rate`, as expected!Although it's nice to see the spectral structure without having to simulate the signal in voltage space, this analytic solution only works for linearly drifting, cosine signals. Intrinsically broader or more complex signals can't really be simulated the same way, even though they would also exhibit Doppler smearing to some degree. Doppler smearing by repeated samplingFor complex cases, no single solution handles all signals appropriately. Nevertheless, we can get an approximation when using `frame.add_signal()` by way of the `doppler_smearing` and `smearing_subsamples` parameters. If `doppler_smearing=True`, `frame.add_signal()` will take the mean of signal copies evenly spaced between the center frequencies at times `t` and `t+1`. The `smearing_subsamples` parameter controls how many such copies to include in the calculation; the default is 10. To get enough coverage, a good rule of thumb is to have `smearing_subsamples >= drift_factor = drift_rate / frame.unit_drift_rate`.This way, the spectral profile within a single time bin will be intrinsically broadened by nature of the sampling. For example, if the width of the signal is 1 pixel and the signal is smeared over 4 pixels with `smearing_subsamples=4`, the intensity in each pixel will go down by 4, since we're taking the mean. So, we end up with a result similar to that of our first method, but without having to manually input the attenuation!As an example:
###Code
drift_factor = 4
fr = stg.Frame(shape=(16, 128))
drift_rate = drift_factor * fr.unit_drift_rate
fr.add_signal(stg.constant_path(f_start=fr.get_frequency(16),
drift_rate=drift_rate),
stg.constant_t_profile(level=1),
stg.box_f_profile(width=fr.df),
doppler_smearing=True,
smearing_subsamples=drift_factor)
fr.plot()
plt.show()
###Output
_____no_output_____
###Markdown
We can also do the same with `add_constant_signal()`, with the `doppler_smearing` boolean parameter:
###Code
drift_factor = 4
fr = stg.Frame(shape=(16, 128))
drift_rate = drift_factor * fr.unit_drift_rate
fr.add_constant_signal(f_start=fr.get_frequency(16),
drift_rate=drift_rate,
level=1,
width=fr.df,
f_profile_type='box',
doppler_smearing=True)
fr.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Finally, for a more complex example, let's make a quadratic-path signal:
###Code
fr = stg.Frame(shape=(16, 1024))
fr.add_noise(1)
fr.add_signal(stg.squared_path(f_start=fr.get_frequency(200),
drift_rate=0.05*u.Hz/u.s),
stg.constant_t_profile(level=fr.get_intensity(snr=1e6)),
stg.sinc2_f_profile(width=fr.df*10),
doppler_smearing=True,
smearing_subsamples=10)
fr.plot()
plt.show()
fr.bl_plot()
plt.show()
###Output
_____no_output_____ |
src/MXNet Demo.ipynb | ###Markdown
Content Recommendations With MXNet
###Code
import pandas as pd
import numpy as np
import mxnet as mx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
###Output
/home/greg/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Import sample data
###Code
df = pd.read_csv("../data/sample-data.csv")
print(df.head())
print(df.shape)
###Output
id description
0 1 Active classic boxers - There's a reason why o...
1 2 Active sport boxer briefs - Skinning up Glory ...
2 3 Active sport briefs - These superbreathable no...
3 4 Alpine guide pants - Skin in, climb ice, switc...
4 5 Alpine wind jkt - On high ridges, steep ice an...
(500, 2)
###Markdown
For this content recommendation system, you'll use the TF-IDF Vectorizer.TF-IDF decides, for each term in a document and a given collection, the weights for each one of the components of a vector that can be used for cosine similarity (among other things).
###Code
tf = TfidfVectorizer(analyzer='word',
ngram_range=(1, 3),
min_df=0,
stop_words='english')
tfidf_matrix = tf.fit_transform(df['description'])
###Output
_____no_output_____
###Markdown
Compute Cosine Similarities (CPU) Cosine similarity measures the angle between two different vectors in a Euclidean space, independently of how the weights have been calculated.First, you can build the model using the `linear_kernel` method, which is CPU bound.
###Code
%%timeit
cosine_similarities = linear_kernel(tfidf_matrix, tfidf_matrix)
###Output
28.7 ms ± 3.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Running on my system, this takes about `29ms` per loop (see the timing output above). This isn't an issue for small datasets like this, but becomes a pain point when dealing with millions of records. Compute Cosine Similarity With MXNet (GPU) Using MXNet, we'll be able to build a content recommender the same way, but even faster than before.First, we'll convert the TFIDF matrix to an MXNet NDArray. We'll also set a context to have the matrix exist on the GPU, using the `mx.gpu()` context.
###Code
mx_tfidf = mx.nd.sparse.array(tfidf_matrix, ctx=mx.gpu())
###Output
_____no_output_____
###Markdown
As a sanity check you can look at the `mx_tfidf` context. This ensures the data is living on the GPU.
###Code
mx_tfidf.context
###Output
_____no_output_____
###Markdown
Compute Cosine Similarities (GPU) Now we can compute the cosine similarity of the TFIDF matrix on the GPU. We'll use the `mx_cosine_distance` function below.
###Code
def mx_cosine_distance(arr):
return mx.nd.dot(arr, arr.T)
%%timeit
mx.nd.dot(mx_tfidf, mx_tfidf.T)
mx.nd.waitall()
%%timeit
mx_cosine_sim = mx.nd.dot(mx_tfidf, mx_tfidf.T)
mx.nd.waitall()
###Output
_____no_output_____
###Markdown
The wall time on my computer is 693 microseconds!For comparison, you can evaluate the speedup by taking the wall time of the CPU model against the GPU model. Sanity Checks To make sure the cosine similarity matrices are identical, we can review the output of both. Let's print the first 10 values of the first array from the `linear_kernel` implementation.
###Code
cosine_similarities[0, 0:10]
###Output
_____no_output_____
###Markdown
Now print the first 10 values of the MXNet implementation. Notice the context shows our object is still on the GPU!
###Code
mx_cosine_sim[0, 0:10]
###Output
_____no_output_____
###Markdown
Get Recommendations
###Code
def get_recommendations(df, item_id, cosine_sim):
# Function that takes in an item ID as input and outputs the most similar items
indices = pd.Series(df.index, index=df['id']).drop_duplicates()
# Get the index of the item that matches the id
idx = indices[item_id]
# Get the pairwsie similarity scores of all items
sim_scores = list(enumerate(cosine_sim[idx]))
# Sort the items based on the similarity scores
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
# Get the scores of the 10 most similar items
sim_scores = sim_scores[1:11]
# Get the item indices
item_indices = [i[0] for i in sim_scores]
# Return the top 10 most similar items
return df.iloc[item_indices]
###Output
_____no_output_____
###Markdown
Review an item of interest; the ID will be used in the recommendations.
###Code
df[df.id == 5]
###Output
_____no_output_____
###Markdown
Get the recommendations using `mx_cosine_sim` GPU matrix.
###Code
get_recommendations(df, 5, mx_cosine_sim)
###Output
_____no_output_____ |
Resources/starter_code/VacationPy.ipynb | ###Markdown
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
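# Hedged sketch: load the CSV exported by the WeatherPy notebook (Part I) into a DataFrame.
# The path "../output_data/cities.csv" and the name city_data_df are assumptions; point this
# at wherever your Part I notebook wrote its results.
city_data_df = pd.read_csv("../output_data/cities.csv")
city_data_df.head()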
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map. Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows will null values. Hotel Map* Store into variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels with 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
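One possible sketch of the heatmap and the weather-criteria filtering is shown below; the humidity column name and the thresholds are assumptions to adjust to your own Part I output.
###Code
# Illustrative sketch only: humidity heatmap plus a filtered DataFrame
gmaps.configure(api_key=g_key)

locations = city_data_df[["Lat", "Lng"]]
humidity = city_data_df["Humidity"]

fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
                                 dissipating=False, max_intensity=100, point_radius=3)
fig.add_layer(heat_layer)

# Narrow down to cities with "ideal" weather (example thresholds only) and drop null rows
ideal_df = city_data_df[(city_data_df["Max Temp"] > 70) &
                        (city_data_df["Max Temp"] < 80) &
                        (city_data_df["Wind Speed"] < 10) &
                        (city_data_df["Cloudiness"] == 0)].dropna()
###Output
_____no_output_____
###Markdown
The template cell below formats the hotel information used later by the marker layer.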
###Code
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer ontop of heat map
# Display figure
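###Output
_____no_output_____
###Markdown
A hedged sketch of the remaining steps is below: it queries the Google Places Nearby Search endpoint for a lodging within 5000 meters of each city, stores the first result in `hotel_df`, and layers the markers on the heatmap. It reuses `ideal_df` and `fig` from the earlier sketches, so treat those variable names as assumptions.
###Code
# Illustrative sketch only: find the first hotel near each filtered city and plot the markers
hotel_df = ideal_df.copy()
hotel_df["Hotel Name"] = ""

base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {"radius": 5000, "type": "lodging", "key": g_key}

for index, row in hotel_df.iterrows():
    params["location"] = f"{row['Lat']},{row['Lng']}"
    response = requests.get(base_url, params=params).json()
    try:
        hotel_df.loc[index, "Hotel Name"] = response["results"][0]["name"]
    except (KeyError, IndexError):
        print(f"No hotel found within 5000 m of {row['City']} -- skipping.")

# After hotel_df is populated, run the provided template cell above to build
# hotel_info and locations, then add the markers on top of the heatmap figure.
marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(marker_layer)
fig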
###Output
_____no_output_____ |
models/.ipynb_checkpoints/NN LSTM - Merge Experiments 11.05-checkpoint.ipynb | ###Markdown
https://www.kaggle.com/lystdo/quora-question-pairs/lstm-with-word2vec-embeddings
###Code
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import re
import csv
import codecs
import numpy as np
import pandas as pd
import gc
import sys
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from string import punctuation
import gensim
from gensim.models import KeyedVectors
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers.merge import concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers.advanced_activations import PReLU
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Dense, Input, Flatten, Concatenate, LSTM, Lambda, Dropout, Multiply
from keras.layers import Conv1D, MaxPooling1D, Embedding, SpatialDropout1D, GRU
from keras.layers.merge import _Merge
from keras.models import Model
from keras.layers.wrappers import TimeDistributed, Bidirectional
from keras.layers.normalization import BatchNormalization
from keras import backend as K
from keras.utils import plot_model
from keras.callbacks import EarlyStopping, ModelCheckpoint
def lstm_model(ncols):
embedding_layer = Embedding(nb_words,
embedding_dim,
weights=[word_embedding_matrix],
input_length=ncols,
trainable=False)
lstm_layer = LSTM(num_lstm, dropout=rate_drop_lstm, recurrent_dropout=rate_drop_lstm,
go_backwards = False, implementation = 2)
sequence_1_input = Input(shape=(ncols,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = lstm_layer(embedded_sequences_1)
sequence_2_input = Input(shape=(ncols,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = lstm_layer(embedded_sequences_2)
merged = concatenate([x1, y1])
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
merged = Dense(num_dense, activation=act)(merged)
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[sequence_1_input, sequence_2_input], outputs=preds)
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['acc'])
return model
def deep_lstm_model():
embedding_layer = Embedding(nb_words,
embedding_dim,
weights=[word_embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
lstm_layer = LSTM(num_lstm, dropout=rate_drop_lstm, recurrent_dropout=rate_drop_lstm,
go_backwards = False, implementation = 2, return_sequences = True)
lstm_layer2 = LSTM(96, dropout=rate_drop_lstm, recurrent_dropout=rate_drop_lstm,
go_backwards = False, implementation = 2)
sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = lstm_layer(embedded_sequences_1)
x2 = lstm_layer2(x1)
sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = lstm_layer(embedded_sequences_2)
y2 = lstm_layer2(y1)
merged = concatenate([x2, y2])
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
merged = Dense(num_dense, activation=act)(merged)
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[sequence_1_input, sequence_2_input], outputs=preds)
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['acc'])
return model
def merged_lstm():
embedding_layer = Embedding(nb_words,
embedding_dim,
weights=[word_embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
lstm_layer = LSTM(num_lstm, dropout=rate_drop_lstm, recurrent_dropout=rate_drop_lstm,
go_backwards = False, implementation = 2)
sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = lstm_layer(embedded_sequences_1)
sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = lstm_layer(embedded_sequences_2)
dense_input = Input(shape = (ncols,))
d = Dense(256, kernel_initializer = 'he_normal')(dense_input)
d = PReLU()(d)
d = BatchNormalization()(d)
d = Dropout(0.4)(d)
d2 = Dense(512, kernel_initializer = 'he_normal')(d)
d2 = PReLU()(d2)
d2 = BatchNormalization()(d2)
d2 = Dropout(0.2)(d2)
d3 = Dense(512, kernel_initializer = 'he_normal')(d2)
d3 = PReLU()(d3)
d3 = BatchNormalization()(d3)
d3 = Dropout(0.2)(d3)
merged = concatenate([x1, y1, d3])
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
merged = Dense(num_dense)(merged)
merged = PReLU()(merged)
merged = Dropout(rate_drop_dense)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[sequence_1_input, sequence_2_input, dense_input], outputs=preds)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
return model
class Subtract(_Merge):
"""Layer that adds a list of inputs.
It takes as input a list of tensors,
all of the same shape, and returns
a single tensor (also of the same shape).
"""
def _merge_function(self, inputs):
return K.square(inputs[0] - inputs[1])
def siamese_architecture(seq_len, embed_len, state_len):
inputs = Input(shape=(seq_len, embed_len))
x = Bidirectional(GRU(units=state_len, dropout=rate_drop_dense, recurrent_dropout=rate_drop_lstm,
implementation=2, return_sequences=True))(inputs)
x = Bidirectional(GRU(units=state_len, dropout=rate_drop_dense, recurrent_dropout=rate_drop_lstm,
implementation=2))(x)
return Model(inputs=inputs, outputs=x)
def create_model():
embedding_layer = Embedding(nb_words, 300, weights=[word_embedding_matrix],
input_length=170, trainable=False)
siamese_arch = siamese_architecture(MAX_SEQUENCE_LENGTH, EMBEDDING_DIM, num_lstm)
sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_1 = embedding_layer(sequence_1_input)
x1 = siamese_arch(embedded_sequences_1)
sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences_2 = embedding_layer(sequence_2_input)
y1 = siamese_arch(embedded_sequences_2)
# merged = Concatenate()([x1, y1])
merged_sub = Subtract()([x1, y1])
merged_mult = Multiply()([x1, y1])
merged_comb = Concatenate()([x1, y1, merged_sub, merged_mult])
merged = BatchNormalization()(merged_comb)
merged = Dense(512, activation='relu')(merged)
merged = BatchNormalization()(merged)
merged = Dense(128, activation='relu')(merged)
# merged = Dropout(DROP)(merged)
merged = BatchNormalization()(merged)
preds = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[sequence_1_input, sequence_2_input], outputs=preds)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
return model
def create_mergevalidset(data_1, data_2, datafeats, labels):
np.random.seed(1234)
perm = np.random.permutation(len(data_1))
idx_train = perm[:int(len(data_1)*(1-VALIDATION_SPLIT))]
idx_val = perm[int(len(data_1)*(1-VALIDATION_SPLIT)):]
data_1_train = np.vstack((data_1[idx_train], data_2[idx_train]))
data_2_train = np.vstack((data_2[idx_train], data_1[idx_train]))
labels_train = np.concatenate((labels[idx_train], labels[idx_train]))
dataf_train = np.vstack((datafeats[idx_train], datafeats[idx_train]))
data_1_val = np.vstack((data_1[idx_val], data_2[idx_val]))
data_2_val = np.vstack((data_2[idx_val], data_1[idx_val]))
labels_val = np.concatenate((labels[idx_val], labels[idx_val]))
dataf_val = np.vstack((datafeats[idx_val], datafeats[idx_val]))
return data_1_train, data_2_train, dataf_train, labels_train, data_1_val, data_2_val, dataf_val, labels_val
def create_stratified_split(data_1, data_2, labels):
data1_tr, data1_val, y1_tr, y1_val = train_test_split(data_1, labels, stratify = labels,
test_size = 0.2, random_state = 111)
data2_tr, data2_val, y2_tr, y2_val = train_test_split(data_2, labels, stratify = labels,
test_size = 0.2, random_state = 111)
return data1_tr, data2_tr, y1_tr, data2_tr, data2_val, y1_val
BASE_DIR = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/'
TRAIN_DATA_FILE = BASE_DIR + 'train.csv'
TEST_DATA_FILE = BASE_DIR + 'test.csv'
CHECK_DIR = BASE_DIR + '../scripts/models/checkpoints/'
MAX_SEQUENCE_LENGTH = 128
MAX_NB_WORDS = 200000
EMBEDDING_DIM = 300
VALIDATION_SPLIT = 0.1
embedding_dim = 300
nb_words = 120594
src_train = '../features/df_train_spacylemmat_fullclean.csv'
src_test = '../features/df_test_spacylemmat_fullclean.csv'
df_train = pd.read_csv(src_train)
df_test = pd.read_csv(src_test)
df_train.fillna('NULL', inplace = True)
df_test.fillna('NULL', inplace = True)
test_ids = df_test['test_id']
test_ids = np.array(test_ids)
data_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/transformed/keras_tokenizer/'
word_embedding_matrix = np.load(data_src + 'embedding_matrix.npy')
q_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/features/NER/'
feats_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/features/uncleaned/'
q1 = np.load(q_src + 'q1train_NER_128len.npy')
q2 = np.load(q_src + 'q2train_NER_128len.npy')
X = pd.read_pickle('Xtrain_500bestCols.pkl')
y = pd.read_csv(feats_src + '/the_1owl/owl_train.csv')['is_duplicate']
del df_train, df_test
gc.collect()
data_1_train, data_2_train, dataf_train, labels_train, data_1_val, data_2_val, dataf_val, labels_val = create_mergevalidset(q1, q2, X.values, y)
del q1, q2, X, y
gc.collect()
###Output
_____no_output_____
###Markdown
data_1_train, data_2_train, labels_train, data_1_val, data_2_val, labels_val = create_stratified_split(data_1, data_2, labels)
del data_1, data_2, labels
gc.collect()
###Code
re_weight = True
weight_val = np.ones(len(labels_val))
if re_weight:
weight_val *= 0.472001959
weight_val[labels_val==0] = 1.309028344
if re_weight:
class_weight = {0: 1.309028344, 1: 0.472001959}
else:
class_weight = None
num_lstm = np.random.randint(175, 275)
num_dense = np.random.randint(100, 150)
rate_drop_lstm = 0.15 + np.random.rand() * 0.25
rate_drop_dense = 0.15 + np.random.rand() * 0.25
act = 'relu'
ncols = dataf_train.shape[1]
STAMP = 'BiGRU_fred_%d_%d_%.2f_%.2f'%(num_lstm, num_dense, rate_drop_lstm, \
rate_drop_dense)
print('Model stamp:', STAMP)
early_stopping = EarlyStopping(monitor='val_loss', patience = 2)
check_path = CHECK_DIR + STAMP + '.h5'
model_checkpoint = ModelCheckpoint(check_path, save_best_only=True, save_weights_only=True)
###Output
_____no_output_____
###Markdown
model = merged_lstm()
hist = model.fit([data_1_train, data_2_train, dataf_train], labels_train, \
                 validation_data=([data_1_val, data_2_val, dataf_val], labels_val, weight_val), \
                 epochs=200, batch_size=1024, shuffle=True, \
                 class_weight=class_weight, callbacks=[early_stopping, model_checkpoint])
###Code
model = create_model()
hist = model.fit([data_1_train, data_2_train], labels_train, \
validation_data=([data_1_val, data_2_val], labels_val, weight_val), \
epochs=200, batch_size=512, shuffle=True, \
class_weight=class_weight, callbacks=[early_stopping, model_checkpoint])
###Output
_____no_output_____
###Markdown
model = lstm_model(ncols)
hist = model.fit([data_1_train, data_2_train], labels_train, \
                 validation_data=([data_1_val, data_2_val], labels_val, weight_val), \
                 epochs=200, batch_size=2048, shuffle=True, \
                 class_weight=class_weight, callbacks=[early_stopping, model_checkpoint])
###Code
model.load_weights(check_path)  # reload the best weights saved by ModelCheckpoint
bst_val_score = min(hist.history['val_loss'])
print('Start making the submission before fine-tuning')
# symmetric test-time averaging: predict with (q1, q2) and (q2, q1) and average,
# since the duplicate relation does not depend on question order
preds = model.predict([test_data_1, test_data_2], batch_size=8192, verbose=1)
preds += model.predict([test_data_2, test_data_1], batch_size=8192, verbose=1)
preds /= 2
submission = pd.DataFrame({'test_id':test_ids, 'is_duplicate':preds.ravel()})
submission.to_csv('%.4f_'%(bst_val_score)+STAMP+'.csv', index=False)
# simple blend: average the predicted probabilities of two earlier submissions
sub1 = pd.read_csv('0.2615_lstm_223_141_0.31_0.20.csv')
sub2 = pd.read_csv('0.2647_lstm_198_127_0.20_0.19.csv')
sub_avg = sub1.copy()
sub_avg['is_duplicate'] = (sub1['is_duplicate'] + sub2['is_duplicate'] ) / 2
sub_avg['test_id'] = sub1['test_id']
sub_avg.to_csv('submission_first_two_avg.csv', index = False)
###Output
_____no_output_____ |
5- Big Data/A Data Science Framework for Quora.ipynb | ###Markdown
A Data Science Framework for Quora Quite Practical and Far from any Theoretical Conceptslast update: 11/19/2018You can Fork and Run this kernel on Github:> [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) 1- Introduction**Quora** has defined a competition in **Kaggle**. A realistic and attractive data set for data scientists.on this notebook, I will provide a **comprehensive** approach to solve Quora classification problem.I am open to getting your feedback for improving this **kernel** Notebook Content1. [Introduction](1)1. [Data Science Workflow for Quora](2)1. [Problem Definition](3) 1. [Business View](4) 1. [Real world Application Vs Competitions](31) 1. [What is a insincere question?](5) 1. [How can we find insincere question?](6)1. [Problem feature](7) 1. [Aim](8) 1. [Variables](9) 1. [ Inputs & Outputs](10)1. [Select Framework](11) 1. [Import](12) 1. [Version](13) 1. [Setup](14)1. [Exploratory data analysis](15) 1. [Data Collection](16) 1. [Features](17) 1. [Explorer Dataset](18) 1. [Data Cleaning](19) 1. [Data Preprocessing](20) 1. [Is data set imbalance?](21) 1. [Some Feature Engineering](22) 1. [Data Visualization](23) 1. [countplot](61) 1. [pie plot](62) 1. [Histogram](63) 1. [violin plot](64) 1. [kdeplot](65) 1. [Data Cleaning](24)1. [Model Deployment](24)1. [Conclusion](25)1. [References](26) ------------------------------------------------------------------------------------------------------------- **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** ----------- 2- A Data Science Workflow for QuoraOf course, the same solution can not be provided for all problems, so the best way is to create a general framework and adapt it to new problem.**You can see my workflow in the below image** : **you should feel free to adapt this checklist to your needs** [Go to top](top) 3- Problem DefinitionI think one of the important things when you start a new machine learning project is Defining your problem. that means you should understand business problem.( **Problem Formalization**)> **we will be predicting whether a question asked on Quora is sincere or not.** 3-1 About QuoraQuora is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers. 3-2 Business View An existential problem for any major website today is how to handle toxic and divisive content. **Quora** wants to tackle this problem head-on to keep their platform a place where users can feel safe sharing their knowledge with the world.**Quora** is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers.In this kernel, I will develop models that identify and flag insincere questions.we Help Quora uphold their policy of “Be Nice, Be Respectful” and continue to be a place for sharing and growing the world’s knowledge. 
3-2-1 Real world Application Vs CompetitionsJust a simple comparison between real-world apps with competitions: 3-3 What is a insincere question?is defined as a question intended to make a **statement** rather than look for **helpful answers**. 3-4 How can we find insincere question?Some characteristics that can signify that a question is insincere:1. **Has a non-neutral tone** 1. Has an exaggerated tone to underscore a point about a group of people 1. Is rhetorical and meant to imply a statement about a group of people1. **Is disparaging or inflammatory** 1. Suggests a discriminatory idea against a protected class of people, or seeks confirmation of a stereotype 1. Makes disparaging attacks/insults against a specific person or group of people 1. Based on an outlandish premise about a group of people 1. Disparages against a characteristic that is not fixable and not measurable1. **Isn't grounded in reality** 1. Based on false information, or contains absurd assumptions 1. Uses sexual content (incest, bestiality, pedophilia) for shock value, and not to seek genuine answers [Go to top](top) 4- Problem FeatureProblem Definition has three steps that have illustrated in the picture below:1. Aim1. Variable1. Inputs & Outputs 4-1 Aimwe will be predicting whether a question asked on Quora is **sincere** or not. 4-2 Variables1. qid - unique question identifier1. question_text - Quora question text1. target - a question labeled "insincere" has a value of 1, otherwise 0 4-3 Inputs & Outputswe use train.csv and test.csv as Input and we should upload a submission.csv as Output**>**> You must answer the following question:How does your company expect to use and benefit from **your model**. [Go to top](top) 5- Select FrameworkAfter problem definition and problem feature, we should select our framework to solve the problem.What we mean by the framework is that the programming languages you use and by what modules the problem will be solved. 5-1 Python Deep Learning Packages*State of open source deep learning frameworks in 2017*1. **keras**[11]>Well known for being minimalistic, the Keras neural network library (with a supporting interface of Python) supports both convolutional and recurrent networks that are capable of running on either TensorFlow or Theano. The library is written in Python and was developed keeping quick experimentation as its USP.1. **TensorFlow**> TensorFlow is arguably one of the best deep learning frameworks and has been adopted by several giants such as Airbus, Twitter, IBM, and others mainly due to its highly flexible system architecture.1. **Caffe**> Caffe is a deep learning framework that is supported with interfaces like C, C++, Python, and MATLAB as well as the command line interface. It is well known for its speed and transposability and its applicability in modeling convolution neural networks (CNN).1. **Microsoft Cognitive Toolkit/CNTK**> Popularly known for easy training and the combination of popular model types across servers, the Microsoft Cognitive Toolkit (previously known as CNTK) is an open-source deep learning framework to train deep learning models. It performs efficient convolution neural networks and training for image, speech, and text-based data. Similar to Caffe, it is supported by interfaces such as Python, C++, and the command line interface.1. **Torch/PyTorch**> Torch is a scientific computing framework that offers wide support for machine learning algorithms. 
It is a Lua-based deep learning framework and is used widely amongst industry giants such as Facebook, Twitter, and Google. It employs CUDA along with C/C++ libraries for processing and was basically made to scale the production of building models and provide overall flexibility.1. **MXNet**> Designed specifically for the purpose of high efficiency, productivity, and flexibility, MXNet(pronounced as mix-net) is a deep learning framework supported by Python, R, C++, and Julia.1. **Chainer**>Highly powerful, dynamic and intuitive, Chainer is a Python-based deep learning framework for neural networks that is designed by the run strategy. Compared to other frameworks that use the same strategy, you can modify the networks during runtime, allowing you to execute arbitrary control flow statements.1. **Deeplearning4j**>Parallel training through iterative reduce, microservice architecture adaptation, and distributed CPUs and GPUs are some of the salient features of the Deeplearning4j deep learning framework. It is developed in Java as well as Scala and supports other JVM languages, too.1. **Theano**>Theano is beautiful. Without Theano, we wouldn’t have anywhere near the amount of deep learning libraries (specifically in Python) that we do today. In the same way that without NumPy, we couldn’t have SciPy, scikit-learn, and scikit-image, the same can be said about Theano and higher-level abstractions of deep learning.1. **Lasagne**>Lasagne is a lightweight library used to construct and train networks in Theano. The key term here is lightweight — it is not meant to be a heavy wrapper around Theano like Keras is. While this leads to your code being more verbose, it does free you from any restraints, while still giving you modular building blocks based on Theano.1. **PaddlePaddle**>PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform, which is originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. [Go to top](top) 5-2 Import
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from nltk.corpus import stopwords
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import string
import scipy
import numpy
import nltk
import json
import sys
import csv
import os
###Output
_____no_output_____
###Markdown
5-3 version
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
matplotlib: 2.2.3
sklearn: 0.20.0
scipy: 1.1.0
seaborn: 0.9.0
pandas: 0.23.4
numpy: 1.15.4
Python: 3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16)
[GCC 7.3.0]
###Markdown
5-4 SetupA few tiny adjustments for better **code readability**
###Code
sns.set(style='white', context='notebook', palette='deep')
pylab.rcParams['figure.figsize'] = 12,8
warnings.filterwarnings('ignore')
mpl.style.use('ggplot')
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
6- EDA In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. * Which variables suggest interesting relationships?* Which observations are unusual?* Analysis of the features!By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. then We will review analytical and statistical operations:1. Data Collection1. Visualization1. Data Cleaning1. Data Preprocessing [Go to top](top) 6-1 Data Collection**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypothesis and evaluate outcomes of the particular collection.[techopedia]I start Collection Data by the training and testing datasets into **Pandas DataFrames** [Go to top](top)
###Code
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
###Output
_____no_output_____
###Markdown
**>*** Each **row** is an observation (also known as : sample, example, instance, record)* Each **column** is a feature (also known as: Predictor, attribute, Independent Variable, input, regressor, Covariate) [Go to top](top)
###Code
train.sample(1)
test.sample(1)
###Output
_____no_output_____
###Markdown
 6-1-1 FeaturesFeatures can be of the following types:* numeric* categorical* ordinal* datetime* coordinatesFind the type of features in the **Quora dataset**?!For getting some information about the dataset you can use the **info()** command
###Code
print(train.info())
print(test.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 56370 entries, 0 to 56369
Data columns (total 2 columns):
qid 56370 non-null object
question_text 56370 non-null object
dtypes: object(2)
memory usage: 880.9+ KB
None
###Markdown
6-1-2 Explorer Dataset1- Dimensions of the dataset.2- Peek at the data itself.3- Statistical summary of all attributes.4- Breakdown of the data by the class variable.Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects. [Go to top](top)
###Code
# shape
print('Shape of train:',train.shape)
print('Shape of test:',test.shape)
#columns*rows
train.size
###Output
_____no_output_____
###Markdown
After loading the data via **pandas**, we should check what the content is and get a description of it via the following:
###Code
type(train)
type(test)
###Output
_____no_output_____
###Markdown
To display 5 random rows from the data set, we can use the **sample(5)** function and inspect the type of the features.
###Code
train.sample(5)
###Output
_____no_output_____
###Markdown
6-2 Data CleaningWhen dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[4] [Go to top](top) how many NA elements in every column!!Good news, it is Zero!to check out how many null info are on the dataset, we can use **isnull().sum()**
###Code
train.isnull().sum()
###Output
_____no_output_____
###Markdown
But if we had , we can just use **dropna()**(be careful sometimes you should not do this!)
###Code
# remove rows that have NA's
print('Before Droping',train.shape)
train = train.dropna()
print('After Droping',train.shape)
###Output
Before Droping (1306122, 3)
After Droping (1306122, 3)
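###Markdown
The cleaning checklist above also mentions deduplication. As a small extra check (a hedged addition, not part of the original workflow), we can count how many question texts appear more than once:
###Code
# extra check: count rows whose question_text repeats verbatim
n_dup = train.duplicated(subset=['question_text']).sum()
print('duplicated question_text rows:', n_dup)
###Output
_____no_output_____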
###Markdown
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property. To print the dataset **columns**, we can use the columns attribute.
###Code
train.columns
###Output
_____no_output_____
###Markdown
You can see the number of unique values for the target with the command below:
###Code
train_target = train['target'].values
np.unique(train_target)
###Output
_____no_output_____
###Markdown
YES, the Quora problem is a **binary classification**! :) To check the first 5 rows of the data set, we can use head(5).
###Code
train.head(5)
###Output
_____no_output_____
###Markdown
Or to check out the last 5 rows of the data set, we use the tail() function.
###Code
train.tail()
###Output
_____no_output_____
###Markdown
To give a **statistical summary** of the dataset, we can use **describe()**
###Code
train.describe()
###Output
_____no_output_____
###Markdown
As you can see, the statistical information that this command gives us is not suitable for this type of data**describe() is more useful for numerical data sets** 6-3 Data Preprocessing**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm. Data Preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in raw format which is not feasible for the analysis.there are plenty of steps for data preprocessing and we just listed some of them in general(Not just for Quora) :* removing Target column (id)* Sampling (without replacement)* Making part of iris unbalanced and balancing (with undersampling and SMOTE)* Introducing missing values and treating them (replacing by average values)* Noise filtering* Data discretization* Normalization and standardization* PCA analysis* Feature selection (filter, embedded, wrapper) [Go to top](top) **>**in pandas's data frame you can perform some query such as "where"
###Code
train.where(train ['target']==1).count()
###Output
_____no_output_____
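###Markdown
Returning to the preprocessing checklist above, the next cell is a minimal hedged sketch of two of those steps on this dataset (sampling without replacement and a rough text normalization); it is illustrative only and its output is not reused later in this kernel.
###Code
import string

# sampling without replacement: keep a random 10% of the training rows
sample_df = train.sample(frac=0.1, replace=False, random_state=42)
print('sample shape:', sample_df.shape)

# rough text normalization: lower-case and strip punctuation from a question
def normalize_text(text):
    return text.lower().translate(str.maketrans('', '', string.punctuation))

print(normalize_text(train['question_text'].iloc[0]))
###Output
_____no_output_____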
###Markdown
As you can see below, in Python it is easy to perform a query on the dataframe:
###Code
train[train['target']>1]
###Output
_____no_output_____
###Markdown
Some examples of questions that are insincere:
###Code
train[train['target']==1].head(5)
###Output
_____no_output_____
###Markdown
6-3-1 Is data set imbalance?
###Code
train_target.mean()
###Output
_____no_output_____
###Markdown
A large part of the data is unbalanced, but **how can we solve it?**
###Code
train["target"].value_counts()
# data is imbalance
###Output
_____no_output_____
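###Markdown
One common remedy, sketched below as a hedged illustration (it is not applied in the rest of this kernel), is to derive per-class weights that a classifier can use to compensate for the imbalance:
###Code
from sklearn.utils.class_weight import compute_class_weight
import numpy as np

# 'balanced' weights are inversely proportional to the class frequencies
classes = np.array([0, 1])
weights = compute_class_weight(class_weight='balanced', classes=classes, y=train['target'].values)
print(dict(zip(classes, weights)))
###Output
_____no_output_____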
###Markdown
Imbalanced dataset is relevant primarily in the context of supervised machine learning involving two or more classes. Imbalance means that the number of data points available for different the classes is different:If there are two classes, then balanced data would mean 50% points for each of the class. For most machine learning techniques, little imbalance is not a problem. So, if there are 60% points for one class and 40% for the other class, it should not cause any significant performance degradation. Only when the class imbalance is high, e.g. 90% points for one class and 10% for the other, standard optimization criteria or performance measures may not be as effective and would need modification.[Image source](http://api.ning.com/files/vvHEZw33BGqEUW8aBYm4epYJWOfSeUBPVQAsgz7aWaNe0pmDBsjgggBxsyq*8VU1FdBshuTDdL2-bp2ALs0E-0kpCV5kVdwu/imbdata.png)A typical example of imbalanced data is encountered in e-mail classification problem where emails are classified into ham or spam. The number of spam emails is usually lower than the number of relevant (ham) emails. So, using the original distribution of two classes leads to imbalanced dataset.Using accuracy as a performace measure for highly imbalanced datasets is not a good idea. For example, if 90% points belong to the true class in a binary classification problem, a default prediction of true for all data poimts leads to a classifier which is 90% accurate, even though the classifier has not learnt anything about the classification problem at hand![9] 6-3-2 Some Feature Engineering [NLTK](https://www.nltk.org/) is one of the leading platforms for working with human language data and Python, the module NLTK is used for natural language processing. NLTK is literally an acronym for Natural Language Toolkit.We get a set of **English stop** words using the line
###Code
#from nltk.corpus import stopwords
eng_stopwords = set(stopwords.words("english"))
###Output
_____no_output_____
###Markdown
The returned list stopWords contains **179 stop words** on my computer. You can view the length or contents of this array with the lines:
###Code
print(len(eng_stopwords))
print(eng_stopwords)
###Output
179
{'them', "aren't", 'been', "you'll", "doesn't", 'under', 'so', 'no', 'we', 'into', "haven't", 'in', 'yourselves', "hadn't", 'be', 'few', 'own', 'both', 'couldn', 'then', 'here', 'now', 'more', 'ours', 'to', 'nor', 're', 'but', 'up', 'hasn', 'shan', "mustn't", 'through', "you'd", 'was', 'herself', 'too', 'weren', 'other', "you've", 'are', 'because', 'not', 'very', 'of', 'there', 'll', 'doing', 'they', 'against', 'you', 's', 'were', 'shouldn', 'or', 'above', 'who', 'do', 'have', 'your', 'can', 'just', 'had', 'itself', 'myself', 'wouldn', 'any', 'after', 'our', 'this', 'on', 'if', 'during', 'why', "should've", "she's", 'their', 'these', 'where', "couldn't", 'has', 'isn', 'a', 'didn', 'i', 'which', 'aren', 'over', 'once', 'his', 'm', 'her', 'as', 'its', 'an', "it's", "don't", 'o', 'should', "you're", "mightn't", 'doesn', 'he', 'that', 'before', 'same', 'what', 'most', "shan't", 'again', 'off', 'my', 'himself', 'further', 'am', 'will', 'those', "isn't", 'by', 'does', 'while', 'is', "shouldn't", "that'll", 'the', 'from', 'did', 'with', 'don', 'hadn', 'hers', 'some', 'ourselves', 'such', 'him', 'themselves', 'she', "hasn't", 'it', 'needn', 'theirs', 'between', 'at', 'ma', "wasn't", "weren't", 'ain', 'having', 'yours', 'out', 'each', 'than', 'haven', 'how', 'mightn', 'when', 'and', 'being', 'won', 'until', 'down', 'mustn', 'me', 'all', 'd', 't', 'only', 'y', 'yourself', 've', 'wasn', 'whom', "won't", 'below', "wouldn't", 'for', "needn't", "didn't", 'about'}
###Markdown
The metafeatures that we'll create based on SRK's EDAs, [sudalairajkumar](http://www.kaggle.com/sudalairajkumar/simple-feature-engg-notebook-spooky-author) and [tunguz](https://www.kaggle.com/tunguz/just-some-simple-eda) are:1. Number of words in the text1. Number of unique words in the text1. Number of characters in the text1. Number of stopwords1. Number of punctuations1. Number of upper case words1. Number of title case words1. Average length of the words [Go to top](top) Number of words in the text
###Code
train["num_words"] = train["question_text"].apply(lambda x: len(str(x).split()))
test["num_words"] = test["question_text"].apply(lambda x: len(str(x).split()))
print('maximum of num_words in train',train["num_words"].max())
print('min of num_words in train',train["num_words"].min())
print("maximum of num_words in test",test["num_words"].max())
print('min of num_words in train',test["num_words"].min())
###Output
maximum of num_words in train 134
min of num_words in train 1
maximum of num_words in test 87
min of num_words in train 2
###Markdown
Number of unique words in the text
###Code
train["num_unique_words"] = train["question_text"].apply(lambda x: len(set(str(x).split())))
test["num_unique_words"] = test["question_text"].apply(lambda x: len(set(str(x).split())))
print('maximum of num_unique_words in train',train["num_unique_words"].max())
print('mean of num_unique_words in train',train["num_unique_words"].mean())
print("maximum of num_unique_words in test",test["num_unique_words"].max())
print('mean of num_unique_words in train',test["num_unique_words"].mean())
###Output
maximum of num_unique_words in train 96
mean of num_unique_words in train 12.135776749798257
maximum of num_unique_words in test 61
mean of num_unique_words in train 12.096363313819408
###Markdown
Number of characters in the text
###Code
train["num_chars"] = train["question_text"].apply(lambda x: len(str(x)))
test["num_chars"] = test["question_text"].apply(lambda x: len(str(x)))
print('maximum of num_chars in train',train["num_chars"].max())
print("maximum of num_chars in test",test["num_chars"].max())
###Output
maximum of num_chars in train 1017
maximum of num_chars in test 588
###Markdown
Number of stopwords in the text
###Code
train["num_stopwords"] = train["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
test["num_stopwords"] = test["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
print('maximum of num_stopwords in train',train["num_stopwords"].max())
print("maximum of num_stopwords in test",test["num_stopwords"].max())
###Output
maximum of num_stopwords in train 56
maximum of num_stopwords in test 47
###Markdown
Number of punctuations in the text
###Code
train["num_punctuations"] =train['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
test["num_punctuations"] =test['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
print('maximum of num_punctuations in train',train["num_punctuations"].max())
print("maximum of num_punctuations in test",test["num_punctuations"].max())
###Output
maximum of num_punctuations in train 411
maximum of num_punctuations in test 260
###Markdown
Number of upper case words in the text
###Code
train["num_words_upper"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
test["num_words_upper"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
print('maximum of num_words_upper in train',train["num_words_upper"].max())
print("maximum of num_words_upper in test",test["num_words_upper"].max())
###Output
maximum of num_words_upper in train 37
maximum of num_words_upper in test 36
###Markdown
Number of title case words in the text
###Code
train["num_words_title"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
test["num_words_title"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
print('maximum of num_words_title in train',train["num_words_title"].max())
print("maximum of num_words_title in test",test["num_words_title"].max())
###Output
maximum of num_words_title in train 37
maximum of num_words_title in test 24
###Markdown
Average length of the words in the text
###Code
train["mean_word_len"] = train["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
test["mean_word_len"] = test["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
print('mean_word_len in train',train["mean_word_len"].max())
print("mean_word_len in test",test["mean_word_len"].max())
###Output
mean_word_len in train 57.666666666666664
mean_word_len in test 29.333333333333332
###Markdown
We added some new features to the train and test data sets; now print the columns again.
###Code
print(train.columns)
train.head(1)
###Output
Index(['qid', 'question_text', 'target', 'num_words', 'num_unique_words',
'num_chars', 'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len'],
dtype='object')
###Markdown
**>**>**Preprocessing and generation pipelines depend on a model type** 6-4 Data Visualization**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.> * Two** important rules** for Data visualization:> 1. Do not put too little information> 1. Do not put too much information [Go to top](top) 6-4-1 countplot
###Code
ax=sns.countplot(x='target',hue="target", data=train ,linewidth=5,edgecolor=sns.color_palette("dark", 3))
plt.title('Is data set imbalance?');
ax = sns.countplot(y="target", hue="target", data=train)
plt.title('Is data set imbalance?');
###Output
_____no_output_____
###Markdown
6-4-2 pie plot
###Code
ax=train['target'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%' ,shadow=True)
ax.set_title('target')
ax.set_ylabel('')
plt.show()
#plt.pie(train['target'],autopct='%1.1f%%')
#plt.axis('equal')
#plt.show()
###Output
_____no_output_____
###Markdown
6-4-3 Histogram
###Code
f,ax=plt.subplots(1,2,figsize=(20,10))
train[train['target']==0].num_words.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('target= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
train[train['target']==1].num_words.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('target= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
train[['target','num_words']].groupby(['target']).mean().plot.bar(ax=ax[0])
ax[0].set_title('num_words vs target')
sns.countplot('num_words',hue='target',data=train,ax=ax[1])
ax[1].set_title('num_words:target=0 vs target=1')
plt.show()
# histograms
train.hist(figsize=(15,20))
plt.figure()
train["num_words"].hist();
###Output
_____no_output_____
###Markdown
6-4-4 violin plot
###Code
sns.violinplot(data=train,x="target", y="num_words")
sns.violinplot(data=train,x="target", y="num_words_upper")
###Output
_____no_output_____
###Markdown
6-4-5 kdeplot
###Code
sns.FacetGrid(train, hue="target", size=5).map(sns.kdeplot, "num_words").add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
A Data Science Framework for Quora Quite Practical and Far from any Theoretical Conceptslast update: 11/24/2018You can Fork and Run this kernel on **Github**:> [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) 1- Introduction**Quora** has defined a competition in **Kaggle**. A realistic and attractive data set for data scientists.on this notebook, I will provide a **comprehensive** approach to solve Quora classification problem.I am open to getting your feedback for improving this **kernel**. Notebook Content1. [Introduction](1)1. [Data Science Workflow for Quora](2)1. [Problem Definition](3) 1. [Business View](4) 1. [Real world Application Vs Competitions](31) 1. [What is a insincere question?](5) 1. [How can we find insincere question?](6)1. [Problem feature](7) 1. [Aim](8) 1. [Variables](9) 1. [ Inputs & Outputs](10)1. [Select Framework](11) 1. [Import](12) 1. [Version](13) 1. [Setup](14)1. [Exploratory data analysis](15) 1. [Data Collection](16) 1. [Features](17) 1. [Explorer Dataset](18) 1. [Data Cleaning](19) 1. [Data Preprocessing](20) 1. [Is data set imbalance?](21) 1. [Some Feature Engineering](22) 1. [Data Visualization](23) 1. [countplot](61) 1. [pie plot](62) 1. [Histogram](63) 1. [violin plot](64) 1. [kdeplot](65)1. [Apply Learning](24)1. [Conclusion](25)1. [References](26) ------------------------------------------------------------------------------------------------------------- **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** ----------- 2- A Data Science Workflow for QuoraOf course, the same solution can not be provided for all problems, so the best way is to create a **general framework** and adapt it to new problem.**You can see my workflow in the below image** : **You should feel free to adjust this checklist to your needs** [Go to top](top) 3- Problem DefinitionI think one of the important things when you start a new machine learning project is Defining your problem. that means you should understand business problem.( **Problem Formalization**)> **we will be predicting whether a question asked on Quora is sincere or not.** 3-1 About QuoraQuora is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers. 3-2 Business View An existential problem for any major website today is how to handle toxic and divisive content. **Quora** wants to tackle this problem head-on to keep their platform a place where users can feel safe sharing their knowledge with the world.**Quora** is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers.In this kernel, I will develop models that identify and flag insincere questions.we Help Quora uphold their policy of “Be Nice, Be Respectful” and continue to be a place for sharing and growing the world’s knowledge. 
3-2-1 Real world Application Vs CompetitionsJust a simple comparison between real-world apps with competitions: 3-3 What is a insincere question?Is defined as a question intended to make a **statement** rather than look for **helpful answers**. 3-4 How can we find insincere question?Some characteristics that can signify that a question is insincere:1. **Has a non-neutral tone** 1. Has an exaggerated tone to underscore a point about a group of people 1. Is rhetorical and meant to imply a statement about a group of people1. **Is disparaging or inflammatory** 1. Suggests a discriminatory idea against a protected class of people, or seeks confirmation of a stereotype 1. Makes disparaging attacks/insults against a specific person or group of people 1. Based on an outlandish premise about a group of people 1. Disparages against a characteristic that is not fixable and not measurable1. **Isn't grounded in reality** 1. Based on false information, or contains absurd assumptions 1. Uses sexual content (incest, bestiality, pedophilia) for shock value, and not to seek genuine answers [Go to top](top) 4- Problem FeatureProblem Definition has three steps that have illustrated in the picture below:1. Aim1. Variable1. Inputs & Outputs 4-1 AimWe will be predicting whether a question asked on Quora is **sincere** or not. 4-2 Variables1. qid - unique question identifier1. question_text - Quora question text1. target - a question labeled "insincere" has a value of 1, otherwise 0 4-3 Inputs & Outputswe use train.csv and test.csv as Input and we should upload a submission.csv as Output**>**> You must answer the following question:How does your company expect to use and benefit from **your model**. [Go to top](top) 5- Select FrameworkAfter problem definition and problem feature, we should select our framework to solve the problem.What we mean by the framework is that the programming languages you use and by what modules the problem will be solved. 5-1 Python Deep Learning Packages*State of open source deep learning frameworks in 2017*1. **keras**[11]>Well known for being minimalistic, the Keras neural network library (with a supporting interface of Python) supports both convolutional and recurrent networks that are capable of running on either TensorFlow or Theano. The library is written in Python and was developed keeping quick experimentation as its USP.1. **TensorFlow**> TensorFlow is arguably one of the best deep learning frameworks and has been adopted by several giants such as Airbus, Twitter, IBM, and others mainly due to its highly flexible system architecture.1. **Caffe**> Caffe is a deep learning framework that is supported with interfaces like C, C++, Python, and MATLAB as well as the command line interface. It is well known for its speed and transposability and its applicability in modeling convolution neural networks (CNN).1. **Microsoft Cognitive Toolkit/CNTK**> Popularly known for easy training and the combination of popular model types across servers, the Microsoft Cognitive Toolkit (previously known as CNTK) is an open-source deep learning framework to train deep learning models. It performs efficient convolution neural networks and training for image, speech, and text-based data. Similar to Caffe, it is supported by interfaces such as Python, C++, and the command line interface.1. **Torch/PyTorch**> Torch is a scientific computing framework that offers wide support for machine learning algorithms. 
It is a Lua-based deep learning framework and is used widely amongst industry giants such as Facebook, Twitter, and Google. It employs CUDA along with C/C++ libraries for processing and was basically made to scale the production of building models and provide overall flexibility.1. **MXNet**> Designed specifically for the purpose of high efficiency, productivity, and flexibility, MXNet(pronounced as mix-net) is a deep learning framework supported by Python, R, C++, and Julia.1. **Chainer**>Highly powerful, dynamic and intuitive, Chainer is a Python-based deep learning framework for neural networks that is designed by the run strategy. Compared to other frameworks that use the same strategy, you can modify the networks during runtime, allowing you to execute arbitrary control flow statements.1. **Deeplearning4j**>Parallel training through iterative reduce, microservice architecture adaptation, and distributed CPUs and GPUs are some of the salient features of the Deeplearning4j deep learning framework. It is developed in Java as well as Scala and supports other JVM languages, too.1. **Theano**>Theano is beautiful. Without Theano, we wouldn’t have anywhere near the amount of deep learning libraries (specifically in Python) that we do today. In the same way that without NumPy, we couldn’t have SciPy, scikit-learn, and scikit-image, the same can be said about Theano and higher-level abstractions of deep learning.1. **Lasagne**>Lasagne is a lightweight library used to construct and train networks in Theano. The key term here is lightweight — it is not meant to be a heavy wrapper around Theano like Keras is. While this leads to your code being more verbose, it does free you from any restraints, while still giving you modular building blocks based on Theano.1. **PaddlePaddle**>PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform, which is originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. [Go to top](top) 5-2 Import
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from wordcloud import WordCloud as wc
from nltk.corpus import stopwords
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import string
import scipy
import numpy
import nltk
import json
import sys
import csv
import os
###Output
_____no_output_____
###Markdown
5-3 version
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
matplotlib: 2.2.3
sklearn: 0.20.0
scipy: 1.1.0
seaborn: 0.9.0
pandas: 0.23.4
numpy: 1.15.4
Python: 3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16)
[GCC 7.3.0]
###Markdown
 5-4 SetupA few tiny adjustments for better **code readability** 5-5 NLTKThe Natural Language Toolkit (NLTK) is one of the leading platforms for working with human language data in Python; the NLTK module is used for natural language processing. NLTK is literally an acronym for Natural Language Toolkit. With it you can tokenize words and sentences. NLTK is a Python library that can be used to analyse very large amounts of textual data using computational methods.
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack a dull boy, all work and no play"
print(word_tokenize(data))
###Output
_____no_output_____
###Markdown
All of them are words except the comma. Special characters are treated as separate tokens. 5-5-1 Tokenizing sentencesThe same principle can be applied to sentences. Simply change word_tokenize() to sent_tokenize(). We have added two sentences to the variable data:
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
print(sent_tokenize(data))
###Output
_____no_output_____
###Markdown
 5-5-2 NLTK and arraysIf you wish, you can store the words and sentences in arrays:
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
phrases = sent_tokenize(data)
words = word_tokenize(data)
print(phrases)
print(words)
###Output
_____no_output_____
###Markdown
5-5-3 NLTK stop wordsNatural language processing (nlp) is a research field that presents many challenges such as natural language understanding.Text may contain stop words like ‘the’, ‘is’, ‘are’. Stop words can be filtered from the text to be processed. There is no universal list of stop words in nlp research, however the nltk module contains a list of stop words.In this article you will learn how to remove stop words with the nltk module.
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
stopWords = set(stopwords.words('english'))
words = word_tokenize(data)
wordsFiltered = []
for w in words:
if w not in stopWords:
wordsFiltered.append(w)
print(wordsFiltered)
###Output
_____no_output_____
###Markdown
A module has been imported:
###Code
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
We get a set of English stop words using the line:
###Code
stopWords = set(stopwords.words('english'))
###Output
_____no_output_____
###Markdown
The returned list stopWords contains 153 stop words on my computer.You can view the length or contents of this array with the lines:
###Code
print(len(stopWords))
print(stopWords)
###Output
_____no_output_____
###Markdown
We create a new list called wordsFiltered which contains all words that are not stop words. To create it, we iterate over the list of words and only add a word if it is not in the stopWords list.
###Code
for w in words:
if w not in stopWords:
wordsFiltered.append(w)
###Output
_____no_output_____
###Markdown
5-5-4 NLTK – stemmingStart by defining some words:
###Code
words = ["game","gaming","gamed","games"]
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
###Output
_____no_output_____
###Markdown
And stem the words in the list using:
###Code
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
words = ["game","gaming","gamed","games"]
ps = PorterStemmer()
for word in words:
print(ps.stem(word))
###Output
_____no_output_____
###Markdown
5-5-5 NLTK speech taggingThe module NLTK can automatically tag speech.Given a sentence or paragraph, it can label words such as verbs, nouns and so on.NLTK – speech tagging exampleThe example below automatically tags words with a corresponding class.
###Code
import nltk
from nltk.tokenize import PunktSentenceTokenizer
document = 'Whether you\'re new to programming or an experienced developer, it\'s easy to learn and use Python.'
sentences = nltk.sent_tokenize(document)
for sent in sentences:
print(nltk.pos_tag(nltk.word_tokenize(sent)))
###Output
_____no_output_____
###Markdown
We can filter this data based on the type of word:
###Code
import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
document = 'Today the Netherlands celebrates King\'s Day. To honor this tradition, the Dutch embassy in San Francisco invited me to'
sentences = nltk.sent_tokenize(document)
data = []
for sent in sentences:
data = data + nltk.pos_tag(nltk.word_tokenize(sent))
for word in data:
if 'NNP' in word[1]:
print(word)
sns.set(style='white', context='notebook', palette='deep')
pylab.rcParams['figure.figsize'] = 12,8
warnings.filterwarnings('ignore')
mpl.style.use('ggplot')
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
6- EDA In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. * Which variables suggest interesting relationships?* Which observations are unusual?* Analysis of the features!By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. then We will review analytical and statistical operations:1. Data Collection1. Visualization1. Data Cleaning1. Data Preprocessing [Go to top](top) 6-1 Data Collection**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypothesis and evaluate outcomes of the particular collection.[techopedia]I start Collection Data by the training and testing datasets into **Pandas DataFrames**. [Go to top](top)
###Code
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
###Output
_____no_output_____
###Markdown
**>*** Each **row** is an observation (also known as : sample, example, instance, record).* Each **column** is a feature (also known as: Predictor, attribute, Independent Variable, input, regressor, Covariate). [Go to top](top)
###Code
train.sample(1)
test.sample(1)
###Output
_____no_output_____
###Markdown
Or you can use other commands to explore the dataset, such as
###Code
train.tail(1)
###Output
_____no_output_____
###Markdown
 6-1-1 FeaturesFeatures can be of the following types:* numeric* categorical* ordinal* datetime* coordinatesFind the type of features in the **Quora dataset**?!For getting some information about the dataset you can use the **info()** command.
###Code
print(train.info())
print(test.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 56370 entries, 0 to 56369
Data columns (total 2 columns):
qid 56370 non-null object
question_text 56370 non-null object
dtypes: object(2)
memory usage: 880.9+ KB
None
###Markdown
6-1-2 Explorer Dataset1- Dimensions of the dataset.2- Peek at the data itself.3- Statistical summary of all attributes.4- Breakdown of the data by the class variable.Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects. [Go to top](top)
###Code
# shape for train and test
print('Shape of train:',train.shape)
print('Shape of test:',test.shape)
#columns*rows
train.size
###Output
_____no_output_____
###Markdown
After loading the data via **pandas**, we should check what the content is and get a description of it via the following:
###Code
type(train)
type(test)
train.describe()
###Output
_____no_output_____
###Markdown
To display 5 random rows from the data set, we can use the **sample(5)** function and inspect the type of the features.
###Code
train.sample(5)
###Output
_____no_output_____
###Markdown
6-2 Data CleaningWhen dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[4] [Go to top](top) How many NA elements in every column!!Good news, it is Zero!To check out how many null info are on the dataset, we can use **isnull().sum()**.
###Code
train.isnull().sum()
###Output
_____no_output_____
###Markdown
But if we had any, we could just use **dropna()** (be careful, sometimes you should not do this!)
###Code
# remove rows that have NA's
print('Before Droping',train.shape)
train = train.dropna()
print('After Droping',train.shape)
###Output
Before Droping (1306122, 3)
After Droping (1306122, 3)
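###Markdown
Besides missing values, another quick sanity check (a hedged addition, not part of the original workflow) is to look for questions that are empty or whitespace-only:
###Code
# count question_text rows that are empty after stripping whitespace
n_empty = (train['question_text'].str.strip().str.len() == 0).sum()
print('empty or whitespace-only questions:', n_empty)
###Output
_____no_output_____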
###Markdown
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property. To print the dataset **columns**, we can use the columns attribute.
###Code
train.columns
###Output
_____no_output_____
###Markdown
You can see the number of unique values for the target with the command below:
###Code
train_target = train['target'].values
np.unique(train_target)
###Output
_____no_output_____
###Markdown
YES, the Quora problem is a **binary classification**! :) To check the first 5 rows of the data set, we can use head(5).
###Code
train.head(5)
###Output
_____no_output_____
###Markdown
Or to check out the last 5 rows of the data set, we use the tail() function.
###Code
train.tail()
###Output
_____no_output_____
###Markdown
To give a **statistical summary** about the dataset, we can use **describe()**
###Code
train.describe()
###Output
_____no_output_____
###Markdown
As you can see, the statistical information that this command gives us is not suitable for this type of data**describe() is more useful for numerical data sets** 6-3 Data Preprocessing**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm. Data Preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in raw format which is not feasible for the analysis.there are plenty of steps for data preprocessing and we just listed some of them in general(Not just for Quora) :1. removing Target column (id)1. Sampling (without replacement)1. Making part of iris unbalanced and balancing (with undersampling and SMOTE)1. Introducing missing values and treating them (replacing by average values)1. Noise filtering1. Data discretization1. Normalization and standardization1. PCA analysis1. Feature selection (filter, embedded, wrapper)1. Etc.What methods of preprocessing can we run on Quora?! [Go to top](top) **>**in pandas's data frame you can perform some query such as "where"
###Code
train.where(train ['target']==1).count()
###Output
_____no_output_____
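###Markdown
One item from the preprocessing list above, feature selection, can be illustrated on the raw text. The next cell is a hedged sketch only (the vectorizer settings are arbitrary and the result is not reused later): the questions are turned into TF-IDF features and the terms most associated with the target are kept.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# TF-IDF on the question text, then keep the 500 terms most associated with the target
vectorizer = TfidfVectorizer(max_features=20000, stop_words='english')
X_text = vectorizer.fit_transform(train['question_text'])
selector = SelectKBest(chi2, k=500)
X_selected = selector.fit_transform(X_text, train['target'])
print('before selection:', X_text.shape, 'after selection:', X_selected.shape)
###Output
_____no_output_____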
###Markdown
As you can see below, in Python it is easy to perform a query on the dataframe:
###Code
train[train['target']>1]
###Output
_____no_output_____
###Markdown
Some examples of questions that are insincere:
###Code
train[train['target']==1].head(5)
###Output
_____no_output_____
###Markdown
6-3-1 Is data set imbalance?
###Code
train_target.mean()
###Output
_____no_output_____
###Markdown
A large part of the data is unbalanced, but **how can we solve it?**
###Code
train["target"].value_counts()
# data is imbalance
###Output
_____no_output_____
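###Markdown
A hedged sketch of one possible answer (random undersampling of the majority class) is shown below; it is illustrative only and the balanced frame is not used in the rest of this kernel:
###Code
# undersample class 0 down to the size of class 1, then shuffle the rows
minority = train[train['target'] == 1]
majority = train[train['target'] == 0].sample(n=len(minority), random_state=42)
balanced = pd.concat([minority, majority]).sample(frac=1, random_state=42)
print(balanced['target'].value_counts())
###Output
_____no_output_____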
###Markdown
**Imbalanced dataset** is relevant primarily in the context of supervised machine learning involving two or more classes. **Imbalance** means that the number of data points available for different the classes is different:If there are two classes, then balanced data would mean 50% points for each of the class. For most machine learning techniques, little imbalance is not a problem. So, if there are 60% points for one class and 40% for the other class, it should not cause any significant performance degradation. Only when the class imbalance is high, e.g. 90% points for one class and 10% for the other, standard optimization criteria or performance measures may not be as effective and would need modification.[Image source](http://api.ning.com/files/vvHEZw33BGqEUW8aBYm4epYJWOfSeUBPVQAsgz7aWaNe0pmDBsjgggBxsyq*8VU1FdBshuTDdL2-bp2ALs0E-0kpCV5kVdwu/imbdata.png)A typical example of imbalanced data is encountered in e-mail classification problem where emails are classified into ham or spam. The number of spam emails is usually lower than the number of relevant (ham) emails. So, using the original distribution of two classes leads to imbalanced dataset.Using accuracy as a performace measure for highly imbalanced datasets is not a good idea. For example, if 90% points belong to the true class in a binary classification problem, a default prediction of true for all data poimts leads to a classifier which is 90% accurate, even though the classifier has not learnt anything about the classification problem at hand![9] 6-3-2 Exploreing Question
###Code
question = train['question_text']
i=0
for q in question[:5]:
i=i+1
print('sample '+str(i)+':' ,q)
# strip digit characters from every question text
text_withnumber = train['question_text']
result = text_withnumber.apply(lambda x: ''.join(ch for ch in str(x) if not ch.isdigit()))
###Output
_____no_output_____
###Markdown
6-3-2 Some Feature Engineering [NLTK](https://www.nltk.org/) is one of the leading platforms for working with human language data and Python, the module NLTK is used for natural language processing. NLTK is literally an acronym for Natural Language Toolkit.We get a set of **English stop** words using the line
###Code
#from nltk.corpus import stopwords
eng_stopwords = set(stopwords.words("english"))
###Output
_____no_output_____
###Markdown
The returned list stopWords contains **179 stop words** on my computer. You can view the length or contents of this array with the lines:
###Code
print(len(eng_stopwords))
print(eng_stopwords)
###Output
179
{'can', "mustn't", "wasn't", 'mightn', 'herself', 'most', 'on', 'why', 'how', 'ours', "you'd", 'are', 'other', 'my', 'himself', "it's", "that'll", 'their', 'but', 'same', 'was', 'for', 'is', 'of', 'him', 'under', 'each', 'because', 'a', 'by', 'am', 'about', 'so', 'i', 't', 'nor', 'myself', "haven't", 'the', 'after', "wouldn't", 'that', "don't", 'needn', 'or', 'wasn', 'which', 'all', "doesn't", 'being', 'only', 'few', 'will', 'our', "you'll", 'where', 's', 're', 'both', 'shan', "mightn't", 'whom', 'if', 'some', 'o', 'hasn', 'while', 'own', 'having', 'such', 'above', 'what', 'be', 'then', 'very', 'been', 'it', 'any', 'during', 'your', 'do', "weren't", 'he', "should've", 'have', 'hadn', 'should', 'with', 'once', 'not', 'm', 'further', "aren't", 'too', "won't", 'weren', 'more', 'against', 'his', 'again', 'her', 'through', 'until', 'mustn', 'yourself', 'here', 'did', 'than', 'me', 'won', 'yours', 'ma', 'has', "she's", 'we', 'between', 'she', 'ourselves', 'wouldn', 'below', 'does', 'aren', 'as', 'had', 'and', 'them', "shouldn't", 'doesn', 'shouldn', 'hers', 'no', 'this', 'themselves', 'when', "couldn't", 'its', 'll', 'yourselves', 'don', 'at', 'theirs', 'to', 'from', 've', 'itself', 'just', 'over', 'off', "needn't", 'out', 'you', 'y', 'in', 'didn', 'these', 'those', 'before', "hadn't", 'up', "shan't", "isn't", "didn't", 'an', 'd', 'were', 'now', 'down', "you're", "hasn't", "you've", 'doing', 'into', 'haven', 'isn', 'couldn', 'ain', 'who', 'there', 'they'}
###Markdown
The metafeatures that we'll create based on SRK's EDAs, [sudalairajkumar](http://www.kaggle.com/sudalairajkumar/simple-feature-engg-notebook-spooky-author) and [tunguz](https://www.kaggle.com/tunguz/just-some-simple-eda) are:1. Number of words in the text1. Number of unique words in the text1. Number of characters in the text1. Number of stopwords1. Number of punctuations1. Number of upper case words1. Number of title case words1. Average length of the words [Go to top](top) Number of words in the text
###Code
train["num_words"] = train["question_text"].apply(lambda x: len(str(x).split()))
test["num_words"] = test["question_text"].apply(lambda x: len(str(x).split()))
print('maximum of num_words in train',train["num_words"].max())
print('min of num_words in train',train["num_words"].min())
print("maximum of num_words in test",test["num_words"].max())
print('min of num_words in train',test["num_words"].min())
###Output
maximum of num_words in train 134
min of num_words in train 1
maximum of num_words in test 87
min of num_words in train 2
###Markdown
Number of unique words in the text
###Code
train["num_unique_words"] = train["question_text"].apply(lambda x: len(set(str(x).split())))
test["num_unique_words"] = test["question_text"].apply(lambda x: len(set(str(x).split())))
print('maximum of num_unique_words in train',train["num_unique_words"].max())
print('mean of num_unique_words in train',train["num_unique_words"].mean())
print("maximum of num_unique_words in test",test["num_unique_words"].max())
print('mean of num_unique_words in train',test["num_unique_words"].mean())
###Output
maximum of num_unique_words in train 96
mean of num_unique_words in train 12.135776749798257
maximum of num_unique_words in test 61
mean of num_unique_words in train 12.096363313819408
###Markdown
Number of characters in the text
###Code
train["num_chars"] = train["question_text"].apply(lambda x: len(str(x)))
test["num_chars"] = test["question_text"].apply(lambda x: len(str(x)))
print('maximum of num_chars in train',train["num_chars"].max())
print("maximum of num_chars in test",test["num_chars"].max())
###Output
maximum of num_chars in train 1017
maximum of num_chars in test 588
###Markdown
Number of stopwords in the text
###Code
train["num_stopwords"] = train["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
test["num_stopwords"] = test["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
print('maximum of num_stopwords in train',train["num_stopwords"].max())
print("maximum of num_stopwords in test",test["num_stopwords"].max())
###Output
maximum of num_stopwords in train 56
maximum of num_stopwords in test 47
###Markdown
Number of punctuations in the text
###Code
train["num_punctuations"] =train['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
test["num_punctuations"] =test['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
print('maximum of num_punctuations in train',train["num_punctuations"].max())
print("maximum of num_punctuations in test",test["num_punctuations"].max())
###Output
maximum of num_punctuations in train 411
maximum of num_punctuations in test 260
###Markdown
Number of title case words in the text
###Code
train["num_words_upper"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
test["num_words_upper"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
print('maximum of num_words_upper in train',train["num_words_upper"].max())
print("maximum of num_words_upper in test",test["num_words_upper"].max())
###Output
maximum of num_words_upper in train 37
maximum of num_words_upper in test 36
###Markdown
Number of title case words in the text
###Code
train["num_words_title"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
test["num_words_title"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
print('maximum of num_words_title in train',train["num_words_title"].max())
print("maximum of num_words_title in test",test["num_words_title"].max())
###Output
maximum of num_words_title in train 37
maximum of num_words_title in test 24
###Markdown
Average length of the words in the text
###Code
train["mean_word_len"] = train["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
test["mean_word_len"] = test["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
print('mean_word_len in train',train["mean_word_len"].max())
print("mean_word_len in test",test["mean_word_len"].max())
###Output
mean_word_len in train 57.666666666666664
mean_word_len in test 29.333333333333332
###Markdown
We add some new feature to train and test data set now, print columns agains
###Code
print(train.columns)
train.head(1)
###Output
Index(['qid', 'question_text', 'target', 'num_words', 'num_unique_words',
'num_chars', 'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len'],
dtype='object')
###Markdown
**>**>**Preprocessing and generation pipelines depend on a model type** 6-4 Data Visualization**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.> * Two** important rules** for Data visualization:> 1. Do not put too little information> 1. Do not put too much information [Go to top](top) 6-4-1 CountPlot
###Code
ax=sns.countplot(x='target',hue="target", data=train ,linewidth=5,edgecolor=sns.color_palette("dark", 3))
plt.title('Is data set imbalance?');
ax = sns.countplot(y="target", hue="target", data=train)
plt.title('Is data set imbalance?');
###Output
_____no_output_____
###Markdown
6-4-2 Pie Plot
###Code
ax=train['target'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%' ,shadow=True)
ax.set_title('target')
ax.set_ylabel('')
plt.show()
#plt.pie(train['target'],autopct='%1.1f%%')
#plt.axis('equal')
#plt.show()
###Output
_____no_output_____
###Markdown
6-4-3 Histogram
###Code
f,ax=plt.subplots(1,2,figsize=(20,10))
train[train['target']==0].num_words.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('target= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
train[train['target']==1].num_words.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('target= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
train[['target','num_words']].groupby(['target']).mean().plot.bar(ax=ax[0])
ax[0].set_title('num_words vs target')
sns.countplot('num_words',hue='target',data=train,ax=ax[1])
ax[1].set_title('num_words:target=0 vs target=1')
plt.show()
# histograms
train.hist(figsize=(15,20))
plt.figure()
train["num_words"].hist();
###Output
_____no_output_____
###Markdown
6-4-4 Violin Plot
###Code
sns.violinplot(data=train,x="target", y="num_words")
sns.violinplot(data=train,x="target", y="num_words_upper")
###Output
_____no_output_____
###Markdown
6-4-5 KdePlot
###Code
sns.FacetGrid(train, hue="target", size=5).map(sns.kdeplot, "num_words").add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
6-4-6 BoxPlot
###Code
train['num_words'].loc[train['num_words']>60] = 60 #truncation for better visuals
axes= sns.boxplot(x='target', y='num_words', data=train)
axes.set_xlabel('Target', fontsize=12)
axes.set_title("Number of words in each class", fontsize=15)
plt.show()
train['num_chars'].loc[train['num_chars']>350] = 350 #truncation for better visuals
axes= sns.boxplot(x='target', y='num_chars', data=train)
axes.set_xlabel('Target', fontsize=12)
axes.set_title("Number of num_chars in each class", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
6-4-6 WordCloud
###Code
def generate_wordcloud(text):
wordcloud = wc(relative_scaling = 1.0,stopwords = eng_stopwords).generate(text)
fig,ax = plt.subplots(1,1,figsize=(10,10))
ax.imshow(wordcloud, interpolation='bilinear')
ax.axis("off")
ax.margins(x=0, y=0)
plt.show()
text =" ".join(train.question_text)
generate_wordcloud(text)
###Output
_____no_output_____
###Markdown
A Data Science Framework for Quora Quite Practical and Far from any Theoretical Conceptslast update: 11/24/2018You can Fork and Run this kernel on **Github**:> [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) 1- Introduction**Quora** has defined a competition in **Kaggle**. A realistic and attractive data set for data scientists.on this notebook, I will provide a **comprehensive** approach to solve Quora classification problem.I am open to getting your feedback for improving this **kernel**. Notebook Content1. [Introduction](1)1. [Data Science Workflow for Quora](2)1. [Problem Definition](3) 1. [Business View](4) 1. [Real world Application Vs Competitions](31) 1. [What is a insincere question?](5) 1. [How can we find insincere question?](6)1. [Problem feature](7) 1. [Aim](8) 1. [Variables](9) 1. [ Inputs & Outputs](10)1. [Select Framework](11) 1. [Import](12) 1. [Version](13) 1. [Setup](14)1. [Exploratory data analysis](15) 1. [Data Collection](16) 1. [Features](17) 1. [Explorer Dataset](18) 1. [Data Cleaning](19) 1. [Data Preprocessing](20) 1. [Is data set imbalance?](21) 1. [Some Feature Engineering](22) 1. [Data Visualization](23) 1. [countplot](61) 1. [pie plot](62) 1. [Histogram](63) 1. [violin plot](64) 1. [kdeplot](65)1. [Apply Learning](24)1. [Conclusion](25)1. [References](26) ------------------------------------------------------------------------------------------------------------- **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** ----------- 2- A Data Science Workflow for QuoraOf course, the same solution can not be provided for all problems, so the best way is to create a **general framework** and adapt it to new problem.**You can see my workflow in the below image** : **You should feel free to adjust this checklist to your needs** [Go to top](top) 3- Problem DefinitionI think one of the important things when you start a new machine learning project is Defining your problem. that means you should understand business problem.( **Problem Formalization**)> **we will be predicting whether a question asked on Quora is sincere or not.** 3-1 About QuoraQuora is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers. 3-2 Business View An existential problem for any major website today is how to handle toxic and divisive content. **Quora** wants to tackle this problem head-on to keep their platform a place where users can feel safe sharing their knowledge with the world.**Quora** is a platform that empowers people to learn from each other. On Quora, people can ask questions and connect with others who contribute unique insights and quality answers. A key challenge is to weed out insincere questions -- those founded upon false premises, or that intend to make a statement rather than look for helpful answers.In this kernel, I will develop models that identify and flag insincere questions.we Help Quora uphold their policy of “Be Nice, Be Respectful” and continue to be a place for sharing and growing the world’s knowledge. 
3-2-1 Real world Application Vs CompetitionsJust a simple comparison between real-world apps with competitions: 3-3 What is a insincere question?Is defined as a question intended to make a **statement** rather than look for **helpful answers**. 3-4 How can we find insincere question?Some characteristics that can signify that a question is insincere:1. **Has a non-neutral tone** 1. Has an exaggerated tone to underscore a point about a group of people 1. Is rhetorical and meant to imply a statement about a group of people1. **Is disparaging or inflammatory** 1. Suggests a discriminatory idea against a protected class of people, or seeks confirmation of a stereotype 1. Makes disparaging attacks/insults against a specific person or group of people 1. Based on an outlandish premise about a group of people 1. Disparages against a characteristic that is not fixable and not measurable1. **Isn't grounded in reality** 1. Based on false information, or contains absurd assumptions 1. Uses sexual content (incest, bestiality, pedophilia) for shock value, and not to seek genuine answers [Go to top](top) 4- Problem FeatureProblem Definition has three steps that have illustrated in the picture below:1. Aim1. Variable1. Inputs & Outputs 4-1 AimWe will be predicting whether a question asked on Quora is **sincere** or not. 4-2 Variables1. qid - unique question identifier1. question_text - Quora question text1. target - a question labeled "insincere" has a value of 1, otherwise 0 4-3 Inputs & Outputswe use train.csv and test.csv as Input and we should upload a submission.csv as Output**>**> You must answer the following question:How does your company expect to use and benefit from **your model**. [Go to top](top) 5- Select FrameworkAfter problem definition and problem feature, we should select our framework to solve the problem.What we mean by the framework is that the programming languages you use and by what modules the problem will be solved. 5-1 Python Deep Learning Packages*State of open source deep learning frameworks in 2017*1. **keras**[11]>Well known for being minimalistic, the Keras neural network library (with a supporting interface of Python) supports both convolutional and recurrent networks that are capable of running on either TensorFlow or Theano. The library is written in Python and was developed keeping quick experimentation as its USP.1. **TensorFlow**> TensorFlow is arguably one of the best deep learning frameworks and has been adopted by several giants such as Airbus, Twitter, IBM, and others mainly due to its highly flexible system architecture.1. **Caffe**> Caffe is a deep learning framework that is supported with interfaces like C, C++, Python, and MATLAB as well as the command line interface. It is well known for its speed and transposability and its applicability in modeling convolution neural networks (CNN).1. **Microsoft Cognitive Toolkit/CNTK**> Popularly known for easy training and the combination of popular model types across servers, the Microsoft Cognitive Toolkit (previously known as CNTK) is an open-source deep learning framework to train deep learning models. It performs efficient convolution neural networks and training for image, speech, and text-based data. Similar to Caffe, it is supported by interfaces such as Python, C++, and the command line interface.1. **Torch/PyTorch**> Torch is a scientific computing framework that offers wide support for machine learning algorithms. 
It is a Lua-based deep learning framework and is used widely amongst industry giants such as Facebook, Twitter, and Google. It employs CUDA along with C/C++ libraries for processing and was basically made to scale the production of building models and provide overall flexibility.1. **MXNet**> Designed specifically for the purpose of high efficiency, productivity, and flexibility, MXNet(pronounced as mix-net) is a deep learning framework supported by Python, R, C++, and Julia.1. **Chainer**>Highly powerful, dynamic and intuitive, Chainer is a Python-based deep learning framework for neural networks that is designed by the run strategy. Compared to other frameworks that use the same strategy, you can modify the networks during runtime, allowing you to execute arbitrary control flow statements.1. **Deeplearning4j**>Parallel training through iterative reduce, microservice architecture adaptation, and distributed CPUs and GPUs are some of the salient features of the Deeplearning4j deep learning framework. It is developed in Java as well as Scala and supports other JVM languages, too.1. **Theano**>Theano is beautiful. Without Theano, we wouldn’t have anywhere near the amount of deep learning libraries (specifically in Python) that we do today. In the same way that without NumPy, we couldn’t have SciPy, scikit-learn, and scikit-image, the same can be said about Theano and higher-level abstractions of deep learning.1. **Lasagne**>Lasagne is a lightweight library used to construct and train networks in Theano. The key term here is lightweight — it is not meant to be a heavy wrapper around Theano like Keras is. While this leads to your code being more verbose, it does free you from any restraints, while still giving you modular building blocks based on Theano.1. **PaddlePaddle**>PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform, which is originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu. [Go to top](top) 5-2 Import
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from wordcloud import WordCloud as wc
from nltk.corpus import stopwords
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import string
import scipy
import numpy
import nltk
import json
import sys
import csv
import os
###Output
_____no_output_____
###Markdown
5-3 version
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
matplotlib: 2.2.3
sklearn: 0.20.0
scipy: 1.1.0
seaborn: 0.9.0
pandas: 0.23.4
numpy: 1.15.4
Python: 3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16)
[GCC 7.3.0]
###Markdown
5-4 SetupA few tiny adjustments for better **code readability**. 5-5 NLTKThe Natural Language Toolkit (NLTK) is one of the leading platforms for working with human language data in Python; the NLTK module is used for natural language processing. NLTK is literally an acronym for Natural Language Toolkit, and with it you can tokenize words and sentences. NLTK is a Python library that can mine (scrape and load data) and analyse very large amounts of textual data using computational methods.
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack a dull boy, all work and no play"
print(word_tokenize(data))
###Output
_____no_output_____
###Markdown
All of them are words except the comma. Special characters are treated as separate tokens. 5-5-1 Tokenizing sentencesThe same principle can be applied to sentences: simply change word_tokenize() to sent_tokenize(). We have added two sentences to the variable data:
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
print(sent_tokenize(data))
###Output
_____no_output_____
###Markdown
5-5-2 NLTK and arraysIf you wish, you can store the words and sentences in arrays:
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
phrases = sent_tokenize(data)
words = word_tokenize(data)
print(phrases)
print(words)
###Output
_____no_output_____
###Markdown
5-5-3 NLTK stop wordsNatural language processing (NLP) is a research field that presents many challenges, such as natural language understanding. Text may contain stop words like ‘the’, ‘is’, ‘are’. Stop words can be filtered from the text to be processed. There is no universal list of stop words in NLP research; however, the nltk module contains a list of stop words. In this section you will learn how to remove stop words with the nltk module.
###Code
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
stopWords = set(stopwords.words('english'))
words = word_tokenize(data)
wordsFiltered = []
for w in words:
if w not in stopWords:
wordsFiltered.append(w)
print(wordsFiltered)
###Output
_____no_output_____
###Markdown
A module has been imported:
###Code
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
We get a set of English stop words using the line:
###Code
stopWords = set(stopwords.words('english'))
###Output
_____no_output_____
###Markdown
The returned list stopWords contains 153 stop words on my computer.You can view the length or contents of this array with the lines:
###Code
print(len(stopWords))
print(stopWords)
###Output
_____no_output_____
###Markdown
We create a new list called wordsFiltered which contains all words that are not stop words. To create it, we iterate over the list of words and only add a word if it is not in the stopWords set.
###Code
for w in words:
if w not in stopWords:
wordsFiltered.append(w)
###Output
_____no_output_____
###Markdown
5-5-4 NLTK – stemmingStart by defining some words:
###Code
words = ["game","gaming","gamed","games"]
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
###Output
_____no_output_____
###Markdown
And stem the words in the list using:
###Code
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
words = ["game","gaming","gamed","games"]
ps = PorterStemmer()
for word in words:
print(ps.stem(word))
###Output
_____no_output_____
###Markdown
5-5-5 NLTK speech taggingThe NLTK module can automatically tag parts of speech. Given a sentence or paragraph, it can label words as verbs, nouns and so on. NLTK – speech tagging example: the example below automatically tags words with a corresponding class.
###Code
import nltk
from nltk.tokenize import PunktSentenceTokenizer
document = 'Whether you\'re new to programming or an experienced developer, it\'s easy to learn and use Python.'
sentences = nltk.sent_tokenize(document)
for sent in sentences:
print(nltk.pos_tag(nltk.word_tokenize(sent)))
###Output
_____no_output_____
###Markdown
We can filter this data based on the type of word:
###Code
import nltk
from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
document = 'Today the Netherlands celebrates King\'s Day. To honor this tradition, the Dutch embassy in San Francisco invited me to'
sentences = nltk.sent_tokenize(document)
data = []
for sent in sentences:
data = data + nltk.pos_tag(nltk.word_tokenize(sent))
for word in data:
if 'NNP' in word[1]:
print(word)
sns.set(style='white', context='notebook', palette='deep')
pylab.rcParams['figure.figsize'] = 12,8
warnings.filterwarnings('ignore')
mpl.style.use('ggplot')
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
6- EDA In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. * Which variables suggest interesting relationships?* Which observations are unusual?* Analysis of the features!By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. Then we will review analytical and statistical operations:1. Data Collection1. Visualization1. Data Cleaning1. Data Preprocessing [Go to top](top) 6-1 Data Collection**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypotheses and evaluate outcomes of the particular collection.[techopedia] I start data collection by loading the training and testing datasets into **Pandas DataFrames**. [Go to top](top)
###Code
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
###Output
_____no_output_____
###Markdown
**>*** Each **row** is an observation (also known as : sample, example, instance, record).* Each **column** is a feature (also known as: Predictor, attribute, Independent Variable, input, regressor, Covariate). [Go to top](top)
###Code
train.sample(1)
test.sample(1)
###Output
_____no_output_____
###Markdown
Or you can use other commands to explore the dataset, such as:
###Code
train.tail(1)
###Output
_____no_output_____
###Markdown
6-1-1 FeaturesFeatures can be of the following types:* numeric* categorical* ordinal* datetime* coordinatesCan you find the type of each feature in the **Quora dataset**? To get some information about the dataset you can use the **info()** command.
###Code
print(train.info())
print(test.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 56370 entries, 0 to 56369
Data columns (total 2 columns):
qid 56370 non-null object
question_text 56370 non-null object
dtypes: object(2)
memory usage: 880.9+ KB
None
###Markdown
6-1-2 Explore the Dataset1- Dimensions of the dataset.2- Peek at the data itself.3- Statistical summary of all attributes.4- Breakdown of the data by the class variable.Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects. [Go to top](top)
###Code
# shape for train and test
print('Shape of train:',train.shape)
print('Shape of test:',test.shape)
#columns*rows
train.size
###Output
_____no_output_____
###Markdown
After loading the data via **pandas**, we should check out its content and get a description via the following:
###Code
type(train)
type(test)
train.describe()
###Output
_____no_output_____
###Markdown
To pull up 5 random rows from the data set, we can use the **sample(5)** function and inspect the type of the features.
###Code
train.sample(5)
###Output
_____no_output_____
###Markdown
6-2 Data CleaningWhen dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[4] [Go to top](top) How many NA elements in every column!!Good news, it is Zero!To check out how many null info are on the dataset, we can use **isnull().sum()**.
###Code
train.isnull().sum()
###Output
_____no_output_____
###Markdown
But if we had any, we could just use **dropna()** (be careful: sometimes you should not do this!)
###Code
# remove rows that have NA's
print('Before Droping',train.shape)
train = train.dropna()
print('After Droping',train.shape)
###Output
Before Droping (1306122, 3)
After Droping (1306122, 3)
###Markdown
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property. To print the dataset **columns**, we can use the columns attribute.
###Code
train.columns
###Output
_____no_output_____
###Markdown
You can see the number of unique values for the target with the command below:
###Code
train_target = train['target'].values
np.unique(train_target)
###Output
_____no_output_____
###Markdown
YES, the Quora problem is a **binary classification** task! :) To check the first 5 rows of the data set, we can use head(5).
###Code
train.head(5)
###Output
_____no_output_____
###Markdown
Or to check out the last 5 rows of the data set, we use the tail() function.
###Code
train.tail()
###Output
_____no_output_____
###Markdown
To give a **statistical summary** about the dataset, we can use **describe()**
###Code
train.describe()
###Output
_____no_output_____
###Markdown
As you can see, the statistical information that this command gives us is not suitable for this type of data**describe() is more useful for numerical data sets** 6-3 Data Preprocessing**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm. Data Preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in raw format which is not feasible for the analysis.there are plenty of steps for data preprocessing and we just listed some of them in general(Not just for Quora) :1. removing Target column (id)1. Sampling (without replacement)1. Making part of iris unbalanced and balancing (with undersampling and SMOTE)1. Introducing missing values and treating them (replacing by average values)1. Noise filtering1. Data discretization1. Normalization and standardization1. PCA analysis1. Feature selection (filter, embedded, wrapper)1. Etc.What methods of preprocessing can we run on Quora?! [Go to top](top) **>**in pandas's data frame you can perform some query such as "where"
###Code
train.where(train ['target']==1).count()
###Output
_____no_output_____
###Markdown
As you can see below, in Python it is easy to perform a query on the dataframe:
###Code
train[train['target']>1]
###Output
_____no_output_____
###Markdown
Some examples of questions that are insincere:
###Code
train[train['target']==1].head(5)
###Output
_____no_output_____
###Markdown
6-3-1 Is data set imbalance?
###Code
train_target.mean()
###Output
_____no_output_____
###Markdown
The data set is clearly imbalanced, but **how can we handle it?**
###Code
train["target"].value_counts()
# data is imbalance
###Output
_____no_output_____
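###Markdown
One common answer to the question above is to rebalance the training data before fitting a model. The cell below is a minimal, hedged sketch of random undersampling of the majority class with pandas; the 1:1 ratio and the random_state are illustrative choices, and the result goes into a new frame (train_balanced) so that train itself is left untouched.
###Code
# hedged sketch: undersample the majority class (target == 0) down to the size
# of the minority class; the ratio and seed are arbitrary illustrative choices
insincere = train[train['target'] == 1]
sincere = train[train['target'] == 0].sample(n=len(insincere), random_state=42)
train_balanced = pd.concat([insincere, sincere]).sample(frac=1, random_state=42)  # shuffle rows
print(train_balanced['target'].value_counts())
###Output
_____no_output_____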
###Markdown
**Imbalanced dataset** is relevant primarily in the context of supervised machine learning involving two or more classes. **Imbalance** means that the number of data points available for different the classes is different:If there are two classes, then balanced data would mean 50% points for each of the class. For most machine learning techniques, little imbalance is not a problem. So, if there are 60% points for one class and 40% for the other class, it should not cause any significant performance degradation. Only when the class imbalance is high, e.g. 90% points for one class and 10% for the other, standard optimization criteria or performance measures may not be as effective and would need modification.[Image source](http://api.ning.com/files/vvHEZw33BGqEUW8aBYm4epYJWOfSeUBPVQAsgz7aWaNe0pmDBsjgggBxsyq*8VU1FdBshuTDdL2-bp2ALs0E-0kpCV5kVdwu/imbdata.png)A typical example of imbalanced data is encountered in e-mail classification problem where emails are classified into ham or spam. The number of spam emails is usually lower than the number of relevant (ham) emails. So, using the original distribution of two classes leads to imbalanced dataset.Using accuracy as a performace measure for highly imbalanced datasets is not a good idea. For example, if 90% points belong to the true class in a binary classification problem, a default prediction of true for all data poimts leads to a classifier which is 90% accurate, even though the classifier has not learnt anything about the classification problem at hand![9] 6-3-2 Exploreing Question
###Code
question = train['question_text']
i=0
for q in question[:5]:
i=i+1
print('sample '+str(i)+':' ,q)
text_withnumber = train['question_text']
result = ''.join([i for i in text_withnumber if not i.isdigit()])
###Output
_____no_output_____
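###Markdown
The cell above only strips digits from the questions. As a slightly fuller illustration, here is a minimal, hedged sketch of a basic cleaning step (lowercasing, punctuation removal and stop-word filtering) applied to a single question; the helper name clean_text is an illustrative choice, not something used elsewhere in this kernel, and it reuses the stopWords set built in the NLTK section above.
###Code
# hedged sketch: basic text cleaning for one question string
def clean_text(text):
    text = str(text).lower()                                         # lowercase
    text = ''.join(c for c in text if c not in string.punctuation)   # drop punctuation
    tokens = [w for w in text.split() if w not in stopWords]         # drop stop words
    return ' '.join(tokens)

print(clean_text(train['question_text'].iloc[0]))
###Output
_____no_output_____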
###Markdown
6-3-3 Some Feature Engineering [NLTK](https://www.nltk.org/) is one of the leading platforms for working with human language data in Python; the NLTK module is used for natural language processing. NLTK is literally an acronym for Natural Language Toolkit. We get a set of **English stop** words using the line:
###Code
#from nltk.corpus import stopwords
eng_stopwords = set(stopwords.words("english"))
###Output
_____no_output_____
###Markdown
The returned set eng_stopwords contains **179 stop words** on my computer. You can view the length or contents of this set with the lines:
###Code
print(len(eng_stopwords))
print(eng_stopwords)
###Output
179
{'can', "mustn't", "wasn't", 'mightn', 'herself', 'most', 'on', 'why', 'how', 'ours', "you'd", 'are', 'other', 'my', 'himself', "it's", "that'll", 'their', 'but', 'same', 'was', 'for', 'is', 'of', 'him', 'under', 'each', 'because', 'a', 'by', 'am', 'about', 'so', 'i', 't', 'nor', 'myself', "haven't", 'the', 'after', "wouldn't", 'that', "don't", 'needn', 'or', 'wasn', 'which', 'all', "doesn't", 'being', 'only', 'few', 'will', 'our', "you'll", 'where', 's', 're', 'both', 'shan', "mightn't", 'whom', 'if', 'some', 'o', 'hasn', 'while', 'own', 'having', 'such', 'above', 'what', 'be', 'then', 'very', 'been', 'it', 'any', 'during', 'your', 'do', "weren't", 'he', "should've", 'have', 'hadn', 'should', 'with', 'once', 'not', 'm', 'further', "aren't", 'too', "won't", 'weren', 'more', 'against', 'his', 'again', 'her', 'through', 'until', 'mustn', 'yourself', 'here', 'did', 'than', 'me', 'won', 'yours', 'ma', 'has', "she's", 'we', 'between', 'she', 'ourselves', 'wouldn', 'below', 'does', 'aren', 'as', 'had', 'and', 'them', "shouldn't", 'doesn', 'shouldn', 'hers', 'no', 'this', 'themselves', 'when', "couldn't", 'its', 'll', 'yourselves', 'don', 'at', 'theirs', 'to', 'from', 've', 'itself', 'just', 'over', 'off', "needn't", 'out', 'you', 'y', 'in', 'didn', 'these', 'those', 'before', "hadn't", 'up', "shan't", "isn't", "didn't", 'an', 'd', 'were', 'now', 'down', "you're", "hasn't", "you've", 'doing', 'into', 'haven', 'isn', 'couldn', 'ain', 'who', 'there', 'they'}
###Markdown
The metafeatures that we'll create, based on the EDAs of SRK ([sudalairajkumar](https://www.kaggle.com/sudalairajkumar/simple-feature-engg-notebook-spooky-author)) and [tunguz](https://www.kaggle.com/tunguz/just-some-simple-eda), are:1. Number of words in the text1. Number of unique words in the text1. Number of characters in the text1. Number of stopwords1. Number of punctuations1. Number of upper case words1. Number of title case words1. Average length of the words [Go to top](top) Number of words in the text
###Code
train["num_words"] = train["question_text"].apply(lambda x: len(str(x).split()))
test["num_words"] = test["question_text"].apply(lambda x: len(str(x).split()))
print('maximum of num_words in train',train["num_words"].max())
print('min of num_words in train',train["num_words"].min())
print("maximum of num_words in test",test["num_words"].max())
print('min of num_words in test',test["num_words"].min())
###Output
maximum of num_words in train 134
min of num_words in train 1
maximum of num_words in test 87
min of num_words in test 2
###Markdown
Number of unique words in the text
###Code
train["num_unique_words"] = train["question_text"].apply(lambda x: len(set(str(x).split())))
test["num_unique_words"] = test["question_text"].apply(lambda x: len(set(str(x).split())))
print('maximum of num_unique_words in train',train["num_unique_words"].max())
print('mean of num_unique_words in train',train["num_unique_words"].mean())
print("maximum of num_unique_words in test",test["num_unique_words"].max())
print('mean of num_unique_words in test',test["num_unique_words"].mean())
###Output
maximum of num_unique_words in train 96
mean of num_unique_words in train 12.135776749798257
maximum of num_unique_words in test 61
mean of num_unique_words in test 12.096363313819408
###Markdown
Number of characters in the text
###Code
train["num_chars"] = train["question_text"].apply(lambda x: len(str(x)))
test["num_chars"] = test["question_text"].apply(lambda x: len(str(x)))
print('maximum of num_chars in train',train["num_chars"].max())
print("maximum of num_chars in test",test["num_chars"].max())
###Output
maximum of num_chars in train 1017
maximum of num_chars in test 588
###Markdown
Number of stopwords in the text
###Code
train["num_stopwords"] = train["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
test["num_stopwords"] = test["question_text"].apply(lambda x: len([w for w in str(x).lower().split() if w in eng_stopwords]))
print('maximum of num_stopwords in train',train["num_stopwords"].max())
print("maximum of num_stopwords in test",test["num_stopwords"].max())
###Output
maximum of num_stopwords in train 56
maximum of num_stopwords in test 47
###Markdown
Number of punctuations in the text
###Code
train["num_punctuations"] =train['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
test["num_punctuations"] =test['question_text'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]) )
print('maximum of num_punctuations in train',train["num_punctuations"].max())
print("maximum of num_punctuations in test",test["num_punctuations"].max())
###Output
maximum of num_punctuations in train 411
maximum of num_punctuations in test 260
###Markdown
Number of upper case words in the text
###Code
train["num_words_upper"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
test["num_words_upper"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.isupper()]))
print('maximum of num_words_upper in train',train["num_words_upper"].max())
print("maximum of num_words_upper in test",test["num_words_upper"].max())
###Output
maximum of num_words_upper in train 37
maximum of num_words_upper in test 36
###Markdown
Number of title case words in the text
###Code
train["num_words_title"] = train["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
test["num_words_title"] = test["question_text"].apply(lambda x: len([w for w in str(x).split() if w.istitle()]))
print('maximum of num_words_title in train',train["num_words_title"].max())
print("maximum of num_words_title in test",test["num_words_title"].max())
###Output
maximum of num_words_title in train 37
maximum of num_words_title in test 24
###Markdown
Average length of the words in the text
###Code
train["mean_word_len"] = train["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
test["mean_word_len"] = test["question_text"].apply(lambda x: np.mean([len(w) for w in str(x).split()]))
print('mean_word_len in train',train["mean_word_len"].max())
print("mean_word_len in test",test["mean_word_len"].max())
###Output
mean_word_len in train 57.666666666666664
mean_word_len in test 29.333333333333332
###Markdown
We have now added some new features to the train and test data sets, so let's print the columns again.
###Code
print(train.columns)
train.head(1)
###Output
Index(['qid', 'question_text', 'target', 'num_words', 'num_unique_words',
'num_chars', 'num_stopwords', 'num_punctuations', 'num_words_upper',
'num_words_title', 'mean_word_len'],
dtype='object')
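###Markdown
Before visualizing these metafeatures, here is a minimal, hedged sketch of how they could feed a simple baseline model with scikit-learn (train_test_split and classification_report are already imported above). The feature list, the 80/20 split and the use of class_weight='balanced' are illustrative assumptions rather than a tuned setup.
###Code
# hedged sketch: quick baseline classifier on the engineered metafeatures only
from sklearn.linear_model import LogisticRegression

meta_cols = ['num_words', 'num_unique_words', 'num_chars', 'num_stopwords',
             'num_punctuations', 'num_words_upper', 'num_words_title', 'mean_word_len']
X_meta = train[meta_cols].fillna(0)
y_meta = train['target']

X_tr, X_val, y_tr, y_val = train_test_split(X_meta, y_meta, test_size=0.2, random_state=42)
baseline = LogisticRegression(class_weight='balanced', solver='liblinear')
baseline.fit(X_tr, y_tr)
print(classification_report(y_val, baseline.predict(X_val)))
###Output
_____no_output_____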
###Markdown
**>**>**Preprocessing and generation pipelines depend on a model type** 6-4 Data Visualization**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.> * Two** important rules** for Data visualization:> 1. Do not put too little information> 1. Do not put too much information [Go to top](top) 6-4-1 CountPlot
###Code
ax=sns.countplot(x='target',hue="target", data=train ,linewidth=5,edgecolor=sns.color_palette("dark", 3))
plt.title('Is data set imbalance?');
ax = sns.countplot(y="target", hue="target", data=train)
plt.title('Is data set imbalance?');
###Output
_____no_output_____
###Markdown
6-4-2 Pie Plot
###Code
ax=train['target'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%' ,shadow=True)
ax.set_title('target')
ax.set_ylabel('')
plt.show()
#plt.pie(train['target'],autopct='%1.1f%%')
#plt.axis('equal')
#plt.show()
###Output
_____no_output_____
###Markdown
6-4-3 Histogram
###Code
f,ax=plt.subplots(1,2,figsize=(20,10))
train[train['target']==0].num_words.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('target= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
train[train['target']==1].num_words.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('target= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
train[['target','num_words']].groupby(['target']).mean().plot.bar(ax=ax[0])
ax[0].set_title('num_words vs target')
sns.countplot('num_words',hue='target',data=train,ax=ax[1])
ax[1].set_title('num_words:target=0 vs target=1')
plt.show()
# histograms
train.hist(figsize=(15,20))
plt.figure()
train["num_words"].hist();
###Output
_____no_output_____
###Markdown
6-4-4 Violin Plot
###Code
sns.violinplot(data=train,x="target", y="num_words")
sns.violinplot(data=train,x="target", y="num_words_upper")
###Output
_____no_output_____
###Markdown
6-4-5 KdePlot
###Code
sns.FacetGrid(train, hue="target", size=5).map(sns.kdeplot, "num_words").add_legend()
plt.show()
###Output
_____no_output_____
###Markdown
6-4-6 BoxPlot
###Code
train['num_words'].loc[train['num_words']>60] = 60 #truncation for better visuals
axes= sns.boxplot(x='target', y='num_words', data=train)
axes.set_xlabel('Target', fontsize=12)
axes.set_title("Number of words in each class", fontsize=15)
plt.show()
train['num_chars'].loc[train['num_chars']>350] = 350 #truncation for better visuals
axes= sns.boxplot(x='target', y='num_chars', data=train)
axes.set_xlabel('Target', fontsize=12)
axes.set_title("Number of num_chars in each class", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
6-4-7 WordCloud
###Code
def generate_wordcloud(text):
wordcloud = wc(relative_scaling = 1.0,stopwords = eng_stopwords).generate(text)
fig,ax = plt.subplots(1,1,figsize=(10,10))
ax.imshow(wordcloud, interpolation='bilinear')
ax.axis("off")
ax.margins(x=0, y=0)
plt.show()
text =" ".join(train.question_text)
generate_wordcloud(text)
###Output
_____no_output_____ |
DL/a1/4_convolutions.ipynb | ###Markdown
Deep Learning=============Assignment 4------------Previously in `2_fullyconnected.ipynb` and `3_regularization.ipynb`, we trained fully connected networks to classify [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) characters. The goal of this assignment is to make the neural network convolutional.
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
###Output
Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)
###Markdown
Reformat into a TensorFlow-friendly shape:- convolutions need the image data formatted as a cube (width by height by channels)- labels as float 1-hot encodings.
###Code
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
dataset = dataset.reshape(
(-1, image_size, image_size, num_channels)).astype(np.float32)
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
###Output
_____no_output_____
###Markdown
Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit the depth and the number of fully connected nodes.
###Code
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
###Output
Initialized
Minibatch loss at step 0: 4.086415
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Minibatch loss at step 50: 2.121948
Minibatch accuracy: 43.8%
Validation accuracy: 50.4%
Minibatch loss at step 100: 0.974287
Minibatch accuracy: 75.0%
Validation accuracy: 67.4%
Minibatch loss at step 150: 0.859524
Minibatch accuracy: 62.5%
Validation accuracy: 76.5%
Minibatch loss at step 200: 0.784441
Minibatch accuracy: 81.2%
Validation accuracy: 79.1%
Minibatch loss at step 250: 0.652312
Minibatch accuracy: 81.2%
Validation accuracy: 76.8%
Minibatch loss at step 300: 0.547076
Minibatch accuracy: 87.5%
Validation accuracy: 79.3%
Minibatch loss at step 350: 0.436444
Minibatch accuracy: 93.8%
Validation accuracy: 80.5%
Minibatch loss at step 400: 0.609286
Minibatch accuracy: 75.0%
Validation accuracy: 81.2%
Minibatch loss at step 450: 0.476673
Minibatch accuracy: 87.5%
Validation accuracy: 80.8%
Minibatch loss at step 500: 0.403005
Minibatch accuracy: 81.2%
Validation accuracy: 81.9%
Minibatch loss at step 550: 0.708369
Minibatch accuracy: 81.2%
Validation accuracy: 82.6%
Minibatch loss at step 600: 1.003435
Minibatch accuracy: 68.8%
Validation accuracy: 82.1%
Minibatch loss at step 650: 0.696089
Minibatch accuracy: 81.2%
Validation accuracy: 82.7%
Minibatch loss at step 700: 0.569869
Minibatch accuracy: 81.2%
Validation accuracy: 82.5%
Minibatch loss at step 750: 0.803250
Minibatch accuracy: 81.2%
Validation accuracy: 79.4%
Minibatch loss at step 800: 0.192654
Minibatch accuracy: 93.8%
Validation accuracy: 82.6%
Minibatch loss at step 850: 0.109053
Minibatch accuracy: 100.0%
Validation accuracy: 83.4%
Minibatch loss at step 900: 0.681304
Minibatch accuracy: 81.2%
Validation accuracy: 83.6%
Minibatch loss at step 950: 1.194071
Minibatch accuracy: 62.5%
Validation accuracy: 83.3%
Minibatch loss at step 1000: 0.423575
Minibatch accuracy: 87.5%
Validation accuracy: 83.5%
Test accuracy: 89.7%
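###Markdown
To make the earlier remark about the computational cost of convolutional networks concrete, here is a small, hedged sketch (plain Python arithmetic, independent of the TensorFlow graph) that tallies the trainable parameters of the model that was just trained:
###Code
# hedged sketch: parameter count for the stride-2 model above
layer_params = [
    (5 * 5 * 1 * 16, 16),                   # layer1: 5x5 conv, 1 -> 16 channels, plus biases
    (5 * 5 * 16 * 16, 16),                  # layer2: 5x5 conv, 16 -> 16 channels, plus biases
    ((28 // 4) * (28 // 4) * 16 * 64, 64),  # layer3: fully connected on the 7x7x16 feature map, plus biases
    (64 * 10, 10),                          # layer4: output layer, plus biases
]
print('trainable parameters:', sum(w + b for w, b in layer_params))
###Output
_____no_output_____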
###Markdown
---Problem 1---------The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (`nn.max_pool()`) of stride 2 and kernel size 2.---
###Code
batch_size = 16
patch_size = 1
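# note: patch_size is 1 here (1x1 kernels), whereas the stride-2 model above used 5x5 patches; this may partly explain the near-chance accuracy reported below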
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
# print("data", data.shape)
conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
pool = tf.nn.max_pool(hidden, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
# print("hidden1",hidden.get_shape().as_list())
conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
pool = tf.nn.max_pool(hidden , ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
# print("hidden2",hidden.get_shape().as_list())
shape = pool.get_shape().as_list()
reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
###Output
Initialized
Minibatch loss at step 0: 3.230863
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 50: 2.308229
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 100: 2.300793
Minibatch accuracy: 25.0%
Validation accuracy: 10.0%
Minibatch loss at step 150: 2.299454
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Minibatch loss at step 200: 2.305176
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 250: 2.300831
Minibatch accuracy: 0.0%
Validation accuracy: 10.0%
Minibatch loss at step 300: 2.310798
Minibatch accuracy: 0.0%
Validation accuracy: 10.0%
Minibatch loss at step 350: 2.297060
Minibatch accuracy: 18.8%
Validation accuracy: 10.0%
Minibatch loss at step 400: 2.311630
Minibatch accuracy: 0.0%
Validation accuracy: 10.0%
Minibatch loss at step 450: 2.296654
Minibatch accuracy: 18.8%
Validation accuracy: 10.0%
Minibatch loss at step 500: 2.312327
Minibatch accuracy: 0.0%
Validation accuracy: 10.0%
Minibatch loss at step 550: 2.301101
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 600: 2.306224
Minibatch accuracy: 0.0%
Validation accuracy: 10.0%
Minibatch loss at step 650: 2.309343
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 700: 2.302724
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Minibatch loss at step 750: 2.316160
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 800: 2.299171
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Minibatch loss at step 850: 2.310692
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 900: 2.298215
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Minibatch loss at step 950: 2.311249
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Minibatch loss at step 1000: 2.293489
Minibatch accuracy: 12.5%
Validation accuracy: 10.0%
Test accuracy: 10.0%
|
nbs/00_datasets.ipynb | ###Markdown
DataSets
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
#import cv2
#import numpy as np
#import re
#import random
from tqdm import tqdm
from matplotlib import pyplot as plt
import PIL
#export
from fastai.vision import *
###Output
_____no_output_____
###Markdown
Original datasets
###Code
#export
div2k_path = Path('/home/jovyan/notebook/datasets/DIV2K')
div2k_train_lr_path = div2k_path/'DIV2K_train_LR_bicubic'
div2k_train_lr_x2 = div2k_train_lr_path/'X2'
div2k_train_lr_x3 = div2k_train_lr_path/'X3'
div2k_train_lr_x4 = div2k_train_lr_path/'X4'
div2k_train_hr = div2k_path/'DIV2K_train_HR'
div2k_test_lr_path = div2k_path/'DIV2K_test_LR_bicubic'
div2k_test_lr_x2 = div2k_test_lr_path/'X2'
div2k_test_lr_x3 = div2k_test_lr_path/'X3'
div2k_test_lr_x4 = div2k_test_lr_path/'X4'
#export
set5_path = Path('/home/jovyan/notebook/datasets/Set5')
set5_lr_path = set5_path/'LR_bicubic'
set5_lr_x2 = set5_lr_path/'X2'
set5_lr_x3 = set5_lr_path/'X3'
set5_lr_x4 = set5_lr_path/'X4'
set5_hr = set5_path/'HR'
#export
# https://paperswithcode.com/sota/image-super-resolution-on-set14-4x-upscaling
set14_path = Path('/home/jovyan/notebook/datasets/Set14')
set14_lr_path = set14_path/'LR_bicubic'
set14_lr_x2 = set14_lr_path/'X2'
set14_lr_x3 = set14_lr_path/'X3'
set14_lr_x4 = set14_lr_path/'X4'
set14_hr = set14_path/'HR'
###Output
_____no_output_____
###Markdown
Data Augmentation We split the original images into 256x256 patches in advance. Ideally we would crop various parts of each image with a transformer, but we do not do so for the following two reasons: * The number of images is 900. With a transformer-based crop, the data size used in one epoch of learn.fit_one_cycle() stays at 900. If the images are split into 256x256 patches beforehand, however, one epoch of training uses 31556 samples, which gives better training than using a transformer. * When random-cropping with a transformer, the coordinates used to crop x cannot be passed on to the cropping of y, so x and y cannot be cropped at the same coordinates. As a result, the only crop that can realistically be used is a center crop.
###Code
#export
div2k_train_hr_crop = div2k_path/'DIV2K_train_HR_crop'
div2k_train_hr_crop_256 = div2k_train_hr_crop/'256'
div2k_train_hr_crop_256s4 = div2k_train_hr_crop/'256s4'
os.makedirs(div2k_train_hr_crop_256, exist_ok=True)
images = div2k_train_hr.ls()
images[:3]
img = PIL.Image.open(images[0])
plt.imshow(img)
w, h = img.size
w, h
#export
def crop_image(fname:pathlib.PosixPath, out_path:pathlib.PosixPath, size:int=256, sliding:int=4):
""" 画像を指定されたサイズでクロップして、out_path に保存する """
img = PIL.Image.open(fname)
w, h = img.size
basename, ext = os.path.splitext(os.path.basename(fname))
slide = int(size // sliding)
upper, i = 0, 0
while upper + size <= h:
left, j = 0, 0
while left + size <= w:
save_fname = out_path/f'{basename}_{i:0>2}{j:0>2}{ext}'
if not save_fname.exists():
#box: (left, upper, right, lower)
c = img.crop((left, upper, left+size, upper+size))
c.save(save_fname)
left += slide
j += 1
upper += slide
i += 1
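# illustrative example (not executed): for a 1024x768 image, size=256 with sliding=1
# gives a slide of 256 px, so the loops above produce 3 rows x 4 columns = 12 non-overlapping patches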
#export
def crop_images(images:list, out_path:pathlib.PosixPath, size:int=256, sliding:int=4):
""" データセットを指定されたサイズでクロップして、out_path に保存する """
for fname in tqdm(images):
crop_image(fname, out_path=out_path, size=size, sliding=sliding)
# split the div2k_train_hr dataset into 256x256 patches with sliding=1
crop_images(div2k_train_hr.ls(), div2k_train_hr_crop_256, size=256, sliding=1)
# split the div2k_train_hr dataset into 256x256 patches with sliding=4
#crop_images(div2k_train_hr.ls(), div2k_train_hr_crop_256s4, size=256, sliding=4)
###Output
_____no_output_____ |
linear_regress_m1_ex2_v4-newV.ipynb | ###Markdown
Linear RegressionWelcome to your second assignment. This exercise gives you a brief introduction to linear regression. The exercise is to be implemented in Python. Even if you've used Python before, this will help familiarize you with functions we'll need. **Instructions:**- You will be using Python 3.- Avoid using for-loops and while-loops, unless you are explicitly told to do so.- Do not modify the ( GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.- After coding your function, run the cell right below it to check if your result is correct.- The token generated by Coursera (COURSERA_TOKEN) expires every 30 minutes. It is advisable to always work with the most recent generated token so as to avoid any submission related errors. If you receive such error messages, rerun the cells containing your code and the GRADED FUNCTION in the same order. **After this assignment you will:**- Be able to implement linear regression model using statsmodels, scikit-learn, and tensorflow- Work with simulated non-linear dataset- Compare model performance (quality of fit) of both models- - The blue button "Submit Assignment" does not work. After running all the cells, please go directly to Assignment-> My submission to see your results.Let's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the START CODE HERE and END CODE HERE comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
###Code
import os
import numpy as np
import sys
sys.path.append("..")
import grading
try:
import matplotlib.pyplot as plt
%matplotlib inline
except: pass
import pandas as pd
import tensorflow as tf
from tensorflow.python.layers import core as core_layers
try:
from mpl_toolkits.mplot3d import Axes3D
except: pass
### ONLY FOR GRADING. DO NOT EDIT ###
submissions=dict()
assignment_key="QNZTAPW2Eeeg_w5MCivhhg"
all_parts=["dtA5d", "2inmf", "FCpek","78aDd","qlQVj"]
### ONLY FOR GRADING. DO NOT EDIT ###
COURSERA_TOKEN = "UH0MvFn8j1PwFhxf" # the key provided to the Student under his/her email on submission page
COURSERA_EMAIL = "[email protected]"# the email
def reset_graph(seed=42):
"""
Utility function to reset current tensorflow computation graph
and set the random seed
"""
# to make results reproducible across runs
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
###Output
_____no_output_____
###Markdown
We use artificial data for the following two specifications of regression: Linear Regression$ y(x) = a + b_1 \cdot X_1 + b_2 \cdot X_2 + b_3 \cdot X_3 + \sigma \cdot \varepsilon $ where $ \varepsilon \sim N(0, 1) $ is Gaussian noise, and $ \sigma $ is its volatility, with the following choice of parameters:$ a = 1.0 $$ b_1, b_2, b_3 = (0.5, 0.2, 0.1) $$ \sigma = 0.1 $$ X_1, X_2, X_3 $ will be uniformly distributed in $ [-1,1] $ Non-Linear Regression$ y(x) = a + w_{00} \cdot X_1 + w_{01} \cdot X_2 + w_{02} \cdot X_3 + w_{10} \cdot X_1^2 + w_{11} \cdot X_2^2 + w_{12} \cdot X_3^2 + \sigma \cdot \varepsilon $ where$ w = [[1.0, 0.5, 0.2],[0.5, 0.3, 0.15]] $and the rest of the parameters are as above, with the same values of $ X_i $ Generate Data
###Code
def generate_data(n_points=10000, n_features=3, use_nonlinear=True,
noise_std=0.1, train_test_split = 4):
"""
Arguments:
n_points - number of data points to generate
n_features - a positive integer - number of features
use_nonlinear - if True, generate non-linear data
train_test_split - an integer - what portion of data to use for testing
Return:
X_train, Y_train, X_test, Y_test, n_train, n_features
"""
# Linear data or non-linear data?
if use_nonlinear:
weights = np.array([[1.0, 0.5, 0.2],[0.5, 0.3, 0.15]])
else:
weights = np.array([1.0, 0.5, 0.2])
bias = np.ones(n_points).reshape((-1,1))
low = - np.ones((n_points,n_features),'float')
high = np.ones((n_points,n_features),'float')
np.random.seed(42)
X = np.random.uniform(low=low, high=high)
np.random.seed(42)
noise = np.random.normal(size=(n_points, 1))
noise_std = 0.1
if use_nonlinear:
Y = (weights[0,0] * bias + np.dot(X, weights[0, :]).reshape((-1,1)) +
np.dot(X*X, weights[1, :]).reshape([-1,1]) +
noise_std * noise)
else:
Y = (weights[0] * bias + np.dot(X, weights[:]).reshape((-1,1)) +
noise_std * noise)
n_test = int(n_points/train_test_split)
n_train = n_points - n_test
X_train = X[:n_train,:]
Y_train = Y[:n_train].reshape((-1,1))
X_test = X[n_train:,:]
Y_test = Y[n_train:].reshape((-1,1))
return X_train, Y_train, X_test, Y_test, n_train, n_features
X_train, Y_train, X_test, Y_test, n_train, n_features = generate_data(use_nonlinear=False)
X_train.shape, Y_train.shape
###Output
_____no_output_____
###Markdown
Linear Regression with Numpy
###Code
# GRADED FUNCTION: numpy_lin_regress
def numpy_lin_regress(X_train, Y_train):
"""
numpy_lin_regress - Implements linear regression model using numpy module
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
### START CODE HERE ### (≈ 3 lines of code)
# number of features
ndim = X_train.shape[1]
# add the column of ones
X_train = np.hstack((np.ones((X_train.shape[0], 1)), X_train))
# default answer, replace this
theta_numpy = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(Y_train)
# theta_numpy = np.array([0.] * (ndim + 1))
### END CODE HERE ###
return theta_numpy
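# Illustrative cross-check (an added sketch, not part of the graded solution): the normal
# equation above should agree with numpy's least-squares solver on the same design matrix.
X_aug_check = np.hstack((np.ones((X_train.shape[0], 1)), X_train))
theta_lstsq, _, _, _ = np.linalg.lstsq(X_aug_check, Y_train, rcond=None)
np.allclose(numpy_lin_regress(X_train, Y_train), theta_lstsq)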
### GRADED PART (DO NOT EDIT) ###
theta_numpy = numpy_lin_regress(X_train, Y_train)
part_1 = list(theta_numpy.squeeze())
try:
part1 = " ".join(map(repr, part_1))
except TypeError:
part1 = repr(part_1)
submissions[all_parts[0]]=part1
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions)
theta_numpy.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Submission successful, please check on the coursera grader page for the status
###Markdown
Linear Regression with Sklearn
###Code
# GRADED FUNCTION: sklearn_lin_regress
def sklearn_lin_regress(X_train, Y_train):
"""
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
### START CODE HERE ### (≈ 3 lines of code)
# use lin_reg to fit training data
ndim = X_train.shape[1]
try:
from sklearn.linear_model import LinearRegression
except ImportError:
raise("ImportError: No module named sklearn.linear_model found")
X_train = np.hstack((np.ones((X_train.shape[0], 1)), X_train))
reg = LinearRegression(fit_intercept=False)
reg.fit(X_train, Y_train)
theta_sklearn = reg.coef_
### END CODE HERE ###
return theta_sklearn
### GRADED PART (DO NOT EDIT) ###
theta_sklearn = sklearn_lin_regress(X_train, Y_train)
part_2 = list(theta_sklearn.squeeze())
try:
part2 = " ".join(map(repr, part_2))
except TypeError:
part2 = repr(part_2)
submissions[all_parts[1]]=part2
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions)
theta_sklearn.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Submission successful, please check on the coursera grader page for the status
###Markdown
Linear Regression with Tensorflow
###Code
# GRADED FUNCTION: tf_lin_regress
def tf_lin_regress(X_train, Y_train):
"""
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
### START CODE HERE ### (≈ 7-8 lines of code)
# add the column of ones
n_train = len(X_train)
x_np = np.hstack((np.ones(n_train).reshape(-1,1),X_train))
# define theta for later evaluation
x = tf.constant(x_np,dtype = tf.float32,name = 'x')
y = tf.constant(Y_train,dtype = tf.float32,name = 'y')
xt = tf.transpose(x)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(xt,x)),xt),y)
### END CODE HERE ###
with tf.Session() as sess:
theta_value = theta.eval()
return theta_value
### GRADED PART (DO NOT EDIT) ###
theta_tf = tf_lin_regress(X_train, Y_train)
part_3 = list(theta_tf.squeeze())
try:
part3 = " ".join(map(repr, part_3))
except TypeError:
part3 = repr(part_3)
submissions[all_parts[2]]=part3
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions)
theta_tf.squeeze()
### GRADED PART (DO NOT EDIT) ###
class LinRegressNormalEq:
"""
class LinRegressNormalEq - implements normal equation, maximum likelihood estimator (MLE) solution
"""
def __init__(self, n_features, learning_rate=0.05, L=0):
import math as m
# input placeholders
self.X = tf.placeholder(tf.float32, [None, n_features], name="X")
self.Y = tf.placeholder(tf.float32, [None, 1], name="Y")
# regression parameters for the analytical solution using the Normal equation
self.theta_in = tf.placeholder(tf.float32, [n_features+1,None])
# Augmented data matrix is obtained by adding a column of ones to the data matrix
data_plus_bias = tf.concat([tf.ones([tf.shape(self.X)[0], 1]), self.X], axis=1)
XT = tf.transpose(data_plus_bias)
#############################################
# The normal equation for Linear Regression
self.theta = tf.matmul(tf.matmul(
tf.matrix_inverse(tf.matmul(XT, data_plus_bias)), XT), self.Y)
# mean square error in terms of theta = theta_in
self.lr_mse = tf.reduce_mean(tf.square(
tf.matmul(data_plus_bias, self.theta_in) - self.Y))
#############################################
# Estimate the model using the Maximum Likelihood Estimation (MLE)
# regression parameters for the Maximum Likelihood method
# Note that there are n_features+2 parameters, as one is added for the intercept,
# and another one for the std of noise
self.weights = tf.Variable(tf.random_normal([n_features+2, 1]))
# prediction from the model
self.output = tf.matmul(data_plus_bias, self.weights[:-1, :])
gauss = tf.distributions.Normal(loc=0.0, scale=1.0)
# Standard deviation of the Gaussian noise is modelled as a square of the
# last model weight
sigma = 0.0001 + tf.square(self.weights[-1])
# though a constant sqrt(2*pi) is not needed to find the best parameters, here we keep it
# to get the value of the log-LL right
pi = tf.constant(m.pi)
log_LL = tf.log(0.00001 + (1/( tf.sqrt(2*pi)*sigma)) * gauss.prob((self.Y - self.output) / sigma ))
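        # Note (added comment): up to additive constants, log_LL is the Gaussian log-likelihood
        #   log N(Y | output, sigma^2) = -log(sigma) - 0.5*log(2*pi) - (Y - output)^2 / (2*sigma^2).
        # The explicit 1/sqrt(2*pi) factor duplicates the one already inside gauss.prob, which only
        # shifts the loss by a constant and leaves the fitted weights unchanged.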
self.loss = - tf.reduce_mean(log_LL)
self.train_step = (tf.train.AdamOptimizer(learning_rate).minimize(self.loss), -self.loss)
# GRADED FUNCTION: run_normal_eq
def run_normal_eq(X_train, Y_train, X_test, Y_test, learning_rate=0.05):
"""
Implements normal equation using tensorflow, trains the model using training data set
Tests the model quality by computing mean square error (MSE) of the test data set
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
X_test - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_test - np.array of size (n by 1) where n is the number of observations of dependent variable
Return a tuple of:
- np.array of size (k+1 by 1) of regression coefficients
- mean square error (MSE) of the test data set
- mean square error (MSE) of the training data set
"""
# create an instance of the Linear Regression model class
n_features = X_train.shape[1]
model = LinRegressNormalEq(n_features=n_features, learning_rate=learning_rate)
### START CODE HERE ### (≈ 10-15 lines of code)
# train the model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Normal equation for Linear Regression
theta_value = sess.run(model.theta,feed_dict={
model.X : X_train,
model.Y : Y_train
})
lr_mse_train = sess.run(model.lr_mse,feed_dict={
model.X : X_train,
model.Y : Y_train,
model.theta_in : theta_value
})
lr_mse_test = sess.run(model.lr_mse,feed_dict = {
model.X : X_test,
model.Y : Y_test,
model.theta_in : theta_value
})
### END CODE HERE ###
return theta_value, lr_mse_train, lr_mse_test
### (DO NOT EDIT) ###
theta_value, lr_mse_train, lr_mse_test = run_normal_eq(X_train, Y_train, X_test, Y_test)
### (DO NOT EDIT) ###
### GRADED PART (DO NOT EDIT) ###
part_4 = list(theta_value.squeeze())
try:
part4 = " ".join(map(repr, part_4))
except TypeError:
part4 = repr(part_4)
submissions[all_parts[3]]=part4
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions)
theta_value.squeeze()
### GRADED PART (DO NOT EDIT) ###
# GRADED FUNCTION: run_mle# GRADED
def run_mle(X_train, Y_train, X_test, Y_test, learning_rate=0.05, num_iter=5000):
"""
Maximum likelihood Estimate (MLE)
Tests the model quality by computing mean square error (MSE) of the test data set
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
X_test - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_test - np.array of size (n by 1) where n is the number of observations of dependent variable
Return a tuple of:
- np.array of size (k+1 by 1) of regression coefficients
- mean square error (MSE) of the test data set
- mean square error (MSE) of the training data set
"""
# create an instance of the Linear Regression model class
n_features = X_train.shape[1]
model = LinRegressNormalEq(n_features=n_features, learning_rate=learning_rate)
# train the model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Now train the MLE parameters
for _ in range(num_iter):
(_ , loss), weights = sess.run((model.train_step, model.weights), feed_dict={
model.X: X_train,
model.Y: Y_train
})
# make test_prediction
Y_test_predicted = sess.run(model.output, feed_dict={model.X: X_test})
# output std sigma is a square of the last weight
std_model = weights[-1]**2
sess.close()
return weights[0:-1].squeeze(), loss, std_model
weights, loss, std_model = run_mle(X_train, Y_train, X_test, Y_test)
### GRADED PART (DO NOT EDIT) ###
part_5 = list(weights.squeeze())
try:
part5 = " ".join(map(repr, part_5))
except TypeError:
part5 = repr(part_5)
submissions[all_parts[4]]=part5
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:5],all_parts,submissions)
weights.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Submission successful, please check on the coursera grader page for the status
###Markdown
Linear RegressionWelcome to your second assignment. This exercise gives you a brief introduction to linear regression. The exercise is to be implemented in Python. Even if you've used Python before, this will help familiarize you with functions we'll need. **Instructions:**- You will be using Python 3.- Avoid using for-loops and while-loops, unless you are explicitly told to do so.- Do not modify the ( GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.- After coding your function, run the cell right below it to check if your result is correct.- The token generated by Coursera (COURSERA_TOKEN) expires every 30 minutes. It is advisable to always work with the most recent generated token so as to avoid any submission related errors. If you receive such error messages, rerun the cells containing your code and the GRADED FUNCTION in the same order. **After this assignment you will:**- Be able to implement linear regression model using statsmodels, scikit-learn, and tensorflow- Work with simulated non-linear dataset- Compare model performance (quality of fit) of both models- - The blue button "Submit Assignment" does not work. After running all the cells, please go directly to Assignment-> My submission to see your results.Let's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the START CODE HERE and END CODE HERE comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
###Code
import os
import numpy as np
import sys
sys.path.append("..")
import grading
try:
import matplotlib.pyplot as plt
%matplotlib inline
except: pass
import pandas as pd
import tensorflow as tf
from tensorflow.python.layers import core as core_layers
try:
from mpl_toolkits.mplot3d import Axes3D
except: pass
### ONLY FOR GRADING. DO NOT EDIT ###
submissions=dict()
assignment_key="QNZTAPW2Eeeg_w5MCivhhg"
all_parts=["dtA5d", "2inmf", "FCpek","78aDd","qlQVj"]
### ONLY FOR GRADING. DO NOT EDIT ###
COURSERA_TOKEN = 'KCTCtHVXbZzKLZys'# the key provided to the Student under his/her email on submission page
COURSERA_EMAIL = '[email protected]' # the email
def reset_graph(seed=42):
"""
Utility function to reset current tensorflow computation graph
and set the random seed
"""
# to make results reproducible across runs
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
###Output
_____no_output_____
###Markdown
We use artificial data for the following two specifications of regression: Linear Regression$ y(x) = a + b_1 \cdot X_1 + b_2 \cdot X_2 + b_3 \cdot X_3 + \sigma \cdot \varepsilon $ where $ \varepsilon \sim N(0, 1) $ is Gaussian noise, and $ \sigma $ is its volatility, with the following choice of parameters:$ a = 1.0 $$ b_1, b_2, b_3 = (0.5, 0.2, 0.1) $$ \sigma = 0.1 $$ X_1, X_2, X_3 $ will be uniformly distributed in $ [-1,1] $ Non-Linear Regression$ y(x) = a + w_{00} \cdot X_1 + w_{01} \cdot X_2 + w_{02} \cdot X_3 + w_{10} \cdot X_1^2 + w_{11} \cdot X_2^2 + w_{12} \cdot X_3^2 + \sigma \cdot \varepsilon $ where$ w = [[1.0, 0.5, 0.2],[0.5, 0.3, 0.15]] $and the rest of the parameters are as above, with the same values of $ X_i $ Generate Data
###Code
def generate_data(n_points=10000, n_features=3, use_nonlinear=True,
noise_std=0.1, train_test_split = 4):
"""
Arguments:
n_points - number of data points to generate
n_features - a positive integer - number of features
use_nonlinear - if True, generate non-linear data
train_test_split - an integer - what portion of data to use for testing
Return:
X_train, Y_train, X_test, Y_test, n_train, n_features
"""
# Linear data or non-linear data?
if use_nonlinear:
weights = np.array([[1.0, 0.5, 0.2],[0.5, 0.3, 0.15]])
else:
weights = np.array([1.0, 0.5, 0.2])
bias = np.ones(n_points).reshape((-1,1))
low = - np.ones((n_points,n_features),'float')
high = np.ones((n_points,n_features),'float')
np.random.seed(42)
X = np.random.uniform(low=low, high=high)
np.random.seed(42)
noise = np.random.normal(size=(n_points, 1))
noise_std = 0.1
if use_nonlinear:
Y = (weights[0,0] * bias + np.dot(X, weights[0, :]).reshape((-1,1)) +
np.dot(X*X, weights[1, :]).reshape([-1,1]) +
noise_std * noise)
else:
Y = (weights[0] * bias + np.dot(X, weights[:]).reshape((-1,1)) +
noise_std * noise)
n_test = int(n_points/train_test_split)
n_train = n_points - n_test
X_train = X[:n_train,:]
Y_train = Y[:n_train].reshape((-1,1))
X_test = X[n_train:,:]
Y_test = Y[n_train:].reshape((-1,1))
return X_train, Y_train, X_test, Y_test, n_train, n_features
X_train, Y_train, X_test, Y_test, n_train, n_features = generate_data(use_nonlinear=False)
X_train.shape, Y_train.shape
###Output
_____no_output_____
###Markdown
Linear Regression with Numpy
###Code
# GRADED FUNCTION: numpy_lin_regress
def numpy_lin_regress(X_train, Y_train):
"""
numpy_lin_regress - Implements linear regression model using numpy module
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
### START CODE HERE ### (≈ 3 lines of code)
# number of features
n, k = X_train.shape
# add the column of ones
column_of_ones = np.ones(n).reshape((-1,1))
Design_Matrix = np.concatenate((column_of_ones, X_train), axis=1)
# default answer, replace this
# theta_numpy = np.array([0.] * (ndim + 1))
X = Design_Matrix
Y = Y_train
theta_numpy = np.linalg.pinv(X.T@X)@X.T@Y_train
### END CODE HERE ###
return theta_numpy
### GRADED PART (DO NOT EDIT) ###
theta_numpy = numpy_lin_regress(X_train, Y_train)
part_1 = list(theta_numpy.squeeze())
try:
part1 = " ".join(map(repr, part_1))
except TypeError:
part1 = repr(part_1)
submissions[all_parts[0]]=part1
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions)
theta_numpy.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Something went wrong, please have a look at the reponse of the grader
-------------------------
{"errorCode":"invalidEmailOrToken","message":"Invalid email or token.","details":null}
-------------------------
###Markdown
Linear Regression with Sklearn
###Code
# GRADED FUNCTION: sklearn_lin_regress
def sklearn_lin_regress(X_train, Y_train):
"""
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
### START CODE HERE ### (≈ 3 lines of code)
# use lin_reg to fit training data
lin_reg.fit(X_train, Y_train)
theta_sklearn = np.concatenate((lin_reg.intercept_.reshape((-1,1)), lin_reg.coef_.reshape((-1,1))), axis=0)
### END CODE HERE ###
return theta_sklearn
### GRADED PART (DO NOT EDIT) ###
theta_sklearn = sklearn_lin_regress(X_train, Y_train)
part_2 = list(theta_sklearn.squeeze())
try:
part2 = " ".join(map(repr, part_2))
except TypeError:
part2 = repr(part_2)
submissions[all_parts[1]]=part2
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions)
theta_sklearn.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Something went wrong, please have a look at the reponse of the grader
-------------------------
{"errorCode":"invalidEmailOrToken","message":"Invalid email or token.","details":null}
-------------------------
###Markdown
Linear Regression with Tensorflow
###Code
# GRADED FUNCTION: tf_lin_regress
def tf_lin_regress(X_train, Y_train):
"""
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
Return:
np.array of size (k+1 by 1) of regression coefficients
"""
### START CODE HERE ### (≈ 7-8 lines of code)
# add the column of ones
# define theta for later evaluation
ndim = X_train.shape[1]
### START CODE HERE ### (≈ 7-8 lines of code)
X_train = np.hstack((np.ones((X_train.shape[0], 1)), X_train))
X = tf.constant(X_train, dtype=tf.float32, name="X")
Y = tf.constant(Y_train, dtype=tf.float32, name="Y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), Y)
### END CODE HERE ###
with tf.Session() as sess:
theta_value = theta.eval()
return theta_value
### GRADED PART (DO NOT EDIT) ###
theta_tf = tf_lin_regress(X_train, Y_train)
part_3 = list(theta_tf.squeeze())
try:
part3 = " ".join(map(repr, part_3))
except TypeError:
part3 = repr(part_3)
submissions[all_parts[2]]=part3
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions)
theta_tf.squeeze()
### GRADED PART (DO NOT EDIT) ###
class LinRegressNormalEq:
"""
class LinRegressNormalEq - implements normal equation, maximum likelihood estimator (MLE) solution
"""
def __init__(self, n_features, learning_rate=0.05, L=0):
import math as m
# input placeholders
self.X = tf.placeholder(tf.float32, [None, n_features], name="X")
self.Y = tf.placeholder(tf.float32, [None, 1], name="Y")
# regression parameters for the analytical solution using the Normal equation
self.theta_in = tf.placeholder(tf.float32, [n_features+1,None])
# Augmented data matrix is obtained by adding a column of ones to the data matrix
data_plus_bias = tf.concat([tf.ones([tf.shape(self.X)[0], 1]), self.X], axis=1)
XT = tf.transpose(data_plus_bias)
#############################################
# The normal equation for Linear Regression
self.theta = tf.matmul(tf.matmul(
tf.matrix_inverse(tf.matmul(XT, data_plus_bias)), XT), self.Y)
# mean square error in terms of theta = theta_in
self.lr_mse = tf.reduce_mean(tf.square(
tf.matmul(data_plus_bias, self.theta_in) - self.Y))
#############################################
# Estimate the model using the Maximum Likelihood Estimation (MLE)
# regression parameters for the Maximum Likelihood method
# Note that there are n_features+2 parameters, as one is added for the intercept,
# and another one for the std of noise
self.weights = tf.Variable(tf.random_normal([n_features+2, 1]))
# prediction from the model
self.output = tf.matmul(data_plus_bias, self.weights[:-1, :])
gauss = tf.distributions.Normal(loc=0.0, scale=1.0)
# Standard deviation of the Gaussian noise is modelled as a square of the
# last model weight
sigma = 0.0001 + tf.square(self.weights[-1])
# though a constant sqrt(2*pi) is not needed to find the best parameters, here we keep it
# to get the value of the log-LL right
pi = tf.constant(m.pi)
log_LL = tf.log(0.00001 + (1/( tf.sqrt(2*pi)*sigma)) * gauss.prob((self.Y - self.output) / sigma ))
self.loss = - tf.reduce_mean(log_LL)
self.train_step = (tf.train.AdamOptimizer(learning_rate).minimize(self.loss), -self.loss)
# GRADED FUNCTION: run_normal_eq
def run_normal_eq(X_train, Y_train, X_test, Y_test, learning_rate=0.05):
"""
Implements normal equation using tensorflow, trains the model using training data set
Tests the model quality by computing mean square error (MSE) of the test data set
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
X_test - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_test - np.array of size (n by 1) where n is the number of observations of dependent variable
Return a tuple of:
- np.array of size (k+1 by 1) of regression coefficients
- mean square error (MSE) of the test data set
- mean square error (MSE) of the training data set
"""
# create an instance of the Linear Regression model class
n_features = X_train.shape[1]
model = LinRegressNormalEq(n_features=n_features, learning_rate=learning_rate)
### START CODE HERE ### (≈ 10-15 lines of code)
# train the model
ndim = X_train.shape[1]
lr_mse_train = 0.
lr_mse_test = 0.
X = tf.placeholder(tf.float32, [None, ndim], name="X")
Y = tf.placeholder(tf.float32, [None, 1], name="Y")
theta_in = tf.placeholder(tf.float32, [ndim + 1, None])
data_plus_bias = tf.concat([tf.ones([tf.shape(X)[0], 1]), X], axis=1)
XT = tf.transpose(data_plus_bias)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, data_plus_bias)), XT), Y)
lr_mse = tf.reduce_mean(tf.square(tf.matmul(data_plus_bias, theta_in) - Y))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
theta_value = sess.run(theta, feed_dict={X: X_train, Y: Y_train})
lr_mse_train = sess.run(lr_mse, feed_dict={X: X_train, Y: Y_train, theta_in: theta_value})
lr_mse_test = sess.run(lr_mse, feed_dict={X: X_train, Y: Y_train, theta_in: theta_value})
# Normal equation for Linear Regression
### END CODE HERE ###
return theta_value, lr_mse_train, lr_mse_test
### (DO NOT EDIT) ###
theta_value, lr_mse_train, lr_mse_test = run_normal_eq(X_train, Y_train, X_test, Y_test)
### (DO NOT EDIT) ###
### GRADED PART (DO NOT EDIT) ###
part_4 = list(theta_value.squeeze())
try:
part4 = " ".join(map(repr, part_4))
except TypeError:
part4 = repr(part_4)
submissions[all_parts[3]]=part4
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions)
theta_value.squeeze()
### GRADED PART (DO NOT EDIT) ###
# GRADED FUNCTION: run_mle# GRADED
def run_mle(X_train, Y_train, X_test, Y_test, learning_rate=0.05, num_iter=5000):
"""
Maximum likelihood Estimate (MLE)
Tests the model quality by computing mean square error (MSE) of the test data set
Arguments:
X_train - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_train - np.array of size (n by 1) where n is the number of observations of dependent variable
X_test - np.array of size (n by k) where n is number of observations
of independent variables and k is number of variables
    Y_test - np.array of size (n by 1) where n is the number of observations of dependent variable
Return a tuple of:
- np.array of size (k+1 by 1) of regression coefficients
- mean square error (MSE) of the test data set
- mean square error (MSE) of the training data set
"""
# create an instance of the Linear Regression model class
n_features = X_train.shape[1]
model = LinRegressNormalEq(n_features=n_features, learning_rate=learning_rate)
# train the model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Now train the MLE parameters
for _ in range(num_iter):
(_ , loss), weights = sess.run((model.train_step, model.weights), feed_dict={
model.X: X_train,
model.Y: Y_train
})
# make test_prediction
Y_test_predicted = sess.run(model.output, feed_dict={model.X: X_test})
# output std sigma is a square of the last weight
std_model = weights[-1]**2
sess.close()
return weights[0:-1].squeeze(), loss, std_model
weights, loss, std_model = run_mle(X_train, Y_train, X_test, Y_test)
### GRADED PART (DO NOT EDIT) ###
part_5 = list(weights.squeeze())
try:
part5 = " ".join(map(repr, part_5))
except TypeError:
part5 = repr(part_5)
submissions[all_parts[4]]=part5
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:5],all_parts,submissions)
weights.squeeze()
### GRADED PART (DO NOT EDIT) ###
###Output
Something went wrong, please have a look at the reponse of the grader
-------------------------
{"errorCode":"invalidEmailOrToken","message":"Invalid email or token.","details":null}
-------------------------
|
Tutorial C - Modeling Module's Performance Advanced.ipynb | ###Markdown
![tutorialpromo](images/tutorial_banner.PNG) Tutorial C - PV Module Performance AdvancedNow that we know how to obtain the plane of array (POA) irradiance and cell temperature, let's calculate a module's performance assuming a subset of irradiance and temperature conditions. The objectives for this tutorial are to use pvlib python to do the following:1. Retrieve a set of module CEC parameters from the latest [NREL System Advisor Model (SAM)](https://sam.nrel.gov/) library.2. Calculate the single diode model (SDM) parameters at standard test conditions (STC) and for a set of PV module test conditions known as IEC61853.3. Produce an IV curve for each of the IEC61853 test conditions.4. Derive the California Energy Commission (CEC) model parameters based on standard CEC test measurements.![Overview](images/tutorial_4_overview.PNG) PV Concepts:- STC Parameters- IV Curve- PV module energy conversion models - Point Models - [PVWatts](https://pvwatts.nrel.gov/) - [Sandia Array Performance Model (SAPM)](https://energy.sandia.gov/wp-content/gallery/uploads/043535.pdf) - Continuous Models - CEC - [PVSyst](https://www.pvsyst.com/) - [DeSoto](https://doi.org/10.1016/j.solener.2005.06.010)- Low light and temperature module performance- IEC 61853 standard Python Concepts:- [`numpy.meshgrid`](https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html)- `try: except` clauses to catch errors- Transpose a Pandas dataframe- [Pandas series and index string methods](https://pandas.pydata.org/pandas-docs/stable/user_guide/text.htmlstring-methods): _e.g._ `df.index.str.startswith('Canadian')` where `df` is a dataframe of PV modules. Standard Test Conditions (STC)The most basic condition is called "standard test conditions" or STC, which is considered the reference for most PV modules. For example, all of the PV modules in the SAM CEC module database list their nameplate power at STC.* irradiance: 1000-W/m²* cell temperature: 25°C* angle of incidence (AOI): 0°* spectrum: AM1.5g (ASTM G-173) Air mass (AM)The standard reference AM1.5g (ASTM G-173) is defined as the solar spectrum of global irradiance that passes through 1.5 atmospheres. For more information see [NREL Solar Spectra](https://www.nrel.gov/grid/solar-resource/spectra.html). IEC 61853 test conditionsAnother common set of test conditions is the IEC 61853 standard which provides a PV module test matrix that covers the expected range of inicident irradiance and cell temperatures for PV modules assuming that the irradiance is normal and the solar spectrum is similar to AM1.5g.* irradiance (W/m²): 100, 200, 400, 600, 800, 1000, 1100* module temperature (°C): 15, 25, 50, 75* angle of incidence: 0°* spectrum: AM1.5g (ASTM G-173)The figure below shows IEC 61853 test results performed at CFV labs:![Overview](images/t4_PANOverview.PNG) Certain combinations are excluded because they're unlikely: (1100-W/m², 15°C), (400-W/m², 75°C), (200-W/m², 50°C), (200-W/m², 75°C), (100-W/m², 50°C), and (100-W/m², 75°C). The figure below shows SAM parametrs fit to IEC61853 test results for two different cell technologies. The white space shows combinations of irradiance and temperature which were intentionally excluded from testing.![Overview](images/t4_SingleDiodeParameterBehavior_to_TemperatureandIrradiance.PNG)Attribution: [NREL Dobos, Freeman 2019](https://www.nrel.gov/docs/fy19osti/68637.pdf)
###Code
# import pvlib and other useful python packages
import pvlib
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
# set STC reference conditions
E0 = 1000 # W/m^2
T0 = 25 # degC
# set the IEC61853 test matrix
E_IEC61853 = [100, 200, 400, 600, 800, 1000, 1100] # irradiances [W/m^2]
T_IEC61853 = [15, 25, 50, 75] # temperatures [degC]
# create a meshgrid of temperatures and irradiances
# for all 28 combinations in the test matrix
IEC61853 = np.meshgrid(T_IEC61853, E_IEC61853)
# meshgrid returns two 2-D arrays in the same order as the input arguments
# so the first item in the output is a 2-D array of temperatures and
# the second item is the mesh of irradiances
# display temperature and irradiance test matrices
IEC61853
###Output
_____no_output_____
###Markdown
Single Diode Model (SDM) & IV CurvePV module performance can be modeled using *point* or *continuous* IV-curve models. Point models like PVWatts and The Sandia Array Performance Model (SAPM, _aka_ King model) yield the current (I), voltage (V), and power (P) at a single or discrete set of points. PVWatts only yields the performance at the max power point (MPP) of the module, whereas the SAPM also yields the short circuit current (Isc), open circuit voltage (Voc).Continuous IV curve models like the CEC, PVsyst, and DeSoto models yield a relation between current and voltage called an IV curve, and therefore yield a continuous set of (V, I) points spanning from Isc to Voc and beyond. The domain of the IV curve is in quadrants 1, 2, and 4 where voltage is on the horizontal and current is on the vertical axis. The figure below from [PV Education PVCDROM](https://www.pveducation.org/pvcdrom/welcome-to-pvcdrom) shows an IV curve of an "ideal" cell.[![IV curve](images/t4_PVEducation_IVCurve.PNG)](https://www.pveducation.org/pvcdrom/modules-and-arrays/mismatch-effects)Attribution: [PV Education, UNSW, ASU, _et al._](https://www.pveducation.org/pvcdrom/modules-and-arrays/mismatch-effects) The IV curve relationship is based on an electrical analog called the "single diode model" or SDM which is defined by 5 parameters: the light current ($I_L$), shunt resistance ($R_{sh}$), series resistance ($R_s$), diode saturation current ($I_o$ or $I_{sat}$), and the diode ideality factor ($n$). Other symbols for diode ideality factor are ($\gamma$) used by PVsyst and ($a$) used by SAM, but ($\gamma$) is also frequently used for power temperature coefficient. This "ideal" cell is described by the electrical schematic drawing below.[![single diode model](https://pvpmc.sandia.gov/wp-content/uploads/2012/04/Single-Diode-EC2.png)](https://pvpmc.sandia.gov/modeling-steps/2-dc-module-iv/diode-equivalent-circuit-models/)Attribution: [Sandia NL PV Performance Modeling Collaborative](https://pvpmc.sandia.gov/modeling-steps/2-dc-module-iv/diode-equivalent-circuit-models/)Combining the components in the SDM using Ohm's and Kirchhoff's laws yields the following equation, which is implicit because current (I) is on both sides of the equation, and cannot be solved explicitly:$$ I = I_L - I_o \left( \exp \left( \frac{V + I R_s}{n V_T} \right) - 1 \right) - \frac{V + I R_s}{R_{sh}} $$with the diode voltage ($V_D = V + I R_s$), the diode current ($I_D$) given by the ideal diode equation:$$ I_D = I_o \left( \exp \left( \frac{V_D}{n V_T} \right) - 1 \right) $$the thermal voltage ($V_T = k_T / q_e$), elementary charge ($q_e$), Boltzmann constant ($k_T$), and the shunt current ($I_{sh} = V_D / R_{sh}$) CEC Model (_aka_ SAM or 6-parameter model)The California Energy Commision (CEC) contracted authorized testing labs to measure at STC the nameplate power (Pmp), Isc, Voc, and the MPP voltage and current (Vmp, Imp), as well as Isc and Voc temperature coefficients, the module dimensions, the number of series cells (Ns), parallel substrings (Np), module area in m² (Ac), and more. Tables of the CEC module parameters are available from the [Solar Equipment Lists](https://www.energy.ca.gov/programs-and-topics/programs/solar-equipment-lists). These measurements have been fit to the SDM by the NREL System Advisor Model (SAM) and stored in a CSV file that is bundled with SAM. You can access the [SAM library on GitHub](https://github.com/NREL/SAM/tree/develop/deploy/libraries). 
This SAM library of module coefficients derived from the CEC measurements are collectively called CEC modules and the SAM model that uses the derived SDM coefficients is called the CEC model. The CEC model used in SAM is sometimes also called the 6-parameter model because of the `Adjust` additional parameter which differentiates it from the DeSoto model. pvlib pythonThere are several functions we can use in pvlib python:* [`pvlib.pvsystem.retrieve_sam()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.pvsystem.retrieve_sam.htmlpvlib.pvsystem.retrieve_sam)* [`pvlib.pvsystem.calcparams_cec()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.pvsystem.calcparams_cec.html)* [`pvlib.pvsystem.singlediode()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.pvsystem.singlediode.html)* [`pvlib.ivtools.sdm.fit_cec_sam()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.ivtools.sdm.fit_cec_sam.html)
###Code
# use pvlib python to retrieve CEC module parameters from the SAM libraries
# with the "name" argument set to "CECMod"
CECMODS = pvlib.pvsystem.retrieve_sam(name='CECMod')
# the CEC modules are a pandas DataFrame oriented as columns, transpose to arrange
# as indices
CECMODS.T.head()
###Output
_____no_output_____
###Markdown
CEC modules libraryPeriodically a static copy of CEC module parameters is copied from the SAM library to pvlib python. The modules are roughly named according the following scheme: Whitespace, dashes, and other non-alphanumerical characters are all replaced by underscores in pvlib python.EG: "Canadian Solar Inc. CS6X-300M" becomes Canadian_Solar_Inc__CS6X_300M Let's Explore: Click here to see the panel's Datasheet. The main CEC module parameters are defined as follows:| parameter | data type | description and units || ------------ | --------- | --------------------------------------------------------------------------------------------- || `Technology` | string | one of "Mono-c-Si", "Multi-c-Si", "Thin Film", "CdTe", or "CIGS" families of cells || `Bifacial` | boolean | is bifacial? || `STC` | float | nameplate in W at STC || `PTC` | float | nameplate in W at PVUSA test conditions (1-sun, 20° ambient temperature, 1-m/s windspeed) || `A_c` | float | module area in m² || `Length` | float | module length in m; || `Width` | float | module width in m; || `N_s` | int | number of cells in series || `I_sc_ref` | float | short circuit current in A at reference condition || `V_oc_ref` | float | open circuit voltage in V at reference condition || `I_mp_ref` | float | max power current in A at reference condition || `V_mp_ref` | float | max power voltage in V at reference condition || `alpha_sc` | float | short circuit current temperature coefficient in A/Δ°C || `beta_oc` | float | open circuit voltage temperature coefficient in V/Δ°C || `T_NOCT` | float | normal operating cell temperature in °C || `a_ref` | float | diode ideality factor || `I_L_ref` | float | light or photogenerated current at reference condition in A || `I_o_ref` | float | diode saturation current at reference condition in A || `R_s` | float | series resistance in Ω || `R_sh_ref` | float | shunt resistance at reference condition in Ω || `Adjust` | float | adjustment to short circuit temperature coefficient in % || `gamma_r` | float | power temperature coefficient at reference condition in %/Δ°C || `BIPV` | boolean | is building integrated PV? |
###Code
# One trick to find the modules is to search the indices using string filters
# For example: find all Canadian Solar 220-W mono-Si modules
cs_220m_mods = CECMODS.T.index.str.startswith('Canadian_Solar') & CECMODS.T.index.str.contains('220M')
CECMODS.T[cs_220m_mods]
# that was almost too easy, let's use the CS5P-220M
# NOTE: don't transpose CECMODS, get column with desired module
CS_220M = CECMODS['Canadian_Solar_Inc__CS5P_220M']
CS_220M
###Output
_____no_output_____
###Markdown
Pop Quiz 1Get any CEC module from the `CECMODS` or pvfree.* Which module did you get?* How did you get it?* What is the module's nameplate power, Isc, Voc, Imp, and Vmp?* Who is the manufacturer?* What cell technology is it?* How does it differ from CS5P-220M?
###Code
# use this cell to search CECMODS.T or pvfree
your_mod = 'your module goes here'
try:
your_mod = CECMODS[your_mod]
except KeyError:
print(f"*** Sorry, '{your_mod}' wasn't found in CECMODS. Please try again. ***")
else:
# display your module
your_mod
###Output
*** Sorry, 'your module goes here' wasn't found in CECMODS. Please try again. ***
###Markdown
Calculate CEC ParametersThe module parameters are given at the reference condition. Use [`pvlib.pvsystem.calcparams_cec()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.pvsystem.calcparams_cec.html) to generate the five SDM parameters at your desired irradiance and temperature to use with [`pvlib.pvsystem.singlediode()`](https://pvlib-python.readthedocs.io/en/latest/generated/pvlib.pvsystem.singlediode.html) to calculate the IV curve information. `nNsVth`, what's this?The diode ideality factor (n) is combined with the number of cells (Ns) and the thermal voltage (Vth) to create a convenience parameter. This is [syntactic sugar](https://en.wikipedia.org/wiki/Syntactic_sugar).
###Code
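# Illustrative sketch (an addition, not from the original tutorial) of the "syntactic sugar"
# described above: the thermal voltage at STC is k*T/q (~25.7 mV), and the CEC parameter
# a_ref bundles it with the series cell count N_s and the diode ideality factor n.
from scipy.constants import Boltzmann, elementary_charge
Vth_stc = Boltzmann * (T0 + 273.15) / elementary_charge
print(Vth_stc)                                   # thermal voltage in volts at 25 degC
print(CS_220M.a_ref / (CS_220M.N_s * Vth_stc))   # implied diode ideality factor n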
# finally this is the magic
temp_cell, effective_irradiance = IEC61853
cecparams = pvlib.pvsystem.calcparams_cec(
effective_irradiance=effective_irradiance,
temp_cell=temp_cell,
alpha_sc=CS_220M.alpha_sc,
a_ref=CS_220M.a_ref,
I_L_ref=CS_220M.I_L_ref,
I_o_ref=CS_220M.I_o_ref,
R_sh_ref=CS_220M.R_sh_ref,
R_s=CS_220M.R_s,
Adjust=CS_220M.Adjust,
EgRef=1.121,
dEgdT=-0.0002677)
IL, I0, Rs, Rsh, nNsVth = cecparams
# display the photogenerated current
IL
###Output
_____no_output_____
###Markdown
IV Curve InfoNow that we have the 5 SDM parameters (`IL`, `Io`, `Rs`, `Rsh`, and `nNsVth`) corresponding to each of the test conditions in the IEC61853 test matrix, we can calculate the IV curve information for that irradiance and cell temperature.
###Code
# flatten the meshgrid to allow single diode to broadcast the output
curve_info = pvlib.pvsystem.singlediode(
photocurrent=IL.flatten(),
saturation_current=I0.flatten(),
resistance_series=Rs,
resistance_shunt=Rsh.flatten(),
nNsVth=nNsVth.flatten(),
ivcurve_pnts=101,
method='lambertw')
# display the max power points
curve_info['p_mp']
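# Illustrative sanity check (a sketch, not part of the original tutorial): every (V, I)
# point returned above should satisfy the implicit single diode equation,
#   I = IL - I0*(exp((V + I*Rs)/nNsVth) - 1) - (V + I*Rs)/Rsh,
# so the residual evaluated at the 28 max power points should be numerically ~0.
v_mp_check = curve_info['v_mp']
i_mp_check = curve_info['i_mp']
sdm_residual = (IL.flatten()
                - I0.flatten() * (np.exp((v_mp_check + i_mp_check * Rs) / nNsVth.flatten()) - 1)
                - (v_mp_check + i_mp_check * Rs) / Rsh.flatten()
                - i_mp_check)
print(np.abs(sdm_residual).max())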
# plot the calculated curves:
exclude = [(1100, 15), (400, 75), (200, 50), (200, 75), (100, 50), (100, 75)]
kolor = ['#1f77b4', '#2ca02c', '#8c564b', '#9467bd', '#d62728', '#e377c2', '#ff7f0e']
f, ax = plt.subplots(2, 2, figsize=(16, 10), sharex=True, sharey=True)
for m, irr in enumerate(E_IEC61853):
for n, tc in enumerate(T_IEC61853):
if (irr, tc) in exclude:
continue
i = n + 4*m
j = n // 2, n % 2
label = (
"$G_{eff}$ " + f"{irr} $W/m^2$"
)
ax[j].plot(curve_info['v'][i], curve_info['i'][i], label=label, c=kolor[m])
v_mp = curve_info['v_mp'][i]
i_mp = curve_info['i_mp'][i]
# mark the MPP
ax[j].plot(v_mp, i_mp, ls='', marker='o', c=kolor[m])
ax[j].vlines(v_mp, 0, i_mp, linestyle='dashed', color=kolor[m])
# just repeat this every time doesn't hurt anyone
ax[j].legend(loc='right')
if j[0] == 1:
ax[j].set_xlabel('Module voltage [V]')
if j[1] == 0:
ax[j].set_ylabel('Module current [A]')
ax[j].set_title(f"{CS_220M.name}, " + "$T_{cell}$ " + f"{tc} " + "$^{\circ}C$")
ax[j].grid(True)
ax[j].set_xlim([0, 80])
f.tight_layout()
###Output
_____no_output_____ |
Dynamic Programming/0801/647. Palindromic Substrings.ipynb | ###Markdown
Given a string, your task is to count how many palindromic substrings are in this string. Substrings with different start or end indices are counted as different substrings even if they consist of the same characters. Example 1: Input: "abc" Output: 3 Explanation: Three palindromic strings: "a", "b", "c". Example 2: Input: "aaa" Output: 6 Explanation: Six palindromic strings: "a", "a", "a", "aa", "aa", "aaa".
###Code
class Solution:
def countSubstrings(self, s: str) -> int:
dp = [[0] * len(s) for _ in range(len(s))]
count = 0
for i in range(len(s)):
dp[i][i] = 1
count += 1
for gap in range(1, len(s)):
for start in range(len(s) - gap):
end = start + gap
if s[start] == s[end]:
if gap <= 2:
dp[start][end] = 1
count += 1
elif gap > 2 and dp[start+1][end-1] != 0:
dp[start][end] = 1
count += 1
print(dp)
return count
s_ = "aaaaa"
solution = Solution()
solution.countSubstrings(s_)
# dp table produced by countSubstrings for s_ = "aaaaa": every substring of "aaaaa"
# is a palindrome, so every entry on or above the diagonal is 1 and the count is 15
[[1, 1, 1, 1, 1],
 [0, 1, 1, 1, 1],
 [0, 0, 1, 1, 1],
 [0, 0, 0, 1, 1],
 [0, 0, 0, 0, 1]]
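# Alternative approach (an illustrative sketch, not required by the problem): count the
# same palindromic substrings in O(1) extra space by expanding around each of the
# 2*len(s)-1 centers (every character and every gap between two characters).
def count_palindromic_substrings(s: str) -> int:
    count = 0
    for center in range(2 * len(s) - 1):
        left = center // 2
        right = left + center % 2
        while left >= 0 and right < len(s) and s[left] == s[right]:
            count += 1
            left -= 1
            right += 1
    return count

count_palindromic_substrings(s_)  # returns 15 for "aaaaa", matching the DP version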
###Output
_____no_output_____ |
a1/[DATA512A]_A1_Tony_Kim.ipynb | ###Markdown
Step 1: Gathering the data Referenced API code from this [link](https://public.paws.wmcloud.org/User:Jtmorgan/data512_a1_example.ipynb) which is under CC0 license.We will use the following APIs- Pagecounts API ([API documentation](https://wikimedia.org/api/rest_v1//Pagecounts_data_(legacy)/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end) )- The Pageviews API ([API documentation](https://wikimedia.org/api/rest_v1//Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end) API callsHere, we will define API endpoints for both PageCount API (legacy) and PageViews API. Next, we define headers for the API to include my own information
###Code
headers = {
'User-Agent': 'https://github.com/mightydeveloper',
'From': '[email protected]'
}
###Output
_____no_output_____
###Markdown
Next, we will define functions to make API calls more easily. Define PageCount API call, parameterized by `access_type`Note that this is the legacy API
###Code
def pagecount_api_call(access_type):
endpoint_legacy = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
parameters = {
"project" : "en.wikipedia.org",
"access-site" : access_type,
"granularity" : "monthly",
"start" : "2008010100",
# for end use 1st day of month following final month of data
"end" : "2020090100"
}
call = requests.get(endpoint_legacy.format(**parameters), headers=headers)
response = call.json()
return response
###Output
_____no_output_____
###Markdown
Define PageViews API call, parameterized by `access_type`
###Code
def pageviews_api_call(access_type):
endpoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
parameters = {
"project" : "en.wikipedia.org",
"access" : access_type,
"agent" : "user", # This will give only organic (user) traffic, as opposed to traffic by web crawlers or spiders
"granularity" : "monthly",
"start" : "2014100100",
# for end use 1st day of month following final month of data
"end" : '2020090100'
}
call = requests.get(endpoint_pageviews.format(**parameters), headers=headers)
response = call.json()
return response
###Output
_____no_output_____
###Markdown
Make API calls and save results Make API call to PageCounts endpoint and save the resultThis will make 2 API calls to gather `access_type` of `desktop-site` and `mobile_site`
###Code
api_name = 'pagecounts'
first_month = '200801'
last_month = '202008'
for access_type in ['desktop-site', 'mobile-site']:
json_obj = pagecount_api_call(access_type)
filename = f'{api_name}_{access_type}_{first_month}-{last_month}.json'
with open(filename, 'w') as f:
json.dump(json_obj, f)
###Output
_____no_output_____
###Markdown
Make API call to PageViews endpoint and save the resultThis will make 3 API calls to gather `access_type` of `desktop`, `mobile-app`, and `mobile-web`
###Code
api_name = 'pageviews'
first_month = '201410'
last_month = '202008'
for access_type in ['desktop', 'mobile-app', 'mobile-web']:
json_obj = pageviews_api_call(access_type)
filename = f'{api_name}_{access_type}_{first_month}-{last_month}.json'
with open(filename, 'w') as f:
json.dump(json_obj, f)
###Output
_____no_output_____
###Markdown
Step 2: Processing the data Read the data
###Code
with open('pagecounts_desktop-site_200801-202008.json') as f:
pc_desktop = json.load(f)
with open('pagecounts_mobile-site_200801-202008.json') as f:
pc_mobile = json.load(f)
with open('pageviews_desktop_201410-202008.json') as f:
pv_desktop = json.load(f)
with open('pageviews_mobile-app_201410-202008.json') as f:
pv_mobile_app = json.load(f)
with open('pageviews_mobile-web_201410-202008.json') as f:
pv_mobile_web = json.load(f)
###Output
_____no_output_____
###Markdown
Parse PageCounts Result
###Code
pc = defaultdict(Counter)
for obj in pc_desktop['items']:
ts = obj['timestamp'][:6]
pc[ts]['desktop'] += obj['count']
for obj in pc_mobile['items']:
ts = obj['timestamp'][:6]
pc[ts]['mobile'] += obj['count']
###Output
_____no_output_____
###Markdown
Parse PageViews Result
###Code
pv = defaultdict(Counter)
for obj in pv_desktop['items']:
ts = obj['timestamp'][:6]
pv[ts]['desktop'] += obj['views']
for obj in pv_mobile_app['items']:
ts = obj['timestamp'][:6]
pv[ts]['mobile'] += obj['views']
for obj in pv_mobile_web['items']:
ts = obj['timestamp'][:6]
pv[ts]['mobile'] += obj['views']
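# quick illustrative sanity check (added): month keys are 'YYYYMM' strings, and the legacy
# pagecounts and newer pageviews APIs cover different, only partially overlapping periods,
# so the two dicts need not share the same keys
print(len(pc), len(pv), len(set(pc) & set(pv)))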
###Output
_____no_output_____
###Markdown
Write result to csv file
###Code
# Figure out keys for pretty ordering in csv
months = set(pc.keys()) | set(pv.keys())
months = sorted(months)
with open('en-wikipedia_traffic_200801-202008.csv', 'w', newline='') as csvfile:
fieldnames = ['year', 'month',
'pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views',
'pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for ts in months:
dic = dict()
dic['year'] = int(ts[:4])
dic['month'] = int(ts[4:])
dic['pagecount_all_views'] = pc[ts]['desktop'] + pc[ts]['mobile']
dic['pagecount_desktop_views'] = pc[ts]['desktop']
dic['pagecount_mobile_views'] = pc[ts]['mobile']
dic['pageview_all_views'] = pv[ts]['desktop'] + pv[ts]['mobile']
dic['pageview_desktop_views'] = pv[ts]['desktop']
dic['pageview_mobile_views'] = pv[ts]['mobile']
writer.writerow(dic)
###Output
_____no_output_____
###Markdown
Step 3: Analyze and generate graph Read data
###Code
import pandas as pd
df = pd.read_csv('en-wikipedia_traffic_200801-202008.csv')
###Output
_____no_output_____
###Markdown
Draw graph
###Code
plt.figure(figsize=(20,10))
# plot only the six traffic columns so the legend labels match the plotted lines
view_cols = ['pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views',
             'pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views']
plt.plot(df[view_cols])
plt.title('Page Views on English Wikipedia')
plt.ylabel('Views (x 10000000000)')
plt.xlabel('Months since 2008-01')
plt.legend(view_cols)
plt.savefig('plot.jpg')
plt.show()
###Output
_____no_output_____ |
notebooks/Chapter 03 - Classification.ipynb | ###Markdown
Lingering questions- confusion matrix intuition- meaning of precision / recall performance measures - **Precision**: out of all positive predictions, how many were accurate - you can pretty much create a classifier with any precision you want. just set the threshold high enough. but your recall will suffer. - True positives / (True positives + false positives) - **Recall**: out of all instances that were supposed to be labeled positive, how many were? - True positives / (True positives + false negatives) Notes- OvO (One vs One) - training a binary classifier on every pair of digits- OvA (One vs All) - training a binary classifier on only 0s, only 1s, etc.. (like we did with 5s). So we train a 1 detector, 4 detector, etc.. then select the detector with the highest score for a given image as the prediction.Figure 3-3. Decision threshold and precision/recall tradeoff![Figure 3-3. Decision threshold and precision/recall tradeoff](images/3-3.png)
###Code
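# tiny worked example of the precision/recall definitions above (illustrative numbers):
# 90 true positives, 10 false positives, 30 false negatives
tp_demo, fp_demo, fn_demo = 90, 10, 30
print(tp_demo / (tp_demo + fp_demo))  # precision = 90 / (90 + 10) = 0.9
print(tp_demo / (tp_demo + fn_demo))  # recall    = 90 / (90 + 30) = 0.75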
import numpy as np
import os
import warnings
np.random.seed(42)
warnings.filterwarnings(action="ignore", category=FutureWarning)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist
X, y = mnist['data'], mnist['target']
X.shape
y.shape
sample_5 = X[36000]
sample_5_image = np.reshape(sample_5, (28, 28))
plt.imshow(sample_5_image, cmap=matplotlib.cm.binary)
plt.axis('off')
plt.show()
y[36000]
# test and train sets already split by sklearn
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
# shuffle to make sure that when we do cross validation, each fold contains all the numbers
# np.random.permutation takes an int (treated as a range from 0~) or a list and shuffles it
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
###Output
_____no_output_____
###Markdown
Training a Binary Classifier
###Code
# trying to just identify a single digit for now
y_train_5 = y_train == 5
y_test_5 = y_test == 5
from sklearn.linear_model import SGDClassifier
# 'stochastic' means 'randomly determined'
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([sample_5])
###Output
_____no_output_____
###Markdown
Performance Measures
###Code
# cross validation is one way to measure the accuracy of a model
# K-fold CV does the following
# 1. splits the training set into a given number (K) of folds
# 2. trains a model on the training fold data, and makes predictions on the fold test data
# 3. (?)evaluates those predictions on each fold using a model trained on the other folds
from sklearn.model_selection import cross_val_score
# the 'accuracy' scoring is the ratio of correct predictions
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy')
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
# Because only 10% of the images are 5s, we can get 90% accuracy just by guessing 'not 5' every time.
# Accuracy scoring is generally not the preferred way to measure classifier performance.
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring='accuracy')
# Custom implementation of cross validation with accuracy scoring
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X=X_train_folds, y=y_train_folds)
y_pred = clone_clf.predict(X=X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
###Output
0.9502
0.96565
0.96495
###Markdown
Confusion Matrix
###Code
# to compute the confusion matrix, you need a set of predictions to compare to the target values
# cross_val_predict returns the predictions (not the scores) from the cross validation
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
y_train_pred
from sklearn.metrics import confusion_matrix
# a row represents an actual class
# a column represents a predicted class
# 1307 wrongly classified as 5s
# 1077 wrongly classified as non-5s
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
###Output
_____no_output_____
###Markdown
Precision and Recall
###Code
# another metric is the accuracy of the positive predictions (called the "precision" of the classifier)
# it is used alongside "recall" which is the ratio of positive instances that are correctly detected by the classifier
from sklearn.metrics import precision_score, recall_score
precision = precision_score(y_train_5, y_train_pred)
recall = recall_score(y_train_5, y_train_pred)
print(precision) # 4344 / (4344 + 1307)
print(recall) # 4344 / (4344 + 1077)
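# illustrative cross-check (added): derive the same numbers straight from the confusion
# matrix entries (rows = actual class, columns = predicted class)
tn, fp, fn, tp = confusion_matrix(y_train_5, y_train_pred).ravel()
print(tp / (tp + fp))  # precision
print(tp / (tp + fn))  # recall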
###Output
0.7687135020350381
0.801328168234643
###Markdown
The harmonic mean of precision and recall gives you the F1 score, which is commonly used to combine precision and recall. $F_1=\dfrac{2}{ \dfrac{1}{precision} + \dfrac{1}{recall} }$
###Code
from sklearn.metrics import f1_score
# computing f1_score with sklearn
print(f1_score(y_train_5, y_train_pred))
# is the same as the following
print(2 * (precision * recall) / (precision + recall))
# try adjusting the threshold ourselves
# decision_function returns a score for each instance
# and lets us make a prediction with any threshold we want
# based on that score
y_scores = sgd_clf.decision_function([sample_5])
y_scores
threshold = 0
y_sample_5_pred = (y_scores > threshold)
y_sample_5_pred
# raising the threshold decreases recall
# when the threshold was lower, the 5 was predicted correctly
# but when we raised the threshold, the 5 was missed
threshold = 200000
y_sample_5_pred = (y_scores > threshold)
y_sample_5_pred
# here is how we can get all the decision function scores
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function')
y_scores
from sklearn.metrics import precision_recall_curve
# this will give us what we need to visualize the
# relationship between precision, recall & threshold
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], 'b--', label='Precision')
plt.plot(thresholds, recalls[:-1], 'g-', label='Recall')
plt.xlabel('Threshold')
plt.legend(loc='upper left')
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plt.xlim([-700000, 700000])
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.show()
# let's say we want to aim for 90% precision
# if you look at the graph above, it looks like we should set our threshold
# at around 70,000
y_train_pred_90 = (y_scores > 70000)
precision_score(y_true=y_train_5, y_pred=y_train_pred_90)
recall_score(y_true=y_train_5, y_pred=y_train_pred_90)
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, 'b-', linewidth=2)
plt.xlabel('Recall', fontsize=16)
plt.ylabel('Precision', fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
plt.show()
###Output
_____no_output_____
###Markdown
The ROC Curve
###Code
# receiver operating characteristic (ROC) curve is common tool used with binary classifiers
# it plots the 'true positive rate' (recall) against the 'false positive rate'
# FPR = negative instances that are incorrectly classified as positive (equal to 1 - TNR)
# TNR (also called 'specificity') = ratio of negative instances correctly classified as negative
# so, ROC plots 'sensitivity (ie Recall)' versus (1 - specificity)
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
fpr, tpr, thresholds
# you should use PR curve when the positive class is rare,
# or you care more about the false positives than the false negatives;
# otherwise, use the ROC curve
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
    # this line shows the ROC curve that a completely random classifier would produce
# good classifiers stay as far away from this line as possible (to the top left)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate (Recall)')
plot_roc_curve(fpr, tpr)
plt.show()
# AUC stands for 'area under the curve'
# a perfect classifier score would be 1
# a purely random classifier score would be 0.5
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
# RandomForestClassifier doesn't have 'decision_function';
# instead it uses 'predict_proba' which returns the probability
# that an instance belongs to the given class
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method='predict_proba')
y_probas_forest
# probability of the positive class
# (because we need the scores to plot ROC)
y_scores_forest = y_probas_forest[:, 1]
y_scores_forest
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
fpr_forest, tpr_forest, thresholds_forest
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, 'b:', linewidth=2, label='SGD')
plot_roc_curve(fpr_forest, tpr_forest, 'Random Forest')
plt.legend(loc='lower right', fontsize=16)
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
###Output
_____no_output_____
###Markdown
As you can see, the Random Forest curve looks better than the SGD curve. It's a lot closer to the top left corner. The ROC AUC score is also better.
###Code
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
y_train_pred_forest
precision = precision_score(y_train_5, y_train_pred_forest)
recall = recall_score(y_train_5, y_train_pred_forest)
precision, recall
###Output
_____no_output_____
###Markdown
Multiclass Classification
###Code
sgd_clf.fit(X_train, y_train) # using y_train (not y_train_5)
sgd_clf.predict([sample_5])
# Under the hood, scikit-learn actually did OvA and trained 10 binary
# classifiers and selected the one with the highest score for the image.
# You can see the scores here! (Look at index 5!)
sample_5_scores = sgd_clf.decision_function([sample_5])
sample_5_scores
np.argmax(sample_5_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
forest_clf.fit(X_train, y_train)
forest_clf.predict([sample_5])
# Random forest does not use OvA or OvO because it classifies directly
# into multiple classes. You can see the probabilities assigned for each
# class for a given instance.
# Notice how the probability at index 5 is the highest
# the random forest classifier model estimates an 80% probability it's a 5
forest_clf.predict_proba([sample_5])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring='accuracy')
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
X_train[0], X_train_scaled[0]
# Simply scaling the values like we did in Chapter 2 raises our accuracy to 90%
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring='accuracy')
###Output
_____no_output_____
###Markdown
Error Analysis
###Code
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
y_train[:5], y_train_pred[:5]
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
# We can look at the confusion matrix with matplotlib
# This plot looks good because most images are on the main diagonal axis
# meaning they were classified correctly
# Notice how 5 is slightly darker. This could be because there are fewer 5s in the dataset
# or the classifier does not perform as well on 5s
plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.show()
# Let's focus on the error rate
# To get the error rate, divide each value in the confusion matrix
# by the number of images in the corresponding class (value count)
row_sums = conf_mx.sum(axis=1, keepdims=True)
row_sums
import pandas as pd
pd.Series(y_train).value_counts(sort=False)
norm_conf_mx = conf_mx / row_sums
norm_conf_mx
# https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.fill_diagonal.html
# >>> a = np.zeros((3, 3, 3), int)
# >>> a
# array([[[0, 0, 0],
# [0, 0, 0],
# [0, 0, 0]],
# [[0, 0, 0],
# [0, 0, 0],
# [0, 0, 0]],
# [[0, 0, 0],
# [0, 0, 0],
# [0, 0, 0]]])
# >>> np.fill_diagonal(a, 4)
# >>> a
# array([[[4, 0, 0],
# [0, 0, 0],
# [0, 0, 0]],
# [[0, 0, 0],
# [0, 4, 0],
# [0, 0, 0]],
# [[0, 0, 0],
# [0, 0, 0],
# [0, 0, 4]]])
np.fill_diagonal(norm_conf_mx, 0)
norm_conf_mx
# Rows represent actual classes
# Columns represent predicted classes
#
# Look at row 5, column 8. It is white.
# This means there are quite a few 5s that are classified as 8s
# The opposite is not always true however (row 8 col 5 is not as bright)
#
# 3s and 5s looks like they're confused a lot too
#
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
# Taken directly from
# https://github.com/ageron/handson-ml/blob/master/03_classification.ipynb
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
# Let's analyze individual errors
# TL(221): actual(3) pred(3)
# TR(222): actual(3) pred(5)
# BL(223): actual(5) pred(3)
# BR(224): actual(5) pred(5)
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8, 8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
plt.show()
###Output
_____no_output_____
###Markdown
Multilabel Classification
Outputting multiple binary labels (class X yes/no) for each instance is called "Multilabel Classification." The example used in the book is identifying multiple people in a single photo. Instead of labeling 1 person for each photo, we want our classifier to output multiple people for each photo. Ex. if we trained a classifier to recognize Alice, Bob and Charlie, and only Alice and Charlie are in the instance photo, the output should be `[1, 0, 1]`
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
# Create 2 target labels for each image
# 1. whether digit is large (7, 8 or 9)
# 2. whether digit is odd
y_multilabel = np.c_[y_train_large, y_train_odd]
y_multilabel
# Not all classifiers support multilabel classification,
# but KNeighborsClassifier does
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
# Now feed the classifier an instance, and see how it outputs 2 binary labels!
# 1. 5 is not large (correct!)
# 2. 5 is an odd number (correct!)
knn_clf.predict([sample_5])
# One approach to check how well your multilabel classifier performs
# is to take the average of all the individual binary label F1 scores.
# y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_train, cv=3, verbose=True, n_jobs=4)
# average='macro' assumes all labels are equally important
# macro_f1_score = f1_score(y_pred=y_train_knn_pred, y_true=y_train, average='macro')
# average='weighted' gives each label a weight equal to its "support"
# (ie. the number of instance with that target label)
# weighted_f1_score = f1_score(y_pred=y_train_knn_pred, y_true=y_train, average='weighted')
# See sklearn docs for more averaging options and multilabel classifier metrics..
###Output
_____no_output_____
###Markdown
Multioutput Classification
Each label can be multiclass. An image is a good example of how this would be used. Each pixel is a label, and each label can be 0-255. Below is an example of removing noise from an image. The training data is the noise-ified image, and the labels are the original (clean) image.
###Code
noise_X_train = np.random.randint(0, 100, (len(X_train), 784))
noise_X_test = np.random.randint(0, 100, (len(X_test), 784))
X_train_mod = X_train + noise_X_train
X_test_mod = X_test + noise_X_test
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121)
plt.imshow(X_test_mod[some_index].reshape((28, 28)), cmap=matplotlib.cm.binary)
plt.axis('off')
plt.subplot(122)
plt.imshow(y_test_mod[some_index].reshape((28, 28)), cmap=matplotlib.cm.binary)
plt.axis('off')
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
# outputting an image with (28 * 28) labels (each assigned to a class between 0-255)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plt.imshow(clean_digit.reshape((28, 28)), cmap=matplotlib.cm.binary)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Exercises
1. Try to build a classifier for the MNIST dataset that achieves over 97% accuracy on the test set. Hint: the KNeighborsClassifier works quite well for this task; you just need to find good hyperparameter values (try a grid search on the weights and n_neighbors hyperparameters).
###Code
import mnist
X_train = mnist.train_images().reshape(-1, 28 ** 2)
X_test = mnist.test_images().reshape(-1, 28 ** 2)
y_train = mnist.train_labels()
y_test = mnist.test_labels()
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Model prepared in tasks/chapter_3_exercise_1.py (training takes up to a full day)
#
# grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, cv=5, verbose=5)
# grid_search.fit(X_train, y_train)
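# For reference, a plausible grid for the hint above (an assumption; the exact
# grid used to produce chapter_3_exercise_1.joblib may have differed):
# param_grid = [{'weights': ['uniform', 'distance'], 'n_neighbors': [3, 4, 5]}]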
from sklearn.externals import joblib
# joblib file compressed for smaller repo size
from zipfile import ZipFile
import os
if not os.path.isfile('tasks/chapter_3_exercise_1.joblib'):
with ZipFile('tasks/chapter_3_exercise_1.joblib.zip', 'r') as zf:
zf.extractall(path='tasks')
grid_search = joblib.load(open('tasks/chapter_3_exercise_1.joblib', 'rb'))
grid_search
grid_search.best_score_
grid_search.best_params_
###Output
_____no_output_____
###Markdown
2. Write a function that can shift an MNIST image in any direction (left, right, up, or down) by one pixel. Then, for each image in the training set, create four shifted copies (one per direction) and add them to the training set. Finally, train your best model on this expanded training set and measure its accuracy on the test set. You should observe that your model performs even better now! This technique of artificially growing the training set is called data augmentation or training set expansion.
###Code
from utils import functions
import matplotlib.pyplot as plt
sample_ix = 6978 # random.randint(0, len(X_train))
original_img = X_train[sample_ix].reshape((28, 28))
up_img = functions.shift_mnist_image(original_img, 'up', 10)
left_img = functions.shift_mnist_image(original_img, 'left', 10)
right_img = functions.shift_mnist_image(original_img, 'right', 10)
down_img = functions.shift_mnist_image(original_img, 'down', 10)
plt.figure(figsize=(8, 8))
plt.subplot(231)
plt.imshow(original_img, cmap='binary')
plt.title('original')
plt.axis('off')
plt.subplot(232)
plt.imshow(up_img, cmap='binary')
plt.title('shifted up 10px')
plt.axis('off')
plt.subplot(233)
plt.imshow(left_img, cmap='binary')
plt.title('shifted left 10px')
plt.axis('off')
plt.subplot(234)
plt.imshow(right_img, cmap='binary')
plt.title('shifted right 10px')
plt.axis('off')
plt.subplot(235)
plt.imshow(down_img, cmap='binary')
plt.title('shifted down 10px')
plt.axis('off')
X_train_expanded = []
y_train_expanded = []
for i in range(len(X_train)):
train_img = X_train[i]
train_lbl = y_train[i]
X_train_expanded.append(train_img)
y_train_expanded.append(train_lbl)
up = functions.shift_mnist_image(train_img.reshape((28, 28)), 'up').reshape(-1)
right = functions.shift_mnist_image(train_img.reshape((28, 28)), 'right').reshape(-1)
down = functions.shift_mnist_image(train_img.reshape((28, 28)), 'down').reshape(-1)
left = functions.shift_mnist_image(train_img.reshape((28, 28)), 'left').reshape(-1)
X_train_expanded.append(up); y_train_expanded.append(train_lbl);
X_train_expanded.append(right); y_train_expanded.append(train_lbl);
X_train_expanded.append(down); y_train_expanded.append(train_lbl);
X_train_expanded.append(left); y_train_expanded.append(train_lbl);
from sklearn.neighbors import KNeighborsClassifier
knn_clf_params = {**grid_search.best_params_, **{'n_jobs': -1}}
knn_clf = KNeighborsClassifier(**knn_clf_params)
from datetime import datetime
start_time = datetime.now()
print('started at %s' % start_time.strftime('%H:%M:%S'))
knn_clf.fit(X_train_expanded, y_train_expanded)
end_time = datetime.now()
print('finished at %s' % end_time.strftime('%H:%M:%S'))
print('duration: %s seconds' % (end_time - start_time).seconds)
from datetime import datetime
start_time = datetime.now()
print('started at %s' % start_time.strftime('%H:%M:%S'))
y_pred = knn_clf.predict(X_test)
end_time = datetime.now()
print('finished at %s' % end_time.strftime('%H:%M:%S'))
print('duration: %s seconds' % (end_time - start_time).seconds)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
###Output
_____no_output_____ |
Unit_2_Portfolio_Project_Orlando_FL_Housing_Data.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('merged_data.csv', parse_dates=['SOLD DATE'])
df = df.rename(columns={'URL (SEE http://www.redfin.com/buy-a-home/comparative-market-analysis FOR INFO ON PRICING)': 'URL'})
df.shape
# Drop constant columns
df = df.drop(columns='PROPERTY TYPE')
# Drop rows that have NaN value for city
df = df.drop(df[df['CITY'].isnull()].index)
# Drop rows that have NaN value for square feet
df = df.drop(df[df['SQUARE FEET'].isnull()].index)
# Drop rows that have NaN value for lot size
df = df.drop(df[df['LOT SIZE'].isnull()].index)
# Drop high cardinality categorical columns
df = df.drop(columns='LOCATION')
# Drop 'DAYS ON MARKET' column - this is a separate factor not directly impacting price
df = df.drop(columns='DAYS ON MARKET')
# Convert zip code column to categorical data
df['Zip'] = df['Zip'].astype(str)
# fill HOA/MONTH NaN values with zeros
df['HOA/MONTH'] = df['HOA/MONTH'].fillna(0)
# Drop 2 properties that have a year built of 2022
indexes_to_drop_final = list(df['YEAR BUILT'].nlargest(2).index)
indexes_to_drop_final
df = df.drop(indexes_to_drop_final)
# Drop 6 outliers with unreasonable HOA monthly fees
indexes_to_drop_final = list(df['HOA/MONTH'].nlargest(6).index)
indexes_to_drop_final
df = df.drop(indexes_to_drop_final)
df = df.reset_index(drop=True)
df.head()
df.info()
df.to_csv('cleaned_data.csv')
df.describe()
# Fun map to look at
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(12,10)})
sns.scatterplot(x='LONGITUDE', y='LATITUDE', data=df)
plt.show()
import requests
import folium
from folium.plugins import MarkerCluster
# Set google maps API key and base url
API_KEY = '' # This has been removed after running the code
base_url = 'https://maps.googleapis.com/maps/api/geocode/json?'
# Create central lat and lng point for folium map visual
cvs_orlando_address = '1201 E Colonial Dr, Orlando, FL 32803'
params = {
'key': API_KEY,
'address': cvs_orlando_address
}
base_url = 'https://maps.googleapis.com/maps/api/geocode/json?'
response = requests.get(base_url, params).json()
if response['status'] == 'OK':
orlandolat = response['results'][0]['geometry']['location']['lat']
orlandolng = response['results'][0]['geometry']['location']['lng']
# Create map using folium module
m = folium.Map(location = [orlandolat, orlandolng], zoom_start = 7)
# Add property location markers to the map
for each in range(50):
folium.Marker(location=[df['LATITUDE'][each], df['LONGITUDE'][each]], icon=folium.Icon(color='blue')).add_to(m)
m
df.head()
###Output
_____no_output_____
###Markdown
Build Models
###Code
X = df.drop(columns=['SOLD DATE', 'ADDRESS', 'CITY', 'State', 'Zip', 'PRICE', '$/SQUARE FEET', 'URL'])
y = df['PRICE']
X.head()
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=.15, random_state=8675309)
from sklearn.metrics import mean_absolute_error
baseline_predictions = [y_train.mean()] * len(y_train)
baseline_error = mean_absolute_error(y_train, baseline_predictions)
print('The baseline MAE is:', baseline_error)
print('The mean house price is:', y_train.mean())
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
model_rf = make_pipeline(
SimpleImputer(),
RandomForestRegressor()
)
model_rf.fit(X_train, y_train)
from sklearn.linear_model import LinearRegression
model_lr = make_pipeline(
SimpleImputer(),
LinearRegression()
)
model_lr.fit(X_train, y_train)
from sklearn.linear_model import Ridge
model_rr = make_pipeline(
SimpleImputer(),
Ridge()
)
model_rr.fit(X_train, y_train)
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
# define the pipeline and train model
model_pr = Pipeline([('poly', PolynomialFeatures(degree=2)),
('linear', LinearRegression(fit_intercept=False))])
model_pr.fit(X_train, y_train)
# Hyperparameter tuning for the RandomForestRegressor
from sklearn.model_selection import cross_val_score, validation_curve # k-fold CV
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV # Hyperparameter tuning
param_grid = {
"randomforestregressor__max_depth": [5, 10, 15, 20, 25, 30, 35],
"randomforestregressor__n_estimators": [25, 50, 75, 100, 125, 150],
'simpleimputer__strategy': ['mean', 'median']
}
model_rf_tuned = RandomizedSearchCV(model_rf,
param_grid,
n_iter=10,
n_jobs=-1,
cv=3,
verbose=1)
model_rf_tuned.fit(X_train, y_train)
model_rf_tuned.best_estimator_
print('RandomForestRegressor score is:', model_rf.score(X_val, y_val))
print('LinearRegression score is:', model_lr.score(X_val, y_val))
print('Ridge Regression score is:', model_rr.score(X_val, y_val))
print('Polynomial Regression score is:', model_pr.score(X_val, y_val))
print('RandomForestRegressor score with hyperparameter tuning is:', model_rf_tuned.score(X_val, y_val))
import joblib
joblib.dump(model_rf_tuned, 'pipe.joblib')
lr_slopes = pd.DataFrame(data=model_rf.named_steps.randomforestregressor.feature_importances_, index=X_train.columns, columns=['Slope'])
lr_slopes
mean_absolute_error(y_val, model_rf.predict(X_val))
X.head()
coefficients = model_rf.named_steps.randomforestregressor.feature_importances_
features = X.columns
pd.Series(data=coefficients, index=features).sort_values(key=abs).plot(kind='barh')
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/Python class-checkpoint.ipynb | ###Markdown
What we will build today
A library for getting currency exchange rates in 2 lines
###Code
from libs.exchange import Rate
Rate().usd()
Rate('full').AZN()
###Output
_____no_output_____
###Markdown
Task
Given a string with values separated by commas:
###Code
line = '2019-07-01,organic,4293'
###Output
_____no_output_____
###Markdown
Write a function column_count that returns the number of columns in such a string
###Code
len('2019-07-01,organic,4293'.split(','))
def line_count(line, count_empty_lines=True, sep=','):
if count_empty_lines:
return len(line.split(sep))
else:
if line:
return len(line.split(sep))
else:
return 0
line_count(line)
line_count('2019-07-01')
line_count('', count_empty_lines=False)
###Output
_____no_output_____
###Markdown
Classes
An example of using a variable across different methods
###Code
class AnyName:
def method_1(self):
self.currency = 'usd'
def method_2(self):
print(self.currency)
a = AnyName()
from datetime import datetime
datetime.strptime
a.method_1()
a.currency
b = AnyName()
b.method_2()
a.currency
a.method_2()
###Output
usd
###Markdown
The __init__ method
###Code
# the __init__ method runs when the class is called (instantiated)
class Rate:
def __init__(self):
self.format = 'value'
print(self.format)
r = Rate()
r.format
###Output
_____no_output_____
###Markdown
A class for currency exchange rates
###Code
class Rate:
def __init__(self):
self.format = 'value'
def show_current_format(self):
return self.format
r = Rate()
r.show_current_format()
###Output
_____no_output_____
###Markdown
An example of initialization with a variable value
###Code
class Rate:
def __init__(self, format_):
self.format = format_
def show_current_format(self):
return self.format
r = Rate(format_='value')
r.show_current_format()
###Output
_____no_output_____
###Markdown
Or with a default value right away
###Code
class Rate:
def __init__(self, format_='value'):
self.format = format_
def show_current_format(self):
return self.format
r = Rate()
r.show_current_format()
r_full = Rate(format_='full')
r_full.show_current_format()
r.format
# class attribute values can be changed
r.format = 'full_123'
r.show_current_format()
###Output
_____no_output_____
###Markdown
Task
Create an Employee class. When the class is initialized, it takes the employee's name (name) and current salary (salary). Write the following methods:
1. An up method that increases the employee's salary by 100
2. A print method that prints the employee's current salary in the format "Сотрудник Иван, зарплата 100"
###Code
class Employee:
def __init__(self, name, salary):
self.name = name
self.salary = salary
def up(self):
self.salary += 100
def print_(self):
print('Сотрудник {}, зарплата {}'.format(self.name, self.salary))
ivan = Employee(name='Иван', salary=50)
ivan.salary
ivan.up()
ivan.print_()
elena = Employee(name='Елена', salary=300)
elena.up()
elena.print_()
###Output
Сотрудник Елена, зарплата 1000
###Markdown
The full version of the class
###Code
import requests
class Rate:
def __init__(self, format_='value'):
self.format = format_
def exchange_rates(self):
"""
        Returns the service response with currency information in the form:
{
'AMD': {
'CharCode': 'AMD',
'ID': 'R01060',
'Name': 'Армянских драмов',
'Nominal': 100,
'NumCode': '051',
'Previous': 14.103,
'Value': 14.0879
},
...
}
"""
self.r = requests.get('https://www.cbr-xml-daily.ru/daily_json.js')
return self.r.json()['Valute']
def make_format(self, currency):
"""
        Returns information about the currency in two formats:
        - full information about the currency when self.format = 'full':
Rate('full').make_format('EUR')
{
'CharCode': 'EUR',
'ID': 'R01239',
'Name': 'Евро',
'Nominal': 1,
'NumCode': '978',
'Previous': 79.6765,
'Value': 79.4966
}
Rate('value').make_format('EUR')
79.4966
"""
response = self.exchange_rates()
if currency in response:
if self.format == 'full':
return response[currency]
if self.format == 'value':
return response[currency]['Value']
return 'Error'
def eur(self):
"""Возвращает курс евро на сегодня в формате self.format"""
return self.make_format('EUR')
def usd(self):
"""Возвращает курс доллара на сегодня в формате self.format"""
return self.make_format('USD')
def brl(self):
"""Возвращает курс бразильского реала на сегодня в формате self.format"""
return self.make_format('BRL')
r = Rate(format_='full')
r.exchange_rates()
r.usd()
r = Rate()
r.usd()
r.exchange_rates()
###Output
_____no_output_____
###Markdown
Documentation is needed for almost every method
###Code
?r.exchange_rates
###Output
_____no_output_____
###Markdown
Inheritance
Besides exchange rates, developers in the finance department also need to work with currency codes. How do we keep the development of the Rate class on our side while handing the useful functionality over to the finance team?
###Code
class CurrencyCodes(Rate):
def __init__(self):
super().__init__(format_='full')
###Output
_____no_output_____
###Markdown
Now all methods of the Rate class are available to the CurrencyCodes class. We can continue development in the direction we need.
###Code
cc = CurrencyCodes()
CurrencyCodes().usd()
###Output
_____no_output_____
###Markdown
Let's add something new of our own to the class
###Code
class CurrencyCodes(Rate):
def __init__(self):
super().__init__(format_='full')
def currency_id(self, currency):
"""Получение идентификатора валюты"""
return self.make_format(currency)['ID']
currency = CurrencyCodes()
currency.currency_id('USD')
###Output
_____no_output_____
###Markdown
An employee promotion system
###Code
class Employee:
def __init__(self, name, seniority):
self.name = name
self.seniority = seniority
self.grade = 1
def grade_up(self):
"""Повышает уровень сотрудника"""
self.grade += 1
def publish_grade(self):
"""Публикация результатов аккредитации сотрудников"""
print(self.name, self.grade)
def check_if_it_is_time_for_upgrade(self):
pass
class Developer(Employee):
def __init__(self, name, seniority):
super().__init__(name, seniority)
def check_if_it_is_time_for_upgrade(self):
        # increase the counter by 1 for each accreditation
        # for now we assume that every developer passes the accreditation
self.seniority += 1
        # promotion condition from the presentation
if self.seniority % 5 == 0:
self.grade_up()
        # publish the results
return self.publish_grade()
# check how the promotion system works using the development department as an example
# the developer Александр has just joined the company
alex = Developer('Александр', 0)
for i in range(20):
alex.check_if_it_is_time_for_upgrade()
###Output
Александр 1
Александр 1
Александр 1
Александр 1
Александр 2
Александр 2
Александр 2
Александр 2
Александр 2
Александр 3
Александр 3
Александр 3
Александр 3
Александр 3
Александр 4
Александр 4
Александр 4
Александр 4
Александр 4
Александр 5
###Markdown
Importing classes and functions
###Code
from libs.exchange import my_sum
my_sum(1, 2)
from libs.exchange import Rate
Rate().AZN()
# this style of import is strongly discouraged
from libs.exchange import *
###Output
_____no_output_____
###Markdown
If the library lives in an arbitrary folder
###Code
import sys
sys.path
# example
import sys
sys.path.append('/Users/kbashevoy/Desktop/Нетология/Занятия/Записи/06. Классы в python/libs')
from exchange import my_sum
my_sum(3, 3)
###Output
_____no_output_____ |
MonteCarloSimExample_diceRoll.ipynb | ###Markdown
Edit input variables and view results of better odds
Note that the double bettor is capable of providing larger returns (black line), but is likewise capable of providing larger losses in comparison to the simple bettor (cyan line).
###Code
# edit these input variables and run/view results:
sampleSize = 50
startingFunds = 10000
wagerSize = 200
wagerCount = 100
x = 0
while x < sampleSize:
simple_bettor(startingFunds,wagerSize,wagerCount,'c')
#simple_bettor(startingFunds,wagerSize*2,wagerCount,'c')
doubler_bettor(startingFunds,wagerSize,wagerCount,'k')
x+=1
plt.ylabel('Account Value')
plt.xlabel('Wager Count')
plt.axhline(startingFunds, color = 'r', linewidth = 2)
###Output
_____no_output_____
###Markdown
Compare results:
###Code
import random
import matplotlib
import matplotlib.pyplot as plt
plt.style.use('ggplot')
% matplotlib inline
import time
# functions
'''
basic roll function
'''
def rollDice():
roll = random.randint(1,100)
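    # rolls of 51-99 count as wins (49 of 100 outcomes); 100 and anything 50 or
    # below lose, which is what gives the house its edge in this simulation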
if roll == 100:
return False
elif roll <= 50:
return False
elif 100 > roll >= 50:
return True
'''
double bettor option
'''
def doubler_bettor(funds,initial_wager,wager_count,color):
global doubler_busts
#####################
global doubler_profits
value = funds
wager = initial_wager
wX = []
vY = []
currentWager = 1
previousWager = 'win'
previousWagerAmount = initial_wager
while currentWager <= wager_count:
if previousWager == 'win':
if rollDice():
value += wager
wX.append(currentWager)
vY.append(value)
else:
value -= wager
previousWager = 'loss'
previousWagerAmount = wager
wX.append(currentWager)
vY.append(value)
if value < 0:
currentWager += 10000000000000000
doubler_busts += 1
elif previousWager == 'loss':
if rollDice():
wager = previousWagerAmount * 2
if (value - wager) < 0:
wager = value
value += wager
wager = initial_wager
previousWager = 'win'
wX.append(currentWager)
vY.append(value)
else:
wager = previousWagerAmount * 2
if (value - wager) < 0:
wager = value
value -= wager
previousWager = 'loss'
previousWagerAmount = wager
wX.append(currentWager)
vY.append(value)
if value <= 0:
currentWager += 10000000000000000
doubler_busts += 1
currentWager += 1
plt.plot(wX,vY,color)
#####################
if value > funds:
doubler_profits+=1
'''
Simple bettor, betting the same amount each time.
'''
def simple_bettor(funds,initial_wager,wager_count,color):
global simple_busts
#####################
global simple_profits
value = funds
wager = initial_wager
wX = []
vY = []
currentWager = 1
while currentWager <= wager_count:
if rollDice():
value += wager
wX.append(currentWager)
vY.append(value)
else:
value -= wager
wX.append(currentWager)
vY.append(value)
if value <= 0:
currentWager += 10000000000000000
simple_busts +=1
currentWager += 1
plt.plot(wX,vY,color)
#####################
if value > funds:
simple_profits+=1
x = 0
simple_busts = 0.0
doubler_busts = 0.0
#####################
simple_profits = 0.0
doubler_profits = 0.0
while x < sampleSize:
simple_bettor(startingFunds,wagerSize,wagerCount,'c.')
#simple_bettor(startingFunds,wagerSize*2,wagerCount,'c')
doubler_bettor(startingFunds,wagerSize,wagerCount,'k')
x+=1
print(('Simple Bettor Bust Chances:', (simple_busts/sampleSize)*100.00))
print(('Doubler Bettor Bust Chances:', (doubler_busts/sampleSize)*100.00))
print (('Simple Bettor Profit Chances:', (simple_profits/sampleSize)*100.00))
print(('Doubler Bettor Profit Chances:', (doubler_profits/sampleSize)*100.00))
plt.axhline(0, color = 'r', linewidth = 3)
plt.axhline(startingFunds, color = 'r', linewidth = 2)
plt.ylabel('Account Value')
plt.xlabel('Wager Count')
plt.show()
###Output
('Simple Bettor Bust Chances:', 0.0)
('Doubler Bettor Bust Chances:', 56.00000000000001)
('Simple Bettor Profit Chances:', 24.0)
('Doubler Bettor Profit Chances:', 38.0)
|
Udacity-DL/dlnd_tv_script_generation.ipynb | ###Markdown
TV Script Generation
In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
###Output
_____no_output_____
###Markdown
Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
###Code
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555
The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.
###Markdown
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
###Code
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
token_up = {}
token_up['.'] = "||Period||"
token_up[','] = "||Comma||"
token_up['--'] = "||Dash||"
token_up[')'] = "||Right_Parentheses||"
token_up['"'] = "||Quotation_Mark||"
token_up['\n'] = "||Return||"
token_up['!'] = "||Exclamation_mark||"
token_up['('] = "||Left_Parentheses||"
token_up[';'] = "||Semicolon||"
token_up['?'] = "||Question_mark||"
return token_up
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.1.0
###Markdown
Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
###Code
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
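    # A possible sketch, left as comments so the TODO stays visible
    # (assumes the TF 1.x placeholder API used elsewhere in this notebook):
    # inputs = tf.placeholder(tf.int32, [None, None], name='input')
    # targets = tf.placeholder(tf.int32, [None, None], name='targets')
    # learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    # return inputs, targets, learning_rate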
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
###Output
_____no_output_____
###Markdown
Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCellzero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the cell and initial state in the following tuple `(Cell, InitialState)`
###Code
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
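    # One possible sketch following the instructions above (tf.contrib.rnn, TF 1.x):
    # lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    # cell = tf.contrib.rnn.MultiRNNCell([lstm])
    # initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
    # return cell, initial_state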
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
###Output
_____no_output_____
###Markdown
Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
###Code
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
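    # A possible sketch (embedding matrix initialized uniformly, TF 1.x API):
    # embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    # return tf.nn.embedding_lookup(embedding, input_data)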
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
###Output
_____no_output_____
###Markdown
Build RNN
You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
- Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)
Return the outputs and final state in the following tuple `(Outputs, FinalState)`
###Code
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
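    # A possible sketch following the instructions above:
    # outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # final_state = tf.identity(final_state, name='final_state')
    # return outputs, final_state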
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
###Output
_____no_output_____
###Markdown
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.
- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.
- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
###Code
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
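    # A possible sketch that chains the helper functions above; the linear output
    # layer uses the TF 1.x contrib API (an assumption, other layers would work too):
    # embed = get_embed(input_data, vocab_size, embed_dim)
    # outputs, final_state = build_rnn(cell, embed)
    # logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    # return logits, final_state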
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
###Output
_____no_output_____
###Markdown
Batches
Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:
- The first element is a single batch of **input** with the shape `[batch size, sequence length]`
- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`
If you can't fill the last batch with enough data, drop the last batch.
For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)` would return a Numpy array of the following:
```
[
  First Batch
  [
    Batch of Input
    [[ 1 2 3], [ 7 8 9]],
    Batch of targets
    [[ 2 3 4], [ 8 9 10]]
  ],
  Second Batch
  [
    Batch of Input
    [[ 4 5 6], [10 11 12]],
    Batch of targets
    [[ 5 6 7], [11 12 13]]
  ]
]
```
###Code
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
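    # One possible sketch: reshape into batch_size rows, then split along the
    # sequence axis. Here the final target wraps around to the first input word,
    # which is an assumption (the example above keeps the true next word instead):
    # n_batches = len(int_text) // (batch_size * seq_length)
    # xdata = np.array(int_text[:n_batches * batch_size * seq_length])
    # ydata = np.roll(xdata, -1)
    # x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    # y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    # return np.array(list(zip(x_batches, y_batches)))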
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
###Output
_____no_output_____
###Markdown
Neural Network Training Hyperparameters
Tune the following parameters:
- Set `num_epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `embed_dim` to the size of the embedding.
- Set `seq_length` to the length of sequence.
- Set `learning_rate` to the learning rate.
- Set `show_every_n_batches` to the number of batches the neural network should print progress.
###Code
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
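# Typical starting values for this project (assumptions, not tuned results):
# num_epochs = 100, batch_size = 128, rnn_size = 512, embed_dim = 256,
# seq_length = 16, learning_rate = 0.01, show_every_n_batches = 50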
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
###Output
_____no_output_____
###Markdown
Build the Graph
Build the graph using the neural network you implemented.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
###Output
_____no_output_____
###Markdown
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
###Output
_____no_output_____
###Markdown
Save Parameters
Save `seq_length` and `save_dir` for generating a new TV script.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
###Output
_____no_output_____
###Markdown
Implement Generate Functions
Get Tensors
Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graphget_tensor_by_name). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`
###Code
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
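    # A possible sketch using the tensor names listed above:
    # input_tensor = loaded_graph.get_tensor_by_name('input:0')
    # initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
    # final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
    # probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
    # return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor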
return None, None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
###Output
_____no_output_____
###Markdown
Choose Word
Implement the `pick_word()` function to select the next word using `probabilities`.
###Code
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
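    # A possible sketch: sample a word id weighted by the predicted probabilities
    # (np.argmax would also work, but tends to produce repetitive scripts):
    # return int_to_vocab[np.random.choice(len(probabilities), p=probabilities)]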
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
###Output
_____no_output_____
###Markdown
Generate TV Script
This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
###Code
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
###Output
_____no_output_____ |
notebooks/topcount/atoti.ipynb | ###Markdown
Ways to visualize top count with atoti\[_In case you’re unable to see the atoti visualizations in GitHub, try viewing the notebook in [nbviewer](https://nbviewer.org/github/atoti/notebooks/blob/master/notebooks/topcount/atoti.ipynb)._]Given different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
session = tt.create_session(config={"user_content_storage": "./content", "port": 55707})
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
table_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
process_quotes=True,
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h, l, m = cube.hierarchies, cube.levels, cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
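###Markdown
As an aside (not in the original notebook), the same measure can also be pulled into a pandas DataFrame programmatically, assuming the installed atoti version exposes `Cube.query`:
###Code
# Query the measure at the Category level; this is expected to return a pandas DataFrame.
cube.query(m["Count with rating 5"], levels=[l["Category"]])
###Output
_____no_output_____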
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on in a multilevel hierarchy from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and add the `Category` level to the subplot section. Slice the pie chart by `Apps` and apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.link(path="/#/dashboard/767")
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
session = tt.create_session(config={"user_content_storage": "./content"})
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
table_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h, l, m = cube.hierarchies, cube.levels, cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and click on the ellipsis to show the subplot controls. ![show subplot controls](https://data.atoti.io/notebooks/topcount/show_subplot_controls.png) You should be able to add the `Category` level to the subplot section, sliced by `Apps`. Apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.link(path="/#/dashboard/767")
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
session = tt.create_session(config={"user_content_storage": "./content"})
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
table_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
process_quotes=True,
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h, l, m = cube.hierarchies, cube.levels, cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on in a multilevel hierarchy from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and add the `Category` level to the subplot section. Slice the pie chart by `Apps` and apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.link(path="/#/dashboard/767")
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
session = tt.create_session(config={"user_content_storage": "./content"})
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
table_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
process_quotes=True,
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h, l, m = cube.hierarchies, cube.levels, cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and click on the ellipsis to show the subplot controls. ![show subplot controls](https://data.atoti.io/notebooks/topcount/show_subplot_controls.png) You should be able to add the `Category` level to the subplot section, sliced by `Apps`. Apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.link(path="/#/dashboard/767")
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
from atoti.config import create_config
config = create_config(metadata_db="./metadata.db")
session = tt.create_session(config=config)
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
store_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
sampling_mode=tt.sampling.FULL,
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h = cube.hierarchies
l = cube.levels
m = cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and click on the ellipsis to show the subplot controls. ![show subplot controls](https://data.atoti.io/notebooks/topcount/show_subplot_controls.png) You should be able to add the `Category` level to the subplot section, sliced by `Apps`. Apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.url + "/#/dashboard/767"
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
from atoti.config import create_config
config = create_config(metadata_db="./metadata.db")
session = tt.create_session(config=config)
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
store_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
sampling_mode=tt.sampling.FULL,
types={"Reviews": tt.types.FLOAT, "Installs": tt.types.FLOAT},
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
cube.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h = cube.hierarchies
l = cube.levels
m = cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
cube.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
cube.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on from the atoti editor.
###Code
cube.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and click on the ellipsis to show the subplot controls. ![show subplot controls](https://data.atoti.io/notebooks/topcount/show_subplot_controls.png) You should be able to add the `Category` level to the subplot section, sliced by `Apps`. Apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
cube.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.url + "/#/dashboard/767"
###Output
_____no_output_____
###Markdown
Ways to visualize top count with atotiGiven different categories of items, we will explore how to achieve the following with atoti:- Visualize the top 10 apps with the highest rating in a table- Visualize the top 10 categories with the most apps rated 5 in a pie chart- Visualize the top 10 apps for each category in subplotsSee [pandas.ipynb](pandas.ipynb) for how to achieve a similar top count with Pandas.__Note on data:__We are using the [Google Play Store Apps data](https://www.kaggle.com/lava18/google-play-store-apps) from Kaggle. The data has been processed to convert strings with millions and thousands abbreviations into numeric data. Top count with atoti
###Code
import atoti as tt
from atoti.config import create_config
config = create_config(metadata_db="./metadata.db")
session = tt.create_session(config=config)
playstore = session.read_csv(
"s3://data.atoti.io/notebooks/topcount/googleplaystore_cleaned.csv",
store_name="playstore",
keys=["App", "Category", "Genres", "Current Ver"],
sampling_mode=tt.sampling.FULL,
types={"Reviews": tt.type.FLOAT, "Installs": tt.type.FLOAT},
)
playstore.head()
cube = session.create_cube(playstore, "Google Playstore")
cube.schema
###Output
_____no_output_____
###Markdown
Top 10 apps with highest rating across categoriesUse the content editor to apply a top count filter on the pivot table.
###Code
session.visualize("Top 10 apps with highest rating across categories")
###Output
_____no_output_____
###Markdown
Top 10 categories with the most number of apps rated 5
###Code
h = cube.hierarchies
l = cube.levels
m = cube.measures
m
###Output
_____no_output_____
###Markdown
Number of apps rated 5Create a measure that counts the number of apps rated 5 within categories and at levels below the category.
###Code
m["Count with rating 5"] = tt.agg.sum(
tt.where(m["Rating.MEAN"] == 5, m["contributors.COUNT"], 0),
scope=tt.scope.origin(l["Category"], l["App"]),
)
###Output
_____no_output_____
###Markdown
We can drill down to different levels from category and the count is computed on the fly.
###Code
session.visualize("Categories with apps rated 5")
###Output
_____no_output_____
###Markdown
Apply a top count filter from the **atoti editor** on the category by the `Count with rating 5` measure. The atoti editor is atoti's JupyterLab extension on the right, marked with the atoti icon.
###Code
session.visualize("Top 10 categories with most number of apps rated 5")
###Output
_____no_output_____
###Markdown
Top 10 apps for each categorySince we are performing top 10 apps filtering for each category, it's only right that we classify `App` under `Category`. In this case, we create a multi-level hierarchy such as the following:
###Code
h["App Categories"] = [l["Category"], l["App"]]
h
###Output
_____no_output_____
###Markdown
This structure allows us to select at which level we want to apply the top count on from the atoti editor.
###Code
session.visualize("Top 10 apps with highest rating for each category")
###Output
_____no_output_____
###Markdown
Creating subplot to visualize top count per categoryAgain, go to atoti's JupyterLab extension and click on the ellipsis to show the subplot controls. ![show subplot controls](https://data.atoti.io/notebooks/topcount/show_subplot_controls.png) You should be able to add the `Category` level to the subplot section, sliced by `Apps`. Apply a filter on the `App` level of the `App Categories` hierarchy.
###Code
session.visualize("Top 10 apps within each categories")
###Output
_____no_output_____
###Markdown
You can use the filter to select the categories that you want to view. Alternatively, use `session.url` to access the web application and build an interactive dashboard with quick filters. Check out the link below.
###Code
session.url + "/#/dashboard/767"
###Output
_____no_output_____ |
3.5-classifying-movie-reviews.ipynb | ###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! ConclusionsHere's what you should take away from this example:* There's usually quite a bit of preprocessing you need to do on your raw data in order to be able to feed it -- as tensors -- into a neural network. In the case of sequences of words, they can be encoded as binary vectors -- but there are other encoding options too.* Stacks of `Dense` layers with `relu` activations can solve a wide range of problems (including sentiment classification), and you will likely use them frequently.* In a binary classification problem (two output classes), your network should end with a `Dense` layer with 1 unit and a `sigmoid` activation, i.e. the output of your network should be a scalar between 0 and 1, encoding a probability.* With such a scalar sigmoid output, on a binary classification problem, the loss function you should use is `binary_crossentropy`.* The `rmsprop` optimizer is generally a good enough choice of optimizer, whatever your problem. That's one less thing for you to worry about.* As they get better on their training data, neural networks eventually start _overfitting_ and end up obtaining increasingly worse results on data never-seen-before. Make sure to always monitor performance on data that is outside of the training set.
###Code
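# A variant of the "further experiments" suggested above: a single 5-unit tanh
# hidden layer (the second hidden layer is commented out), mse loss, and the
# sgd optimizer instead of rmsprop.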
model = models.Sequential()
model.add(layers.Dense(5, activation='tanh', input_shape=(10000,)))
# model.add(layers.Dense(16, activation='tanh'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
###Output
Epoch 1/4
512/25000 [..............................] - ETA: 17:28 - loss: 0.2507 - acc: 0.4941
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 1279s
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
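###Markdown
To make the `output = relu(dot(W, input) + b)` formula above concrete, here is a small NumPy sketch (not part of the original notebook; the random values are purely illustrative) of what a single 16-unit `Dense` layer computes for one vectorized review:
###Code
# Illustrative shapes only: x is one 10,000-dimensional review vector,
# W and b stand in for the weights a Dense(16) layer would learn.
x = np.random.random((10000,))
W = np.random.random((10000, 16))
b = np.zeros((16,))

# relu(dot(x, W) + b): given W's (10000, 16) shape, the dot product is taken as x . W.
output = np.maximum(np.dot(x, W) + b, 0.)
output.shape  # (16,)
###Output
_____no_output_____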
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
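###Markdown
For intuition about what `binary_crossentropy` measures, here is a tiny NumPy sketch (not from the original notebook; the prediction and label values are made up) of the loss for a single sample:
###Code
def binary_crossentropy_single(y_true, y_pred):
    # -[y*log(p) + (1-y)*log(1-p)]: small when the predicted probability
    # agrees with the label, large when it confidently disagrees.
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

binary_crossentropy_single(1., 0.9), binary_crossentropy_single(1., 0.1)
###Output
_____no_output_____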
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 5s - loss: 0.5157 - binary_accuracy: 0.7896 - val_loss: 0.4008 - val_binary_accuracy: 0.8656
Epoch 2/20
15000/15000 [==============================] - 3s - loss: 0.3146 - binary_accuracy: 0.9029 - val_loss: 0.3254 - val_binary_accuracy: 0.8785
Epoch 3/20
15000/15000 [==============================] - 2s - loss: 0.2320 - binary_accuracy: 0.9245 - val_loss: 0.2807 - val_binary_accuracy: 0.8926
Epoch 4/20
15000/15000 [==============================] - 2s - loss: 0.1816 - binary_accuracy: 0.9431 - val_loss: 0.2729 - val_binary_accuracy: 0.8908
Epoch 5/20
15000/15000 [==============================] - 2s - loss: 0.1496 - binary_accuracy: 0.9514 - val_loss: 0.2780 - val_binary_accuracy: 0.8888
Epoch 6/20
15000/15000 [==============================] - 2s - loss: 0.1210 - binary_accuracy: 0.9633 - val_loss: 0.3208 - val_binary_accuracy: 0.8810
Epoch 7/20
15000/15000 [==============================] - 2s - loss: 0.1031 - binary_accuracy: 0.9691 - val_loss: 0.3044 - val_binary_accuracy: 0.8851
Epoch 8/20
15000/15000 [==============================] - 2s - loss: 0.0848 - binary_accuracy: 0.9761 - val_loss: 0.3365 - val_binary_accuracy: 0.8770
Epoch 9/20
15000/15000 [==============================] - 2s - loss: 0.0729 - binary_accuracy: 0.9805 - val_loss: 0.3597 - val_binary_accuracy: 0.8804
Epoch 10/20
15000/15000 [==============================] - 2s - loss: 0.0583 - binary_accuracy: 0.9859 - val_loss: 0.3720 - val_binary_accuracy: 0.8800
Epoch 11/20
15000/15000 [==============================] - 2s - loss: 0.0493 - binary_accuracy: 0.9883 - val_loss: 0.3975 - val_binary_accuracy: 0.8776
Epoch 12/20
15000/15000 [==============================] - 2s - loss: 0.0390 - binary_accuracy: 0.9921 - val_loss: 0.4396 - val_binary_accuracy: 0.8780
Epoch 13/20
15000/15000 [==============================] - 2s - loss: 0.0304 - binary_accuracy: 0.9943 - val_loss: 0.4547 - val_binary_accuracy: 0.8750
Epoch 14/20
15000/15000 [==============================] - 2s - loss: 0.0244 - binary_accuracy: 0.9958 - val_loss: 0.4809 - val_binary_accuracy: 0.8733
Epoch 15/20
15000/15000 [==============================] - 3s - loss: 0.0210 - binary_accuracy: 0.9961 - val_loss: 0.5382 - val_binary_accuracy: 0.8722
Epoch 16/20
15000/15000 [==============================] - 4s - loss: 0.0123 - binary_accuracy: 0.9993 - val_loss: 0.5490 - val_binary_accuracy: 0.8717
Epoch 17/20
15000/15000 [==============================] - 2s - loss: 0.0128 - binary_accuracy: 0.9981 - val_loss: 0.5783 - val_binary_accuracy: 0.8712
Epoch 18/20
15000/15000 [==============================] - 3s - loss: 0.0087 - binary_accuracy: 0.9991 - val_loss: 0.6073 - val_binary_accuracy: 0.8681
Epoch 19/20
15000/15000 [==============================] - 2s - loss: 0.0070 - binary_accuracy: 0.9992 - val_loss: 0.6426 - val_binary_accuracy: 0.8662
Epoch 20/20
15000/15000 [==============================] - 2s - loss: 0.0071 - binary_accuracy: 0.9990 - val_loss: 0.6750 - val_binary_accuracy: 0.8684
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
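###Markdown
Rather than picking the number of epochs by eye, a common alternative (not used in the original notebook) is to let Keras stop training once the validation loss stops improving, via the `EarlyStopping` callback. The sketch below uses a separate `es_model` so the `model` trained above is left untouched; the `patience` value is an arbitrary illustrative choice:
###Code
from keras.callbacks import EarlyStopping

es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# Stop as soon as the validation loss has not improved for 2 consecutive epochs.
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
es_model.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stopping])
###Output
_____no_output_____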
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
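###Markdown
If you need hard class labels rather than probabilities, one straightforward option (not shown in the original notebook) is to threshold the predicted probabilities at 0.5:
###Code
# 1 means the review is predicted positive, 0 means predicted negative.
(model.predict(x_test) > 0.5).astype('int32')
###Output
_____no_output_____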
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
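###Markdown
To make the "chain of tensor operations" described above a little more concrete, here is a small NumPy sketch (added purely for illustration; it is not part of the original notebook) of what a single `Dense(16, activation='relu')` layer computes for one 10,000-dimensional input vector. `W` and `b` are randomly initialized stand-ins for the parameters the layer would learn:
###Code
import numpy as np

# hypothetical, randomly initialized layer parameters (stand-ins for learned weights)
W = np.random.randn(10000, 16) * 0.01   # weight matrix of shape (input_dimension, 16)
b = np.zeros(16)                        # bias vector of shape (16,)

# one multi-hot encoded review, shaped like a row of x_train
x = np.zeros(10000)
x[[3, 5, 42]] = 1.

# relu(dot(x, W) + b): project the input onto a 16-dimensional representation space
hidden = np.maximum(0., np.dot(x, W) + b)
hidden.shape
###Output
_____no_output_____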
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
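###Markdown
To get a feeling for what `binary_crossentropy` actually measures, here is a tiny hand-computed sketch (added for illustration, with made-up labels and predicted probabilities; it is not part of the original notebook):
###Code
import numpy as np

# hypothetical ground-truth labels and predicted probabilities for four reviews
y_true = np.array([1., 0., 1., 0.])
y_pred = np.array([0.9, 0.2, 0.6, 0.4])

# binary crossentropy, averaged over the samples:
# -mean(y * log(p) + (1 - y) * log(1 - p)); lower values mean the predicted
# probabilities are closer to the ground-truth labels
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
bce
###Output
_____no_output_____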
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
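###Markdown
Rather than reading the right number of epochs off the curves and retraining by hand, you could also let Keras stop training automatically with an `EarlyStopping` callback, one of the overfitting-mitigation techniques alluded to above. The following is a minimal sketch (added for illustration, not part of the original notebook); `patience=2`, i.e. "stop once the validation loss has failed to improve for 2 consecutive epochs", is an arbitrary choice:
###Code
from keras import models
from keras import layers
from keras.callbacks import EarlyStopping

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# stop training as soon as the validation loss has not improved for 2 epochs in a row
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
model.fit(partial_x_train, partial_y_train,
          epochs=20, batch_size=512,
          validation_data=(x_val, y_val),
          callbacks=[early_stopping])
###Output
_____no_output_____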
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
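###Markdown
The values returned by `predict` are probabilities between 0 and 1. If you need hard class labels rather than probabilities, you can threshold them at 0.5 -- a small sketch added for illustration (the 0.5 cut-off is the usual convention, not something prescribed by the original text):
###Code
# probabilities above 0.5 are mapped to class 1 ("positive"), the rest to class 0 ("negative")
predicted_classes = (model.predict(x_test) > 0.5).astype('int32')
predicted_classes[:10]
###Output
_____no_output_____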
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the IMDB dataset, a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
WARNING:tensorflow:From C:\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
WARNING:tensorflow:From C:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 109s 7ms/step - loss: 0.5058 - binary_accuracy: 0.7849 - val_loss: 0.3778 - val_binary_accuracy: 0.8699
Epoch 2/20
15000/15000 [==============================] - 3s 186us/step - loss: 0.2991 - binary_accuracy: 0.9045 - val_loss: 0.2996 - val_binary_accuracy: 0.8897
Epoch 3/20
15000/15000 [==============================] - 3s 179us/step - loss: 0.2167 - binary_accuracy: 0.9283 - val_loss: 0.3087 - val_binary_accuracy: 0.8709
Epoch 4/20
15000/15000 [==============================] - 3s 205us/step - loss: 0.1740 - binary_accuracy: 0.9433 - val_loss: 0.2838 - val_binary_accuracy: 0.8842
Epoch 5/20
15000/15000 [==============================] - 3s 193us/step - loss: 0.1416 - binary_accuracy: 0.9544 - val_loss: 0.2858 - val_binary_accuracy: 0.8864
Epoch 6/20
15000/15000 [==============================] - 3s 184us/step - loss: 0.1141 - binary_accuracy: 0.9655 - val_loss: 0.3180 - val_binary_accuracy: 0.8774
Epoch 7/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0971 - binary_accuracy: 0.9708 - val_loss: 0.3149 - val_binary_accuracy: 0.8845
Epoch 8/20
15000/15000 [==============================] - 3s 225us/step - loss: 0.0798 - binary_accuracy: 0.9763 - val_loss: 0.3907 - val_binary_accuracy: 0.8649
Epoch 9/20
15000/15000 [==============================] - 3s 190us/step - loss: 0.0653 - binary_accuracy: 0.9821 - val_loss: 0.3678 - val_binary_accuracy: 0.8776
Epoch 10/20
15000/15000 [==============================] - 3s 187us/step - loss: 0.0550 - binary_accuracy: 0.9849 - val_loss: 0.3887 - val_binary_accuracy: 0.8783
Epoch 11/20
15000/15000 [==============================] - 3s 188us/step - loss: 0.0447 - binary_accuracy: 0.9885 - val_loss: 0.4251 - val_binary_accuracy: 0.8760
Epoch 12/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.0383 - binary_accuracy: 0.9915 - val_loss: 0.4572 - val_binary_accuracy: 0.8706
Epoch 13/20
15000/15000 [==============================] - 3s 198us/step - loss: 0.0291 - binary_accuracy: 0.9939 - val_loss: 0.4756 - val_binary_accuracy: 0.8741
Epoch 14/20
512/15000 [>.............................] - ETA: 1s - loss: 0.0182 - binary_accuracy: 0.9980
1024/15000 [=>............................] - ETA: 1s - loss: 0.0187 - binary_accuracy: 0.9980
1536/15000 [==>...........................] - ETA: 1s - loss: 0.0187 - binary_accuracy: 0.9980
2048/15000 [===>..........................] - ETA: 1s - loss: 0.0179 - binary_accuracy: 0.9985
2560/15000 [====>.........................] - ETA: 1s - loss: 0.0177 - binary_accuracy: 0.9980
3072/15000 [=====>........................] - ETA: 1s - loss: 0.0192 - binary_accuracy: 0.9974
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0187 - binary_accuracy: 0.9975
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0183 - binary_accuracy: 0.9978
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0184 - binary_accuracy: 0.9976
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0185 - binary_accuracy: 0.9973
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0186 - binary_accuracy: 0.9973
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0187 - binary_accuracy: 0.9974
6656/15000 [============>.................] - ETA: 0s - loss: 0.0186 - binary_accuracy: 0.9974
7168/15000 [=============>................] - ETA: 0s - loss: 0.0189 - binary_accuracy: 0.9973
7680/15000 [==============>...............] - ETA: 0s - loss: 0.0184 - binary_accuracy: 0.9975
8192/15000 [===============>..............] - ETA: 0s - loss: 0.0184 - binary_accuracy: 0.9976
8704/15000 [================>.............] - ETA: 0s - loss: 0.0185 - binary_accuracy: 0.9975
9216/15000 [=================>............] - ETA: 0s - loss: 0.0188 - binary_accuracy: 0.9973
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0195 - binary_accuracy: 0.9967
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0215 - binary_accuracy: 0.9954
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0236 - binary_accuracy: 0.9942
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0246 - binary_accuracy: 0.9939
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0244 - binary_accuracy: 0.9941
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0244 - binary_accuracy: 0.9941
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0244 - binary_accuracy: 0.9941
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0241 - binary_accuracy: 0.9944
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0240 - binary_accuracy: 0.9945
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0240 - binary_accuracy: 0.9946
14848/15000 [============================>.] - ETA: 0s - loss: 0.0240 - binary_accuracy: 0.9947
15000/15000 [==============================] - 3s 185us/step - loss: 0.0239 - binary_accuracy: 0.9947 - val_loss: 0.5065 - val_binary_accuracy: 0.8729
Epoch 15/20
512/15000 [>.............................] - ETA: 2s - loss: 0.0100 - binary_accuracy: 1.0000
1024/15000 [=>............................] - ETA: 2s - loss: 0.0119 - binary_accuracy: 1.0000
1536/15000 [==>...........................] - ETA: 2s - loss: 0.0141 - binary_accuracy: 0.9987
2048/15000 [===>..........................] - ETA: 2s - loss: 0.0139 - binary_accuracy: 0.9985
2560/15000 [====>.........................] - ETA: 1s - loss: 0.0137 - binary_accuracy: 0.9984
3072/15000 [=====>........................] - ETA: 1s - loss: 0.0138 - binary_accuracy: 0.9987
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0142 - binary_accuracy: 0.9989
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0141 - binary_accuracy: 0.9990
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0141 - binary_accuracy: 0.9989
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0145 - binary_accuracy: 0.9988
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0144 - binary_accuracy: 0.9989
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0146 - binary_accuracy: 0.9987
6656/15000 [============>.................] - ETA: 1s - loss: 0.0146 - binary_accuracy: 0.9988
7168/15000 [=============>................] - ETA: 1s - loss: 0.0149 - binary_accuracy: 0.9987
7680/15000 [==============>...............] - ETA: 1s - loss: 0.0155 - binary_accuracy: 0.9984
8192/15000 [===============>..............] - ETA: 0s - loss: 0.0160 - binary_accuracy: 0.9983
8704/15000 [================>.............] - ETA: 0s - loss: 0.0167 - binary_accuracy: 0.9980
9216/15000 [=================>............] - ETA: 0s - loss: 0.0175 - binary_accuracy: 0.9977
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0181 - binary_accuracy: 0.9976
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0180 - binary_accuracy: 0.9978
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0181 - binary_accuracy: 0.9978
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0180 - binary_accuracy: 0.9978
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0179 - binary_accuracy: 0.9979
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0180 - binary_accuracy: 0.9979
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0178 - binary_accuracy: 0.9980
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0176 - binary_accuracy: 0.9980
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0176 - binary_accuracy: 0.9979
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0177 - binary_accuracy: 0.9978
14848/15000 [============================>.] - ETA: 0s - loss: 0.0176 - binary_accuracy: 0.9979
15000/15000 [==============================] - 3s 203us/step - loss: 0.0175 - binary_accuracy: 0.9979 - val_loss: 0.5383 - val_binary_accuracy: 0.8697
Epoch 16/20
512/15000 [>.............................] - ETA: 1s - loss: 0.0180 - binary_accuracy: 0.9961
1024/15000 [=>............................] - ETA: 1s - loss: 0.0150 - binary_accuracy: 0.9971
1536/15000 [==>...........................] - ETA: 2s - loss: 0.0136 - binary_accuracy: 0.9980
2048/15000 [===>..........................] - ETA: 2s - loss: 0.0126 - binary_accuracy: 0.9980
2560/15000 [====>.........................] - ETA: 2s - loss: 0.0126 - binary_accuracy: 0.9984
3072/15000 [=====>........................] - ETA: 2s - loss: 0.0123 - binary_accuracy: 0.9987
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0120 - binary_accuracy: 0.9989
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0121 - binary_accuracy: 0.9988
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0122 - binary_accuracy: 0.9989
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0121 - binary_accuracy: 0.9988
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0126 - binary_accuracy: 0.9986
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0132 - binary_accuracy: 0.9985
6656/15000 [============>.................] - ETA: 1s - loss: 0.0136 - binary_accuracy: 0.9986
7168/15000 [=============>................] - ETA: 1s - loss: 0.0143 - binary_accuracy: 0.9980
7680/15000 [==============>...............] - ETA: 1s - loss: 0.0150 - binary_accuracy: 0.9979
8192/15000 [===============>..............] - ETA: 0s - loss: 0.0154 - binary_accuracy: 0.9978
8704/15000 [================>.............] - ETA: 0s - loss: 0.0152 - binary_accuracy: 0.9979
9216/15000 [=================>............] - ETA: 0s - loss: 0.0151 - binary_accuracy: 0.9979
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0148 - binary_accuracy: 0.9980
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0148 - binary_accuracy: 0.9980
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0146 - binary_accuracy: 0.9980
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0146 - binary_accuracy: 0.9980
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0145 - binary_accuracy: 0.9980
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0144 - binary_accuracy: 0.9980
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0143 - binary_accuracy: 0.9981
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0142 - binary_accuracy: 0.9982
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0141 - binary_accuracy: 0.9983
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0142 - binary_accuracy: 0.9983
14848/15000 [============================>.] - ETA: 0s - loss: 0.0149 - binary_accuracy: 0.9981
15000/15000 [==============================] - 3s 197us/step - loss: 0.0152 - binary_accuracy: 0.9980 - val_loss: 0.5824 - val_binary_accuracy: 0.8676
Epoch 17/20
512/15000 [>.............................] - ETA: 1s - loss: 0.0092 - binary_accuracy: 1.0000
1024/15000 [=>............................] - ETA: 1s - loss: 0.0084 - binary_accuracy: 1.0000
1536/15000 [==>...........................] - ETA: 1s - loss: 0.0083 - binary_accuracy: 1.0000
2048/15000 [===>..........................] - ETA: 1s - loss: 0.0082 - binary_accuracy: 1.0000
2560/15000 [====>.........................] - ETA: 1s - loss: 0.0079 - binary_accuracy: 1.0000
3072/15000 [=====>........................] - ETA: 1s - loss: 0.0079 - binary_accuracy: 1.0000
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0078 - binary_accuracy: 1.0000
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0080 - binary_accuracy: 0.9998
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0080 - binary_accuracy: 0.9998
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0081 - binary_accuracy: 0.9998
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0079 - binary_accuracy: 0.9998
6656/15000 [============>.................] - ETA: 1s - loss: 0.0079 - binary_accuracy: 0.9998
7168/15000 [=============>................] - ETA: 0s - loss: 0.0079 - binary_accuracy: 0.9999
7680/15000 [==============>...............] - ETA: 0s - loss: 0.0080 - binary_accuracy: 0.9999
8192/15000 [===============>..............] - ETA: 0s - loss: 0.0084 - binary_accuracy: 0.9995
8704/15000 [================>.............] - ETA: 0s - loss: 0.0086 - binary_accuracy: 0.9995
9216/15000 [=================>............] - ETA: 0s - loss: 0.0087 - binary_accuracy: 0.9995
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0087 - binary_accuracy: 0.9995
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0088 - binary_accuracy: 0.9995
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0092 - binary_accuracy: 0.9993
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0095 - binary_accuracy: 0.9993
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0102 - binary_accuracy: 0.9991
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0108 - binary_accuracy: 0.9989
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0111 - binary_accuracy: 0.9988
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0110 - binary_accuracy: 0.9989
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0110 - binary_accuracy: 0.9989
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0110 - binary_accuracy: 0.9989
14848/15000 [============================>.] - ETA: 0s - loss: 0.0110 - binary_accuracy: 0.9989
15000/15000 [==============================] - 3s 189us/step - loss: 0.0110 - binary_accuracy: 0.9989 - val_loss: 0.6164 - val_binary_accuracy: 0.8666
Epoch 18/20
512/15000 [>.............................] - ETA: 2s - loss: 0.0077 - binary_accuracy: 1.0000
1024/15000 [=>............................] - ETA: 2s - loss: 0.0064 - binary_accuracy: 1.0000
1536/15000 [==>...........................] - ETA: 2s - loss: 0.0064 - binary_accuracy: 1.0000
2048/15000 [===>..........................] - ETA: 2s - loss: 0.0063 - binary_accuracy: 1.0000
2560/15000 [====>.........................] - ETA: 1s - loss: 0.0061 - binary_accuracy: 1.0000
3072/15000 [=====>........................] - ETA: 1s - loss: 0.0060 - binary_accuracy: 1.0000
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0062 - binary_accuracy: 1.0000
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0063 - binary_accuracy: 1.0000
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0062 - binary_accuracy: 1.0000
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0063 - binary_accuracy: 0.9998
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0063 - binary_accuracy: 0.9998
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0063 - binary_accuracy: 0.9998
6656/15000 [============>.................] - ETA: 1s - loss: 0.0062 - binary_accuracy: 0.9998
7168/15000 [=============>................] - ETA: 1s - loss: 0.0060 - binary_accuracy: 0.9999
7680/15000 [==============>...............] - ETA: 1s - loss: 0.0060 - binary_accuracy: 0.9999
8192/15000 [===============>..............] - ETA: 1s - loss: 0.0059 - binary_accuracy: 0.9999
8704/15000 [================>.............] - ETA: 0s - loss: 0.0059 - binary_accuracy: 0.9999
9216/15000 [=================>............] - ETA: 0s - loss: 0.0058 - binary_accuracy: 0.9999
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0060 - binary_accuracy: 0.9999
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0063 - binary_accuracy: 0.9998
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0086 - binary_accuracy: 0.9989
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0112 - binary_accuracy: 0.9974
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0119 - binary_accuracy: 0.9970
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0116 - binary_accuracy: 0.9972
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0118 - binary_accuracy: 0.9972
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0115 - binary_accuracy: 0.9973
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0114 - binary_accuracy: 0.9974
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0113 - binary_accuracy: 0.9975
14848/15000 [============================>.] - ETA: 0s - loss: 0.0112 - binary_accuracy: 0.9976
15000/15000 [==============================] - 3s 220us/step - loss: 0.0112 - binary_accuracy: 0.9976 - val_loss: 0.6450 - val_binary_accuracy: 0.8671
Epoch 19/20
512/15000 [>.............................] - ETA: 5s - loss: 0.0052 - binary_accuracy: 1.0000
1024/15000 [=>............................] - ETA: 3s - loss: 0.0049 - binary_accuracy: 1.0000
1536/15000 [==>...........................] - ETA: 3s - loss: 0.0046 - binary_accuracy: 1.0000
2048/15000 [===>..........................] - ETA: 3s - loss: 0.0050 - binary_accuracy: 0.9995
2560/15000 [====>.........................] - ETA: 2s - loss: 0.0048 - binary_accuracy: 0.9996
3072/15000 [=====>........................] - ETA: 2s - loss: 0.0046 - binary_accuracy: 0.9997
3584/15000 [======>.......................] - ETA: 2s - loss: 0.0045 - binary_accuracy: 0.9997
4096/15000 [=======>......................] - ETA: 2s - loss: 0.0052 - binary_accuracy: 0.9995
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0052 - binary_accuracy: 0.9996
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0050 - binary_accuracy: 0.9996
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0048 - binary_accuracy: 0.9996
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0049 - binary_accuracy: 0.9997
6656/15000 [============>.................] - ETA: 1s - loss: 0.0049 - binary_accuracy: 0.9997
7168/15000 [=============>................] - ETA: 1s - loss: 0.0048 - binary_accuracy: 0.9997
7680/15000 [==============>...............] - ETA: 1s - loss: 0.0047 - binary_accuracy: 0.9997
8192/15000 [===============>..............] - ETA: 1s - loss: 0.0047 - binary_accuracy: 0.9998
8704/15000 [================>.............] - ETA: 0s - loss: 0.0048 - binary_accuracy: 0.9997
9216/15000 [=================>............] - ETA: 0s - loss: 0.0048 - binary_accuracy: 0.9997
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0049 - binary_accuracy: 0.9997
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0048 - binary_accuracy: 0.9997
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0049 - binary_accuracy: 0.9997
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0049 - binary_accuracy: 0.9997
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0049 - binary_accuracy: 0.9997
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0050 - binary_accuracy: 0.9998
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0051 - binary_accuracy: 0.9998
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0051 - binary_accuracy: 0.9998
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0051 - binary_accuracy: 0.9998
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0051 - binary_accuracy: 0.9998
14848/15000 [============================>.] - ETA: 0s - loss: 0.0052 - binary_accuracy: 0.9998
15000/15000 [==============================] - 3s 208us/step - loss: 0.0054 - binary_accuracy: 0.9997 - val_loss: 0.7441 - val_binary_accuracy: 0.8545
Epoch 20/20
512/15000 [>.............................] - ETA: 1s - loss: 0.0113 - binary_accuracy: 1.0000
1024/15000 [=>............................] - ETA: 1s - loss: 0.0080 - binary_accuracy: 1.0000
1536/15000 [==>...........................] - ETA: 1s - loss: 0.0069 - binary_accuracy: 1.0000
2048/15000 [===>..........................] - ETA: 1s - loss: 0.0063 - binary_accuracy: 1.0000
2560/15000 [====>.........................] - ETA: 2s - loss: 0.0060 - binary_accuracy: 1.0000
3072/15000 [=====>........................] - ETA: 2s - loss: 0.0058 - binary_accuracy: 1.0000
3584/15000 [======>.......................] - ETA: 1s - loss: 0.0056 - binary_accuracy: 1.0000
4096/15000 [=======>......................] - ETA: 1s - loss: 0.0053 - binary_accuracy: 1.0000
4608/15000 [========>.....................] - ETA: 1s - loss: 0.0052 - binary_accuracy: 1.0000
5120/15000 [=========>....................] - ETA: 1s - loss: 0.0050 - binary_accuracy: 1.0000
5632/15000 [==========>...................] - ETA: 1s - loss: 0.0048 - binary_accuracy: 1.0000
6144/15000 [===========>..................] - ETA: 1s - loss: 0.0047 - binary_accuracy: 1.0000
6656/15000 [============>.................] - ETA: 1s - loss: 0.0047 - binary_accuracy: 1.0000
7168/15000 [=============>................] - ETA: 1s - loss: 0.0046 - binary_accuracy: 1.0000
7680/15000 [==============>...............] - ETA: 1s - loss: 0.0045 - binary_accuracy: 1.0000
8192/15000 [===============>..............] - ETA: 1s - loss: 0.0045 - binary_accuracy: 1.0000
8704/15000 [================>.............] - ETA: 1s - loss: 0.0044 - binary_accuracy: 1.0000
9216/15000 [=================>............] - ETA: 0s - loss: 0.0047 - binary_accuracy: 0.9999
9728/15000 [==================>...........] - ETA: 0s - loss: 0.0047 - binary_accuracy: 0.9999
10240/15000 [===================>..........] - ETA: 0s - loss: 0.0047 - binary_accuracy: 0.9999
10752/15000 [====================>.........] - ETA: 0s - loss: 0.0047 - binary_accuracy: 0.9999
11264/15000 [=====================>........] - ETA: 0s - loss: 0.0047 - binary_accuracy: 0.9999
11776/15000 [======================>.......] - ETA: 0s - loss: 0.0049 - binary_accuracy: 0.9998
12288/15000 [=======================>......] - ETA: 0s - loss: 0.0063 - binary_accuracy: 0.9993
12800/15000 [========================>.....] - ETA: 0s - loss: 0.0108 - binary_accuracy: 0.9971
13312/15000 [=========================>....] - ETA: 0s - loss: 0.0118 - binary_accuracy: 0.9969
13824/15000 [==========================>...] - ETA: 0s - loss: 0.0115 - binary_accuracy: 0.9970
14336/15000 [===========================>..] - ETA: 0s - loss: 0.0112 - binary_accuracy: 0.9971
14848/15000 [============================>.] - ETA: 0s - loss: 0.0109 - binary_accuracy: 0.9972
15000/15000 [==============================] - 3s 220us/step - loss: 0.0109 - binary_accuracy: 0.9973 - val_loss: 0.7045 - val_binary_accuracy: 0.8659
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
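###Markdown
Rather than reading the best epoch off the plots by eye, we can compute it directly from the `history` dictionary. A minimal sketch, assuming only the `history` object returned by `model.fit()` above:
###Code
import numpy as np
# Epoch numbering starts at 1, so add 1 to the 0-based index of the smallest validation loss.
best_epoch = np.argmin(history.history['val_loss']) + 1
best_epoch
###Output
_____no_output_____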
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
`train_data` and `test_data` are arrays of integers: every word in the data has been encoded as a number, and passing `num_words=n` restricts the vocabulary to the n most frequently occurring words. The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size. The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
###Output
_____no_output_____
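###Markdown
The reviews are plain Python lists of word indices and their lengths vary from review to review; a quick check (a small sketch):
###Code
# Number of training reviews, and the lengths of the first two reviews -- they differ.
len(train_data), len(train_data[0]), len(train_data[1])
###Output
_____no_output_____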
###Markdown
Each sample is a list of integers, and each integer stands for a word. Since we kept only the top 10,000 words above, there are 10,000 possible word indices.
###Code
train_labels[0]
###Output
_____no_output_____
###Markdown
The first review is a positive one. Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
Because only 10,000 words are kept, every word in `train_data` is encoded as an integer between 0 and 9,999. For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
word_index
###Output
_____no_output_____
###Markdown
`word_index` is a dictionary that maps each word (key) to its frequency rank (value).
###Code
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
reverse_word_index
###Output
_____no_output_____
###Markdown
We invert the dictionary so that the frequency rank becomes the key and the word becomes the value.
###Code
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
train_data[0][:10]
[reverse_word_index.get(i - 3, '?') for i in train_data[0][:10]]
###Output
_____no_output_____
###Markdown
In `train_data` and `test_data`, the indices 0, 1 and 2 are reserved for 'padding', 'start of sequence' and 'unknown', so we subtract 3. The offset depends on how a dataset was encoded, so adjust it accordingly. If the shifted index exists in `reverse_word_index`, the corresponding word is returned; otherwise '?' is used.
###Code
' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0][:10]])
decoded_review
###Output
_____no_output_____
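###Markdown
For convenience, the same offset-by-3 lookup can be wrapped in a small helper. `decode_review` is just an illustrative name for this sketch; it reuses the `reverse_word_index` built above:
###Code
def decode_review(encoded_review):
    # Indices 0, 1 and 2 are reserved ("padding", "start of sequence", "unknown"), hence the -3 offset.
    return ' '.join(reverse_word_index.get(i - 3, '?') for i in encoded_review)

decode_review(train_data[1])
###Output
_____no_output_____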
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
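###Markdown
To see what this encoding does, here is the earlier `[3, 5]` example pushed through the same function (a quick sketch):
###Code
# A single "review" containing only word indices 3 and 5 becomes a 10,000-dimensional
# vector that is all zeros except at positions 3 and 5.
toy = vectorize_sequences([[3, 5]])
toy.shape, toy[0][:10]
###Output
_____no_output_____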
###Markdown
Each review is an array of integers, and the arrays differ in length, so they must be brought to a common length before being stacked into a matrix. A common approach for text data is the bag-of-words representation: across 10,000 columns, put a 1 in every column whose index matches a word appearing in the review and a 0 everywhere else (sometimes the word count is stored instead of a 1). It is faster to allocate the matrix with `np.zeros` and flip the matching entries to 1 than to fill in each row element by element.
###Code
np.zeros((2,3))
for i, sequence in enumerate(['Andy', 'Tom', 'Max']):
print(i, sequence)
###Output
0 Andy
1 Tom
2 Max
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
train_labels[:5]
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
y_train[:5]
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
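###Markdown
To make the `output = relu(dot(W, input) + b)` chain above concrete, here is the same computation written out in NumPy for a single review and a single 16-unit layer. The weights below are random placeholders, not the trained ones; this is only a shape-level sketch:
###Code
rng = np.random.RandomState(0)
x = x_train[0]                                  # one review, shape (10000,)
W = rng.normal(scale=0.01, size=(10000, 16))    # toy weight matrix of shape (input_dimension, 16)
b = np.zeros(16)                                # bias vector
hidden = np.maximum(0., np.dot(x, W) + b)       # relu(dot(x, W) + b) -> 16-dimensional representation
hidden.shape
###Output
_____no_output_____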
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from tensorflow.keras import losses
from tensorflow.keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
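###Markdown
A quick sanity check on the split (a small sketch): 15,000 samples remain for training and 10,000 are held out for validation:
###Code
partial_x_train.shape, x_val.shape, partial_y_train.shape, y_val.shape
###Output
_____no_output_____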
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s 60us/sample - loss: 0.5237 - binary_accuracy: 0.7828 - val_loss: 0.3991 - val_binary_accuracy: 0.8644
Epoch 2/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.3086 - binary_accuracy: 0.9041 - val_loss: 0.3033 - val_binary_accuracy: 0.8891
Epoch 3/20
15000/15000 [==============================] - 0s 31us/sample - loss: 0.2233 - binary_accuracy: 0.9283 - val_loss: 0.2776 - val_binary_accuracy: 0.8918
Epoch 4/20
15000/15000 [==============================] - 0s 31us/sample - loss: 0.1750 - binary_accuracy: 0.9417 - val_loss: 0.2775 - val_binary_accuracy: 0.8904
Epoch 5/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.1413 - binary_accuracy: 0.9543 - val_loss: 0.2807 - val_binary_accuracy: 0.8887
Epoch 6/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.1144 - binary_accuracy: 0.9643 - val_loss: 0.2997 - val_binary_accuracy: 0.8850
Epoch 7/20
15000/15000 [==============================] - 1s 37us/sample - loss: 0.0966 - binary_accuracy: 0.9712 - val_loss: 0.3167 - val_binary_accuracy: 0.8840
Epoch 8/20
15000/15000 [==============================] - 0s 31us/sample - loss: 0.0783 - binary_accuracy: 0.9771 - val_loss: 0.3343 - val_binary_accuracy: 0.8798
Epoch 9/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0662 - binary_accuracy: 0.9817 - val_loss: 0.3592 - val_binary_accuracy: 0.8804
Epoch 10/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0529 - binary_accuracy: 0.9856 - val_loss: 0.3920 - val_binary_accuracy: 0.8781
Epoch 11/20
15000/15000 [==============================] - 0s 31us/sample - loss: 0.0420 - binary_accuracy: 0.9897 - val_loss: 0.4424 - val_binary_accuracy: 0.8738
Epoch 12/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0354 - binary_accuracy: 0.9913 - val_loss: 0.4597 - val_binary_accuracy: 0.8746
Epoch 13/20
15000/15000 [==============================] - 0s 31us/sample - loss: 0.0287 - binary_accuracy: 0.9939 - val_loss: 0.4812 - val_binary_accuracy: 0.8726
Epoch 14/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0241 - binary_accuracy: 0.9949 - val_loss: 0.5108 - val_binary_accuracy: 0.8713
Epoch 15/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0152 - binary_accuracy: 0.9983 - val_loss: 0.5523 - val_binary_accuracy: 0.8655
Epoch 16/20
15000/15000 [==============================] - 0s 33us/sample - loss: 0.0199 - binary_accuracy: 0.9961 - val_loss: 0.5954 - val_binary_accuracy: 0.8725
Epoch 17/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0079 - binary_accuracy: 0.9995 - val_loss: 0.6185 - val_binary_accuracy: 0.8705
Epoch 18/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0103 - binary_accuracy: 0.9983 - val_loss: 0.6575 - val_binary_accuracy: 0.8677
Epoch 19/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0055 - binary_accuracy: 0.9995 - val_loss: 0.7698 - val_binary_accuracy: 0.8497
Epoch 20/20
15000/15000 [==============================] - 0s 32us/sample - loss: 0.0042 - binary_accuracy: 0.9997 - val_loss: 0.7173 - val_binary_accuracy: 0.8656
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
As the number of epochs grows, the training loss keeps decreasing, whereas the validation loss decreases at first and then starts to rise again.
###Code
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
In terms of accuracy, the training accuracy keeps rising with every epoch, whereas the validation accuracy rises and then falls again; this is what we mean by overfitting the training set. The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network. As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set. In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter. Because the validation loss rises and the validation accuracy falls as training continues, we will limit training to four epochs; stopping training once validation performance degrades is known as early stopping, and it is one way to prevent overfitting. Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
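###Markdown
Hand-picking four epochs works here, but Keras can also stop training automatically once the validation loss stops improving, via an `EarlyStopping` callback. The snippet below is a sketch of that alternative (the variable names and the `patience` value are arbitrary choices for illustration), not a rerun of the experiment above:
###Code
from tensorflow.keras.callbacks import EarlyStopping

early_stopping_model = models.Sequential()
early_stopping_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
early_stopping_model.add(layers.Dense(16, activation='relu'))
early_stopping_model.add(layers.Dense(1, activation='sigmoid'))
early_stopping_model.compile(optimizer='rmsprop',
                             loss='binary_crossentropy',
                             metrics=['accuracy'])

# Stop once val_loss has not improved for 2 consecutive epochs, keeping the best weights seen so far.
stopper = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
early_stopping_model.fit(partial_x_train, partial_y_train,
                         epochs=20, batch_size=512,
                         validation_data=(x_val, y_val),
                         callbacks=[stopper])
early_stopping_model.evaluate(x_test, y_test)
###Output
_____no_output_____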
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
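###Markdown
The predictions are probabilities between 0 and 1; to turn them into class labels, you threshold them, typically at 0.5. A small sketch that also recomputes the test accuracy from the thresholded predictions:
###Code
probs = model.predict(x_test)                     # shape (25000, 1), one probability per review
pred_labels = (probs > 0.5).astype('float32').ravel()
np.mean(pred_labels == y_test)                    # fraction of test reviews classified correctly
###Output
_____no_output_____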
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
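###Markdown
As an illustration of passing your own function objects, a custom metric is simply a function of `(y_true, y_pred)` built from backend operations. `false_positive_rate` below is a made-up name used only for this sketch; it is not part of Keras and is not used in the runs that follow:
###Code
from keras import backend as K

def false_positive_rate(y_true, y_pred):
    # Fraction of truly-negative reviews that get predicted as positive (threshold 0.5).
    predicted_pos = K.cast(K.greater(y_pred, 0.5), K.floatx())
    truly_neg = 1. - y_true
    return K.sum(predicted_pos * truly_neg) / (K.sum(truly_neg) + K.epsilon())

# Quick check on tiny constant tensors: one of the two negatives is misclassified, so the rate is 0.5.
K.eval(false_positive_rate(K.constant([0., 0., 1., 1.]),
                           K.constant([0.9, 0.2, 0.8, 0.6])))
# It could be passed alongside the built-in metrics, e.g.:
# model.compile(optimizer=optimizers.RMSprop(lr=0.001),
#               loss=losses.binary_crossentropy,
#               metrics=[metrics.binary_accuracy, false_positive_rate])
###Output
_____no_output_____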
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
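###Markdown
An aside of my own, not from the original text: instead of hand-picking the number of epochs, you can let Keras halt training once the validation loss stops improving, via the built-in `EarlyStopping` callback. The model name `es_model` and the `patience` value below are arbitrary illustrative choices:
###Code
# Sketch: rebuild the same architecture and stop training automatically
# once val_loss has not improved for `patience` consecutive epochs.
from keras.callbacks import EarlyStopping
es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
early_stop = EarlyStopping(monitor='val_loss', patience=2)  # patience chosen arbitrarily
es_model.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stop])
###Output
_____no_output_____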
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 4s 249us/step - loss: 0.5084 - binary_accuracy: 0.7813 - val_loss: 0.3797 - val_binary_accuracy: 0.8684
Epoch 2/20
15000/15000 [==============================] - 3s 195us/step - loss: 0.3004 - binary_accuracy: 0.9047 - val_loss: 0.3004 - val_binary_accuracy: 0.8897
Epoch 3/20
15000/15000 [==============================] - 3s 196us/step - loss: 0.2179 - binary_accuracy: 0.9285 - val_loss: 0.3085 - val_binary_accuracy: 0.8711
Epoch 4/20
15000/15000 [==============================] - 3s 197us/step - loss: 0.1750 - binary_accuracy: 0.9437 - val_loss: 0.2840 - val_binary_accuracy: 0.8832
Epoch 5/20
15000/15000 [==============================] - 3s 196us/step - loss: 0.1427 - binary_accuracy: 0.9543 - val_loss: 0.2841 - val_binary_accuracy: 0.8872
Epoch 6/20
15000/15000 [==============================] - 3s 197us/step - loss: 0.1150 - binary_accuracy: 0.9650 - val_loss: 0.3166 - val_binary_accuracy: 0.8772
Epoch 7/20
15000/15000 [==============================] - 3s 194us/step - loss: 0.0980 - binary_accuracy: 0.9705 - val_loss: 0.3127 - val_binary_accuracy: 0.8846
Epoch 8/20
15000/15000 [==============================] - 3s 194us/step - loss: 0.0807 - binary_accuracy: 0.9763 - val_loss: 0.3859 - val_binary_accuracy: 0.8649
Epoch 9/20
15000/15000 [==============================] - 3s 195us/step - loss: 0.0661 - binary_accuracy: 0.9821 - val_loss: 0.3635 - val_binary_accuracy: 0.8782
Epoch 10/20
15000/15000 [==============================] - 3s 212us/step - loss: 0.0561 - binary_accuracy: 0.9853 - val_loss: 0.3843 - val_binary_accuracy: 0.8792
Epoch 11/20
15000/15000 [==============================] - 3s 195us/step - loss: 0.0439 - binary_accuracy: 0.9893 - val_loss: 0.4153 - val_binary_accuracy: 0.8779
Epoch 12/20
15000/15000 [==============================] - 3s 195us/step - loss: 0.0381 - binary_accuracy: 0.9921 - val_loss: 0.4525 - val_binary_accuracy: 0.8690
Epoch 13/20
15000/15000 [==============================] - 3s 202us/step - loss: 0.0300 - binary_accuracy: 0.9928 - val_loss: 0.4698 - val_binary_accuracy: 0.8729
Epoch 14/20
15000/15000 [==============================] - 3s 196us/step - loss: 0.0247 - binary_accuracy: 0.9945 - val_loss: 0.5023 - val_binary_accuracy: 0.8726
Epoch 15/20
15000/15000 [==============================] - 3s 197us/step - loss: 0.0175 - binary_accuracy: 0.9979 - val_loss: 0.5342 - val_binary_accuracy: 0.8693
Epoch 16/20
15000/15000 [==============================] - 3s 195us/step - loss: 0.0149 - binary_accuracy: 0.9983 - val_loss: 0.5710 - val_binary_accuracy: 0.8698
Epoch 17/20
15000/15000 [==============================] - 3s 194us/step - loss: 0.0151 - binary_accuracy: 0.9971 - val_loss: 0.6025 - val_binary_accuracy: 0.8697
Epoch 18/20
15000/15000 [==============================] - 3s 197us/step - loss: 0.0075 - binary_accuracy: 0.9996 - val_loss: 0.6782 - val_binary_accuracy: 0.8633
Epoch 19/20
15000/15000 [==============================] - 3s 196us/step - loss: 0.0117 - binary_accuracy: 0.9975 - val_loss: 0.6693 - val_binary_accuracy: 0.8674
Epoch 20/20
15000/15000 [==============================] - 3s 197us/step - loss: 0.0041 - binary_accuracy: 0.9999 - val_loss: 0.6941 - val_binary_accuracy: 0.8658
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
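###Markdown
A small follow-on of my own (not from the original text): `predict` returns probabilities, so hard 0/1 labels can be obtained by thresholding them -- the 0.5 cutoff below is an arbitrary but conventional choice:
###Code
# Convert predicted probabilities into 0/1 class labels with a 0.5 threshold
pred_probs = model.predict(x_test)
pred_labels = (pred_probs > 0.5).astype('int32')
pred_labels[:10]
###Output
_____no_output_____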
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
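###Markdown
A quick illustration of my own (not from the original text) of the two activations described above, using plain NumPy: `relu` zeroes out negative values, while the sigmoid squashes everything into the (0, 1) interval:
###Code
# relu(x) = max(x, 0); sigmoid(x) = 1 / (1 + e^-x)
import numpy as np
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
relu_out = np.maximum(z, 0)
sigmoid_out = 1 / (1 + np.exp(-z))
relu_out, sigmoid_out
###Output
_____no_output_____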
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from tensorflow.keras import losses
from tensorflow.keras import metrics
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 147us/sample - loss: 0.5271 - binary_accuracy: 0.7890 - val_loss: 0.3863 - val_binary_accuracy: 0.8717
Epoch 2/20
15000/15000 [==============================] - 1s 53us/sample - loss: 0.3081 - binary_accuracy: 0.9016 - val_loss: 0.3030 - val_binary_accuracy: 0.8902
Epoch 3/20
15000/15000 [==============================] - 1s 47us/sample - loss: 0.2262 - binary_accuracy: 0.9251 - val_loss: 0.2820 - val_binary_accuracy: 0.8883
Epoch 4/20
15000/15000 [==============================] - 1s 48us/sample - loss: 0.1771 - binary_accuracy: 0.9417 - val_loss: 0.2765 - val_binary_accuracy: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s 45us/sample - loss: 0.1410 - binary_accuracy: 0.9561 - val_loss: 0.2806 - val_binary_accuracy: 0.8892
Epoch 6/20
15000/15000 [==============================] - 1s 60us/sample - loss: 0.1195 - binary_accuracy: 0.9621 - val_loss: 0.2964 - val_binary_accuracy: 0.8847
Epoch 7/20
15000/15000 [==============================] - 3s 190us/sample - loss: 0.1022 - binary_accuracy: 0.9687 - val_loss: 0.3063 - val_binary_accuracy: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s 55us/sample - loss: 0.0843 - binary_accuracy: 0.9756 - val_loss: 0.3361 - val_binary_accuracy: 0.8806
Epoch 9/20
15000/15000 [==============================] - 1s 51us/sample - loss: 0.0700 - binary_accuracy: 0.9818 - val_loss: 0.3616 - val_binary_accuracy: 0.8751
Epoch 10/20
15000/15000 [==============================] - 1s 61us/sample - loss: 0.0600 - binary_accuracy: 0.9847 - val_loss: 0.3779 - val_binary_accuracy: 0.8791
Epoch 11/20
15000/15000 [==============================] - 1s 62us/sample - loss: 0.0486 - binary_accuracy: 0.9886 - val_loss: 0.3982 - val_binary_accuracy: 0.8757
Epoch 12/20
15000/15000 [==============================] - 1s 64us/sample - loss: 0.0413 - binary_accuracy: 0.9905 - val_loss: 0.4267 - val_binary_accuracy: 0.8734
Epoch 13/20
15000/15000 [==============================] - 1s 54us/sample - loss: 0.0341 - binary_accuracy: 0.9931 - val_loss: 0.4572 - val_binary_accuracy: 0.8741
Epoch 14/20
15000/15000 [==============================] - 1s 56us/sample - loss: 0.0279 - binary_accuracy: 0.9948 - val_loss: 0.4859 - val_binary_accuracy: 0.8720
Epoch 15/20
15000/15000 [==============================] - 1s 61us/sample - loss: 0.0238 - binary_accuracy: 0.9948 - val_loss: 0.5213 - val_binary_accuracy: 0.8709
Epoch 16/20
15000/15000 [==============================] - 1s 51us/sample - loss: 0.0180 - binary_accuracy: 0.9971 - val_loss: 0.5500 - val_binary_accuracy: 0.8669
Epoch 17/20
15000/15000 [==============================] - 1s 58us/sample - loss: 0.0138 - binary_accuracy: 0.9987 - val_loss: 0.5831 - val_binary_accuracy: 0.8654
Epoch 18/20
15000/15000 [==============================] - 1s 61us/sample - loss: 0.0111 - binary_accuracy: 0.9989 - val_loss: 0.6295 - val_binary_accuracy: 0.8686
Epoch 19/20
15000/15000 [==============================] - 1s 60us/sample - loss: 0.0100 - binary_accuracy: 0.9991 - val_loss: 0.6522 - val_binary_accuracy: 0.8669
Epoch 20/20
15000/15000 [==============================] - 1s 48us/sample - loss: 0.0088 - binary_accuracy: 0.9984 - val_loss: 0.6954 - val_binary_accuracy: 0.8661
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
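###Markdown
A small note of my own (not from the original text): since we compiled the model with a single metric, `evaluate` returns a list of the form `[loss, accuracy]`, so the two numbers can be unpacked directly:
###Code
# Unpack the evaluation results returned above
test_loss, test_acc = results
print('test loss: {:.4f}, test accuracy: {:.4f}'.format(test_loss, test_acc))
###Output
_____no_output_____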
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Problems`ValueError: Object arrays cannot be loaded when allow_pickle=False`. What happened since these notebooks were published? According to [this git commit](https://github.com/tensorflow/tensorflow/commit/79a8d5cdad942b9853aa70b59441983b42a8aeb3#diff-b0a029ad68170f59173eb2f6660cd8e0), it turns out that numpy 1.16.3 changed the `load()` API such that the `allow_pickle` keyword argument defaults to `False`, which breaks the above load command.There are a couple of creative solutions in this thread: https://stackoverflow.com/questions/55890813/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-for-imdb-loa/56062555 Rather than edit `imdb.py`, I'll follow Sajad Norouzi's answer in the above thread:**EDIT 2020-02-05:** upon re-running this notebook, the above command just... worked, without writing anything to cell output. I guess I'll skip execution of the following workaround code block, but will keep it in this notebook in case the above command breaks again in the future.
###Code
# import numpy as np
# # This one line led to a 40-minute diversion on functools.partial, currying (similar but different!), and FP
# from functools import partial
# # save np.load
# np_load_old = partial(np.load)
# # modify the default parameters of np.load
# np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# # call load_data with allow_pickle implicitly set to true
# (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
len(train_data[0])
type(train_data)
train_data.dtype
type(train_data[0])
###Output
_____no_output_____
###Markdown
Wait, so Keras' official data-loading helper function gives us a *numpy array of python lists?* I realize that performance is not super important, but why not just a 2D array?
###Code
# Sanity check our training data
train_data[0][0:10]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '�') for i in train_data[0]])
decoded_review
def decode_review(arg):
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
if isinstance(arg, int):
return ' '.join([reverse_word_index.get(i - 3, '�') for i in train_data[arg]])
else: # arg is an encoded review
return ' '.join([reverse_word_index.get(i - 3, '�') for i in arg])
decode_review(0)
for i in range(1, 31):
print(i, reverse_word_index[i])
###Output
1 the
2 and
3 a
4 of
5 to
6 is
7 br
8 in
9 it
10 i
11 this
12 that
13 was
14 as
15 for
16 with
17 movie
18 but
19 film
20 on
21 not
22 you
23 are
24 his
25 have
26 he
27 be
28 one
29 all
30 at
###Markdown
I'd guess `br` comes from imperfect removal of `<br />` tags.
###Code
for review in train_data:
if 10 in review:
print(decode_review(review))
break
###Output
� the � � at storytelling the traditional sort many years after the event i can still see in my � eye an elderly lady my friend's mother retelling the battle of � she makes the characters come alive her passion is that of an eye witness one to the events on the � heath a mile or so from where she lives br br of course it happened many years before she was born but you wouldn't guess from the way she tells it the same story is told in bars the length and � of scotland as i discussed it with a friend one night in � a local cut in to give his version the discussion continued to closing time br br stories passed down like this become part of our being who doesn't remember the stories our parents told us when we were children they become our invisible world and as we grow older they maybe still serve as inspiration or as an emotional � fact and fiction blend with � role models warning stories � magic and mystery br br my name is � like my grandfather and his grandfather before him our protagonist introduces himself to us and also introduces the story that stretches back through generations it produces stories within stories stories that evoke the � wonder of scotland its rugged mountains � in � the stuff of legend yet � is � in reality this is what gives it its special charm it has a rough beauty and authenticity � with some of the finest � singing you will ever hear br br � � visits his grandfather in hospital shortly before his death he burns with frustration part of him � to be in the twenty first century to hang out in � but he is raised on the western � among a � speaking community br br yet there is a deeper conflict within him he � to know the truth the truth behind his � ancient stories where does fiction end and he wants to know the truth behind the death of his parents br br he is pulled to make a last � journey to the � of one of � most � mountains can the truth be told or is it all in stories br br in this story about stories we � bloody battles � lovers the � of old and the sometimes more � � of accepted truth in doing so we each connect with � as he lives the story of his own life br br � the � � is probably the most honest � and genuinely beautiful film of scotland ever made like � i got slightly annoyed with the � of hanging stories on more stories but also like � i � this once i saw the � picture ' forget the box office � of braveheart and its like you might even � the � famous � of the wicker man to see a film that is true to scotland this one is probably unique if you maybe � on it deeply enough you might even re � the power of storytelling and the age old question of whether there are some truths that cannot be told but only experienced
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity: **Note to self:** Wait, maybe the reason the official data loader gave us 1D arrays of python lists is to demonstrate the need for data prep and vectorization.The following for-looping is naive, but only takes something like 10 s.
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
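###Markdown
A side sketch of my own (untested for speed, and not from the original text): the per-review loop can be replaced by a single fancy-indexing assignment if you first flatten all word indices into one pair of row/column index arrays. The helper name `vectorize_sequences_flat` is mine:
###Code
# Alternative to the loop above: build flat (row, col) index arrays and set every
# referenced position to 1 in one assignment; duplicate indices just get written twice.
def vectorize_sequences_flat(sequences, dimension=10000):
    rows = np.repeat(np.arange(len(sequences)), [len(s) for s in sequences])
    cols = np.concatenate([np.asarray(s) for s in sequences])
    results = np.zeros((len(sequences), dimension))
    results[rows, cols] = 1.
    return results
# Sanity check against the loop-based version on a small slice
np.array_equal(vectorize_sequences(train_data[:100]), vectorize_sequences_flat(train_data[:100]))
###Output
_____no_output_____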
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
x_train.shape
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Side note: what's the difference between `np.asarray()` and `np.array()`? Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models, layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
WARNING:tensorflow:From /Users/pleimbigler/anaconda3/envs/tf_p36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
In light of the broken image link above, let's try graphing the model architecture:
###Code
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='3.5-model1.png')
###Output
_____no_output_____
###Markdown
The model plot: Why doesn't `plot_model` show the image inline? I may never know. (I did try with and without `%matplotlib inline`)
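One workaround, a guess of my own rather than something from the text: `plot_model` only writes the PNG to disk, so displaying it inline needs an explicit IPython display call.
###Code
# Load the PNG that plot_model wrote and render it inline
from IPython.display import Image
Image(filename='3.5-model1.png')
###Output
_____no_output_____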
###Code
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 16) 160016
_________________________________________________________________
dense_1 (Dense) (None, 16) 272
_________________________________________________________________
dense_2 (Dense) (None, 1) 17
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
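###Markdown
A tiny sketch of what `binary_crossentropy` computes, written out in NumPy (the labels and predictions here are toy values, not from the dataset): confident correct predictions give a small loss, confident wrong ones a large loss.
###Code
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1., 0., 1., 1.])
print(binary_crossentropy(y_true, np.array([0.9, 0.1, 0.8, 0.7])))  # mostly right -> small loss
print(binary_crossentropy(y_true, np.array([0.1, 0.9, 0.2, 0.3])))  # mostly wrong -> large loss
###Output
_____no_output_____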
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
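###Markdown
One caveat about the cell above: in more recent TensorFlow/Keras releases the argument is spelled `learning_rate` rather than `lr` (the old name is deprecated). A sketch of the same configuration with the newer spelling, using the `tensorflow.keras` namespace the model above was built with:
###Code
from tensorflow.keras import optimizers

# Same RMSprop configuration, newer argument name
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
###Output
_____no_output_____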
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from tensorflow.keras import losses
from tensorflow.keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
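###Markdown
Since everything recorded during training is in `history.history`, we can also read off the best epoch programmatically instead of eyeballing the plots (the key names used here are the ones this run produced: `loss`, `acc`, `val_loss`, `val_acc`):
###Code
import numpy as np

best = int(np.argmin(history.history['val_loss']))   # index of lowest validation loss
print('best epoch by val_loss:', best + 1)
print('val_loss:', history.history['val_loss'][best])
print('val_acc :', history.history['val_acc'][best])
###Output
_____no_output_____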
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
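###Markdown
The two plots above are drawn one after the other; here is a small sketch that literally puts them side by side in a single figure with `plt.subplots`, reusing the arrays already computed in the previous cell:
###Code
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# Left panel: loss curves
ax1.plot(epochs, loss, 'bo', label='Training loss')
ax1.plot(epochs, val_loss, 'b', label='Validation loss')
ax1.set_title('Training and validation loss')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.legend()

# Right panel: accuracy curves
ax2.plot(epochs, acc, 'bo', label='Training acc')
ax2.plot(epochs, val_acc, 'b', label='Validation acc')
ax2.set_title('Training and validation accuracy')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.legend()

plt.show()
###Output
_____no_output_____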
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
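###Markdown
Instead of hand-picking the number of epochs from the curves, we could let an `EarlyStopping` callback stop training when the validation loss stops improving. A sketch reusing the variables defined above; the `patience=2` value is an arbitrary choice, not something prescribed by the text, and a separate `es_model` is used so the four-epoch model above is left untouched.
###Code
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=2,
                           restore_best_weights=True)

es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])

es_history = es_model.fit(partial_x_train, partial_y_train,
                          epochs=20, batch_size=512,
                          validation_data=(x_val, y_val),
                          callbacks=[early_stop])
print(es_model.evaluate(x_test, y_test))
###Output
_____no_output_____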
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
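###Markdown
The predictions above are probabilities; a quick sketch of turning them into hard class labels with a 0.5 threshold and checking the accuracy by hand (it should agree with what `model.evaluate` reported):
###Code
import numpy as np

probs = model.predict(x_test)                     # shape (25000, 1), values in [0, 1]
preds = (probs[:, 0] > 0.5).astype('float32')     # 0.5 threshold -> class labels
print('manual test accuracy:', np.mean(preds == y_test))
###Output
_____no_output_____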
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
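###Markdown
Going the other direction can be a useful sanity check: a sketch of encoding a toy sentence with the same conventions (start index 1, the `+3` offset, unknown or out-of-vocabulary words mapped to 2). The example sentence is made up and punctuation handling is ignored.
###Code
def encode_review(text, word_index, num_words=10000):
    indices = [1]                                # 1 marks the start of a sequence
    for w in text.lower().split():
        idx = word_index.get(w, -1) + 3          # same +3 offset used when decoding
        indices.append(idx if 2 < idx < num_words else 2)   # 2 means "unknown"
    return indices

print(encode_review("this movie was great", word_index))
###Output
_____no_output_____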
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
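###Markdown
A miniature version of the same multi-hot encoding with `dimension=8` instead of 10,000, just to make the `[3, 5]` example from the text visible at a glance:
###Code
def vectorize_small(sequences, dimension=8):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

print(vectorize_small([[3, 5], [0, 1, 2]]))
# [[0. 0. 0. 1. 0. 1. 0. 0.]
#  [1. 1. 1. 0. 0. 0. 0. 0.]]
###Output
_____no_output_____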
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), is it best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory, that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
<__array_function__ internals>:5: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/datasets/imdb.py:159: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])
/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/datasets/imdb.py:160: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
imdb.get_word_index()
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
# Quick demo of the NumPy fancy indexing used in vectorize_sequences below:
# assigning to a list of column indices sets all of those positions at once.
x = np.zeros((1, 10))
print("x=", x)
x[0, [2, 7, 4, 7]] = 2   # duplicate indices are simply written once
print("x=", x)
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# np.asarray returns the input array itself (no copy) when it is already an
# ndarray, so the two ids printed below match.
print(id(train_labels))
print("------------")
print(id(np.asarray(train_labels)))
print(train_labels.astype("float32"))  # astype returns a new float32 copy
train_labels
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
y_train
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), is it best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory, that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 1s 33ms/step - loss: 0.6207 - binary_accuracy: 0.6567 - val_loss: 0.4607 - val_binary_accuracy: 0.8121
Epoch 2/20
30/30 [==============================] - 0s 16ms/step - loss: 0.3751 - binary_accuracy: 0.8917 - val_loss: 0.3354 - val_binary_accuracy: 0.8790
Epoch 3/20
30/30 [==============================] - 0s 16ms/step - loss: 0.2490 - binary_accuracy: 0.9308 - val_loss: 0.2943 - val_binary_accuracy: 0.8848
Epoch 4/20
30/30 [==============================] - 0s 16ms/step - loss: 0.1863 - binary_accuracy: 0.9473 - val_loss: 0.2847 - val_binary_accuracy: 0.8878
Epoch 5/20
30/30 [==============================] - 1s 17ms/step - loss: 0.1461 - binary_accuracy: 0.9580 - val_loss: 0.2835 - val_binary_accuracy: 0.8852
Epoch 6/20
30/30 [==============================] - 1s 17ms/step - loss: 0.1179 - binary_accuracy: 0.9685 - val_loss: 0.2954 - val_binary_accuracy: 0.8834
Epoch 7/20
30/30 [==============================] - 1s 17ms/step - loss: 0.0956 - binary_accuracy: 0.9746 - val_loss: 0.3276 - val_binary_accuracy: 0.8787
Epoch 8/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0796 - binary_accuracy: 0.9808 - val_loss: 0.3526 - val_binary_accuracy: 0.8746
Epoch 9/20
30/30 [==============================] - 1s 17ms/step - loss: 0.0655 - binary_accuracy: 0.9840 - val_loss: 0.3907 - val_binary_accuracy: 0.8713
Epoch 10/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0511 - binary_accuracy: 0.9887 - val_loss: 0.3842 - val_binary_accuracy: 0.8778
Epoch 11/20
30/30 [==============================] - 1s 17ms/step - loss: 0.0389 - binary_accuracy: 0.9925 - val_loss: 0.4124 - val_binary_accuracy: 0.8766
Epoch 12/20
30/30 [==============================] - 1s 17ms/step - loss: 0.0288 - binary_accuracy: 0.9951 - val_loss: 0.4427 - val_binary_accuracy: 0.8756
Epoch 13/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0223 - binary_accuracy: 0.9960 - val_loss: 0.4777 - val_binary_accuracy: 0.8737
Epoch 14/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0140 - binary_accuracy: 0.9987 - val_loss: 0.5047 - val_binary_accuracy: 0.8720
Epoch 15/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0109 - binary_accuracy: 0.9993 - val_loss: 0.5429 - val_binary_accuracy: 0.8713
Epoch 16/20
30/30 [==============================] - 0s 17ms/step - loss: 0.0118 - binary_accuracy: 0.9981 - val_loss: 0.5733 - val_binary_accuracy: 0.8722
Epoch 17/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0091 - binary_accuracy: 0.9981 - val_loss: 0.6003 - val_binary_accuracy: 0.8710
Epoch 18/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0047 - binary_accuracy: 0.9998 - val_loss: 0.6369 - val_binary_accuracy: 0.8693
Epoch 19/20
30/30 [==============================] - 0s 17ms/step - loss: 0.0074 - binary_accuracy: 0.9980 - val_loss: 0.6720 - val_binary_accuracy: 0.8683
Epoch 20/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0027 - binary_accuracy: 0.9998 - val_loss: 0.6963 - val_binary_accuracy: 0.8680
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved!
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(32, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(16, activation='tanh', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
###Output
Epoch 1/4
49/49 [==============================] - 1s 12ms/step - loss: 0.5269 - accuracy: 0.7575
Epoch 2/4
49/49 [==============================] - 1s 12ms/step - loss: 0.2943 - accuracy: 0.9066
Epoch 3/4
49/49 [==============================] - 1s 12ms/step - loss: 0.2230 - accuracy: 0.9272
Epoch 4/4
49/49 [==============================] - 1s 12ms/step - loss: 0.1810 - accuracy: 0.9405
782/782 [==============================] - 2s 2ms/step - loss: 0.2862 - accuracy: 0.8841
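###Markdown
The cells above repeat almost the same code for each experiment; here is a compact sketch that loops over a few of the suggested configurations and prints the test accuracy for each (layer sizes, losses and activations taken from the list above; exact numbers will vary from run to run):
###Code
configs = [
    {'units': 16, 'depth': 1, 'loss': 'binary_crossentropy', 'act': 'relu'},
    {'units': 16, 'depth': 3, 'loss': 'binary_crossentropy', 'act': 'relu'},
    {'units': 64, 'depth': 2, 'loss': 'binary_crossentropy', 'act': 'relu'},
    {'units': 16, 'depth': 1, 'loss': 'mse', 'act': 'relu'},
    {'units': 16, 'depth': 1, 'loss': 'binary_crossentropy', 'act': 'tanh'},
]

for cfg in configs:
    m = models.Sequential()
    m.add(layers.Dense(cfg['units'], activation=cfg['act'], input_shape=(10000,)))
    for _ in range(cfg['depth'] - 1):
        m.add(layers.Dense(cfg['units'], activation=cfg['act']))
    m.add(layers.Dense(1, activation='sigmoid'))
    m.compile(optimizer='rmsprop', loss=cfg['loss'], metrics=['accuracy'])
    m.fit(x_train, y_train, epochs=4, batch_size=512, verbose=0)
    test_loss, test_acc = m.evaluate(x_test, y_test, verbose=0)
    print(cfg, '-> test accuracy: %.4f' % test_acc)
###Output
_____no_output_____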
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
# Load the IMDb dataset.
# num_words=10000 keeps only the 10000 most frequent words in the training data and discards the rest.
# train_labels and test_labels are lists of 0s and 1s: 0 is negative, 1 is positive.
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
# The data consists of words encoded as integer indices, so each review is a vector of index values.
# The labels take the value 0 or 1, where 0 is negative and 1 is positive.
print ('train_data.shape : ', train_data.shape) # (25000,)
print ('train_labels.shape : ', train_labels.shape) # (25000,)
print ('test_data.shape : ', test_data.shape) # (25000,)
print ('test_labels.shape : ', test_labels.shape) # (25000,)
# train_data[0]
# train_labels
train_labels[0]
###Output
_____no_output_____
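###Markdown
A quick sanity check of the 50/50 split mentioned above, plus a look at how much review lengths vary (which is why a fixed-size encoding is built below):
###Code
import numpy as np

print('positive fraction (train):', np.mean(train_labels))   # expect 0.5
print('positive fraction (test) :', np.mean(test_labels))    # expect 0.5

lengths = [len(seq) for seq in train_data]
print('review length: min=%d, mean=%.1f, max=%d'
      % (min(lengths), np.mean(lengths), max(lengths)))
###Output
_____no_output_____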
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
# Because the vocabulary is restricted to the 10000 most frequent words, no word index exceeds 10000.
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to integer indices.
# Let's convert the integer-encoded data back to text to inspect it.
# To do that, we download the JSON data whose keys are words and whose values are indices:
# ~/.keras/datasets/imdb_word_index.json
# The JSON file looks like this:
"""
{
"fawn": 34701,
"tsukino": 52006,
"nunnery": 52007,
"sonja": 16816,
"vani": 63951,
"woods": 1408,
"spiders": 16115,
"hanging": 2345,
:
}
"""
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# print ('word_index : ', word_index)
# Swap keys and values so the mapping has the form index_value : word
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# print ('reverse_word_index : ', reverse_word_index)
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
# Prepare the data to be fed to the neural network.
# Convert the integer sequences into a binary matrix.
# The lists have to be turned into tensors, i.e. one-hot (multi-hot) vectorized.
# Here the one-hot vectors have dimension 10000, with a 1 assigned to every word that appears.
# A one-hot representation is one in which a single element is 1 and all other elements are 0.
# It is normally used to represent words or characters; sentences and documents are represented as sequences of them.
# Converting to one-hot vectors discards information such as word frequency and word order.
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
print ('x_train.shape : ', x_train.shape) # (25000, 10000)
print ('x_test.shape : ', x_test.shape) # (25000, 10000)
###Output
x_train.shape : (25000, 10000)
x_test.shape : (25000, 10000)
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
print ('y_train.shape : ', y_train.shape) # (25000, )
print ('y_test.shape : ', y_test.shape) # (25000, )
###Output
y_train.shape : (25000,)
y_test.shape : (25000,)
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
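# Optional sanity check: print the layer output shapes and parameter counts of the stack
# defined above. The first Dense layer alone holds 10000 * 16 + 16 = 160,016 weights.
model.summary()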
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
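# For intuition only -- a hedged NumPy sketch of what binary crossentropy computes for a
# small batch of predicted probabilities `p` against targets `y` (toy numbers, not the
# Keras internals):
import numpy as np
y = np.array([1., 0., 1.])
p = np.array([0.9, 0.2, 0.6])
print('toy binary crossentropy:', -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))  # ~0.28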
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
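# A hedged sketch of a fully custom metric: any function taking (y_true, y_pred) tensors
# and returning a tensor can go in the `metrics` list. The name `positive_rate` below is
# our own illustrative choice, not a Keras built-in.
from keras import backend as K

def positive_rate(y_true, y_pred):
    # fraction of samples predicted positive at a 0.5 threshold
    return K.mean(K.cast(K.greater(y_pred, 0.5), 'float32'))

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=[metrics.binary_accuracy, positive_rate])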
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
print ('x_val.shape : ', x_val.shape) # (10000, 10000)
print ('partial_x_train.shape : ', partial_x_train.shape) # (15000, 10000)
print ('y_val.shape : ', y_val.shape) # (10000, )
print ('partial_y_train.shape : ', partial_y_train.shape) # (15000, )
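# The IMDB training set ships pre-shuffled, so the plain slices above are fine here. For
# data with any ordering (by class, by date, ...) you would shuffle before splitting.
# A hedged sketch using throwaway names so the arrays above are left untouched:
shuffle_idx = np.random.permutation(len(x_train))
x_val_shuffled = x_train[shuffle_idx[:10000]]
y_val_shuffled = y_train[shuffle_idx[:10000]]
print ('x_val_shuffled.shape : ', x_val_shuffled.shape)  # (10000, 10000)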
###Output
x_val.shape : (10000, 10000)
partial_x_train.shape : (15000, 10000)
y_val.shape : (10000,)
partial_y_train.shape : (15000,)
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
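# Rather than hand-picking the number of epochs from the curves plotted below, Keras can
# stop training automatically once the validation loss stops improving. A hedged sketch
# (not used in the run recorded below); it would be passed via fit(..., callbacks=[early_stop]):
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)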
###Output
Epoch 1/20
30/30 [==============================] - 2s 80ms/step - loss: 0.5037 - acc: 0.7915 - val_loss: 0.3808 - val_acc: 0.8710
Epoch 2/20
30/30 [==============================] - 1s 19ms/step - loss: 0.3019 - acc: 0.9048 - val_loss: 0.3066 - val_acc: 0.8856
Epoch 3/20
30/30 [==============================] - 0s 17ms/step - loss: 0.2202 - acc: 0.9321 - val_loss: 0.2770 - val_acc: 0.8939
Epoch 4/20
30/30 [==============================] - 0s 16ms/step - loss: 0.1750 - acc: 0.9439 - val_loss: 0.2772 - val_acc: 0.8893
Epoch 5/20
30/30 [==============================] - 0s 17ms/step - loss: 0.1430 - acc: 0.9560 - val_loss: 0.2904 - val_acc: 0.8839
Epoch 6/20
30/30 [==============================] - 1s 18ms/step - loss: 0.1155 - acc: 0.9667 - val_loss: 0.2929 - val_acc: 0.8883
Epoch 7/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0961 - acc: 0.9720 - val_loss: 0.3098 - val_acc: 0.8841
Epoch 8/20
30/30 [==============================] - 1s 17ms/step - loss: 0.0800 - acc: 0.9776 - val_loss: 0.3290 - val_acc: 0.8825
Epoch 9/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0626 - acc: 0.9842 - val_loss: 0.3847 - val_acc: 0.8766
Epoch 10/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0525 - acc: 0.9874 - val_loss: 0.3866 - val_acc: 0.8728
Epoch 11/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0412 - acc: 0.9914 - val_loss: 0.4322 - val_acc: 0.8768
Epoch 12/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0312 - acc: 0.9940 - val_loss: 0.4509 - val_acc: 0.8668
Epoch 13/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0239 - acc: 0.9959 - val_loss: 0.5239 - val_acc: 0.8683
Epoch 14/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0207 - acc: 0.9969 - val_loss: 0.5041 - val_acc: 0.8736
Epoch 15/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0142 - acc: 0.9981 - val_loss: 0.5358 - val_acc: 0.8711
Epoch 16/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0085 - acc: 0.9996 - val_loss: 0.5729 - val_acc: 0.8706
Epoch 17/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0105 - acc: 0.9982 - val_loss: 0.6105 - val_acc: 0.8705
Epoch 18/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0059 - acc: 0.9992 - val_loss: 0.7322 - val_acc: 0.8504
Epoch 19/20
30/30 [==============================] - 0s 15ms/step - loss: 0.0039 - acc: 0.9997 - val_loss: 0.6637 - val_acc: 0.8688
Epoch 20/20
30/30 [==============================] - 0s 16ms/step - loss: 0.0028 - acc: 0.9999 - val_loss: 0.7854 - val_acc: 0.8493
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# ****** It crashes here -- why? ******
# "bo" is for "blue dot"
# This cell ran fine in the "py35_Keras" environment built on Python 3.5.6, so the failure
# appears to be specific to Python 3.6 and later.
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Plot the loss values for the training and validation data
# Dots show the training loss; the solid line shows the validation loss
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('acc')
plt.legend()
plt.show()
# Plot the accuracy on the training and validation data
###Output
_____no_output_____
###Markdown
Even when a model performs well on the training data, it does not necessarily perform well on completely new data it has never seen. This is an example of overfitting: after the second epoch, this model has started overfitting the training data. The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
# Retrain the model from scratch
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
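# To turn these probabilities into hard class labels, threshold them at 0.5 (a common
# convention for a sigmoid output; the cut-off itself is our choice, not the model's):
(model.predict(x_test) > 0.5).astype('int32')[:10]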
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or fewer hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! ConclusionsHere's what you should take away from this example:* There's usually quite a bit of preprocessing you need to do on your raw data in order to be able to feed it -- as tensors -- into a neural network. In the case of sequences of words, they can be encoded as binary vectors -- but there are other encoding options too.* Stacks of `Dense` layers with `relu` activations can solve a wide range of problems (including sentiment classification), and you will likely use them frequently.* In a binary classification problem (two output classes), your network should end with a `Dense` layer with 1 unit and a `sigmoid` activation, i.e. the output of your network should be a scalar between 0 and 1, encoding a probability.* With such a scalar sigmoid output, on a binary classification problem, the loss function you should use is `binary_crossentropy`.* The `rmsprop` optimizer is generally a good enough choice of optimizer, whatever your problem. That's one less thing for you to worry about.* As they get better on their training data, neural networks eventually start _overfitting_ and end up obtaining increasingly worse results on data never-seen-before. Make sure to always monitor performance on data that is outside of the training set. Retraining the model with other settings to compare the results* Increase the hidden units in the two intermediate layers from 16 -> 32
###Code
model = models.Sequential()
model.add(layers.Dense(32, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, y_test)
results
import matplotlib.pyplot as plt
%matplotlib inline
history_dict = history.history
history_dict.keys()
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
history_dict = history.history
history_dict.keys()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('acc')
plt.legend()
plt.show()
# Plot the accuracy on the training and validation data
###Output
_____no_output_____
###Markdown
Retraining the model again* Make the hidden layers and hidden units more complex* Switch the loss function from binary_crossentropy to mse
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='mse',
metrics=['acc'])
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
history_dict = history.history
history_dict.keys()
results = model.evaluate(x_test, y_test)
results
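# `results` holds [test loss, test accuracy] in the order given to compile(). A hedged
# one-liner to keep the experiments comparable (accuracy only, since this run uses mse
# while the earlier runs used binary crossentropy):
print('deeper model with mse loss -- test accuracy: {:.4f}'.format(results[1]))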
history_dict = history.history
history_dict.keys()
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
history_dict = history.history
history_dict.keys()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('acc')
plt.legend()
plt.show()
# Plot the accuracy on the training and validation data
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 3s 0us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
/opt/conda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 4s - loss: 0.5160 - binary_accuracy: 0.7894 - val_loss: 0.4025 - val_binary_accuracy: 0.8647
Epoch 2/20
15000/15000 [==============================] - 3s - loss: 0.3148 - binary_accuracy: 0.9029 - val_loss: 0.3250 - val_binary_accuracy: 0.8786
Epoch 3/20
15000/15000 [==============================] - 4s - loss: 0.2321 - binary_accuracy: 0.9249 - val_loss: 0.2806 - val_binary_accuracy: 0.8921
Epoch 4/20
15000/15000 [==============================] - 4s - loss: 0.1818 - binary_accuracy: 0.9427 - val_loss: 0.2728 - val_binary_accuracy: 0.8905
Epoch 5/20
15000/15000 [==============================] - 4s - loss: 0.1499 - binary_accuracy: 0.9517 - val_loss: 0.2778 - val_binary_accuracy: 0.8885
Epoch 6/20
15000/15000 [==============================] - 2s - loss: 0.1212 - binary_accuracy: 0.9629 - val_loss: 0.3236 - val_binary_accuracy: 0.8797
Epoch 7/20
15000/15000 [==============================] - 3s - loss: 0.1035 - binary_accuracy: 0.9689 - val_loss: 0.3037 - val_binary_accuracy: 0.8850
Epoch 8/20
15000/15000 [==============================] - 4s - loss: 0.0848 - binary_accuracy: 0.9761 - val_loss: 0.3352 - val_binary_accuracy: 0.8774
Epoch 9/20
15000/15000 [==============================] - 4s - loss: 0.0727 - binary_accuracy: 0.9804 - val_loss: 0.3590 - val_binary_accuracy: 0.8799
Epoch 10/20
15000/15000 [==============================] - 4s - loss: 0.0580 - binary_accuracy: 0.9862 - val_loss: 0.3725 - val_binary_accuracy: 0.8800
Epoch 11/20
15000/15000 [==============================] - 4s - loss: 0.0488 - binary_accuracy: 0.9886 - val_loss: 0.3974 - val_binary_accuracy: 0.8781
Epoch 12/20
15000/15000 [==============================] - 4s - loss: 0.0385 - binary_accuracy: 0.9923 - val_loss: 0.4407 - val_binary_accuracy: 0.8773
Epoch 13/20
15000/15000 [==============================] - 3s - loss: 0.0300 - binary_accuracy: 0.9946 - val_loss: 0.4498 - val_binary_accuracy: 0.8746
Epoch 14/20
15000/15000 [==============================] - 3s - loss: 0.0241 - binary_accuracy: 0.9959 - val_loss: 0.4798 - val_binary_accuracy: 0.8735
Epoch 15/20
15000/15000 [==============================] - 4s - loss: 0.0190 - binary_accuracy: 0.9973 - val_loss: 0.5807 - val_binary_accuracy: 0.8651
Epoch 16/20
15000/15000 [==============================] - 3s - loss: 0.0129 - binary_accuracy: 0.9991 - val_loss: 0.5529 - val_binary_accuracy: 0.8716
Epoch 17/20
15000/15000 [==============================] - 2s - loss: 0.0109 - binary_accuracy: 0.9994 - val_loss: 0.6040 - val_binary_accuracy: 0.8697
Epoch 18/20
15000/15000 [==============================] - 2s - loss: 0.0098 - binary_accuracy: 0.9987 - val_loss: 0.6109 - val_binary_accuracy: 0.8697
Epoch 19/20
15000/15000 [==============================] - 3s - loss: 0.0066 - binary_accuracy: 0.9993 - val_loss: 0.6402 - val_binary_accuracy: 0.8666
Epoch 20/20
15000/15000 [==============================] - 4s - loss: 0.0042 - binary_accuracy: 0.9999 - val_loss: 0.6898 - val_binary_accuracy: 0.8675
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, **k)
# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# restore np.load for future normal usage
np.load = np_load_old
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 29s 2us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
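###Markdown
As an optional sanity check, `model.summary()` prints the layer output shapes and parameter counts, which makes the `(input_dimension, 16)` weight shape discussed above concrete:
###Code
# Optional: inspect layer shapes and parameter counts.
# The first Dense layer alone has 10000 * 16 weights + 16 biases = 160,016 parameters.
model.summary()
###Output
_____no_output_____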
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
_____no_output_____
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
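###Markdown
Rather than hard-coding four epochs after inspecting the curves, we could also let training stop itself once the validation loss stops improving. Below is a minimal sketch using an `EarlyStopping` callback, assuming a Keras version whose `EarlyStopping` supports `restore_best_weights`; the `model_es` name is just for illustration:
###Code
# Sketch: stop training automatically when validation loss stops improving.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',        # watch validation loss
                           patience=2,                # tolerate 2 epochs without improvement
                           restore_best_weights=True) # roll back to the best epoch

model_es = models.Sequential()
model_es.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model_es.add(layers.Dense(16, activation='relu'))
model_es.add(layers.Dense(1, activation='sigmoid'))
model_es.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model_es.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stop])
###Output
_____no_output_____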
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
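###Markdown
The predicted values are probabilities; a small sketch of turning them into hard class labels with a 0.5 threshold and checking the accuracy by hand (it should match `model.evaluate`):
###Code
import numpy as np

# Threshold the predicted probabilities at 0.5 to obtain hard 0/1 labels,
# then compute accuracy manually as a cross-check against model.evaluate().
probs = model.predict(x_test)
predicted_labels = (probs > 0.5).astype('float32').reshape(-1)
manual_accuracy = np.mean(predicted_labels == y_test)
manual_accuracy
###Output
_____no_output_____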
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
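###Markdown
Alongside the plots, we can also locate the best epoch programmatically; a small sketch (for the run above, the validation loss is lowest at epoch 4):
###Code
import numpy as np

# Find the 1-indexed epoch with the lowest validation loss.
best_epoch = np.argmin(history.history['val_loss']) + 1
best_epoch, min(history.history['val_loss'])
###Output
_____no_output_____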
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
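###Markdown
As a rough sketch of scoring a brand-new review rather than one from the test set, we can reuse `word_index` and `vectorize_sequences`. The `encode_review` helper below is hypothetical, and its tokenization is deliberately naive (lower-casing and splitting on whitespace):
###Code
# Hypothetical helper: encode a raw review string with the same 10,000-word,
# offset-by-3 index convention used by the dataset, then score it.
def encode_review(text, dimension=10000):
    indices = []
    for word in text.lower().split():
        idx = word_index.get(word)
        if idx is not None and idx + 3 < dimension:
            indices.append(idx + 3)  # offset of 3 for the reserved indices
    return vectorize_sequences([indices], dimension=dimension)

model.predict(encode_review("this movie was a wonderful surprise"))
###Output
_____no_output_____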
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
# from keras import optimizers
# model.compile(optimizer=optimizers.RMSprop(lr=0.001),
# loss='binary_crossentropy',
# metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
# from keras import losses
# from keras import metrics
# model.compile(optimizer=optimizers.RMSprop(lr=0.001),
# loss=losses.binary_crossentropy,
# metrics=[metrics.binary_accuracy])
# https://stackoverflow.com/questions/59612914/difference-about-binarycrossentropy-and-binary-crossentropy-in-tf-keras-loss
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 1s 28ms/step - loss: 0.5023 - accuracy: 0.7772 - val_loss: 0.3825 - val_accuracy: 0.8584
Epoch 2/20
30/30 [==============================] - 1s 18ms/step - loss: 0.2879 - accuracy: 0.9045 - val_loss: 0.2998 - val_accuracy: 0.8852
Epoch 3/20
30/30 [==============================] - 1s 18ms/step - loss: 0.2127 - accuracy: 0.9305 - val_loss: 0.2774 - val_accuracy: 0.8893
Epoch 4/20
30/30 [==============================] - 1s 18ms/step - loss: 0.1677 - accuracy: 0.9464 - val_loss: 0.3280 - val_accuracy: 0.8685
Epoch 5/20
30/30 [==============================] - 1s 18ms/step - loss: 0.1397 - accuracy: 0.9565 - val_loss: 0.2843 - val_accuracy: 0.8876
Epoch 6/20
30/30 [==============================] - 1s 18ms/step - loss: 0.1127 - accuracy: 0.9663 - val_loss: 0.3151 - val_accuracy: 0.8782
Epoch 7/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0941 - accuracy: 0.9732 - val_loss: 0.3140 - val_accuracy: 0.8820
Epoch 8/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0785 - accuracy: 0.9775 - val_loss: 0.3333 - val_accuracy: 0.8813
Epoch 9/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0660 - accuracy: 0.9829 - val_loss: 0.3556 - val_accuracy: 0.8788
Epoch 10/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0532 - accuracy: 0.9871 - val_loss: 0.3824 - val_accuracy: 0.8763
Epoch 11/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0415 - accuracy: 0.9901 - val_loss: 0.4686 - val_accuracy: 0.8642
Epoch 12/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0333 - accuracy: 0.9931 - val_loss: 0.4385 - val_accuracy: 0.8746
Epoch 13/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0268 - accuracy: 0.9950 - val_loss: 0.4675 - val_accuracy: 0.8721
Epoch 14/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0211 - accuracy: 0.9965 - val_loss: 0.4975 - val_accuracy: 0.8726
Epoch 15/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0185 - accuracy: 0.9970 - val_loss: 0.5324 - val_accuracy: 0.8674
Epoch 16/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0101 - accuracy: 0.9993 - val_loss: 0.5796 - val_accuracy: 0.8627
Epoch 17/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0115 - accuracy: 0.9978 - val_loss: 0.5981 - val_accuracy: 0.8696
Epoch 18/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0096 - accuracy: 0.9982 - val_loss: 0.6347 - val_accuracy: 0.8654
Epoch 19/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0040 - accuracy: 0.9999 - val_loss: 0.6576 - val_accuracy: 0.8666
Epoch 20/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0076 - accuracy: 0.9984 - val_loss: 0.6860 - val_accuracy: 0.8662
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss') #'bo' means blue dots
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.1732 - accuracy: 0.9387
results = model.evaluate(x_test, y_test)
# results [0.3075881600379944, 0.8782399892807007] loss: 0.3076 - accuracy: 0.8782
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.2878 - accuracy: 0.8869
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or fewer hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! Using one hidden layer
###Code
# from keras import models
# from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512) #loss: 0.1861 - accuracy: 0.9368
###Output
Epoch 1/4
49/49 [==============================] - 1s 11ms/step - loss: 0.4399 - accuracy: 0.8289
Epoch 2/4
49/49 [==============================] - 1s 11ms/step - loss: 0.2687 - accuracy: 0.9074
Epoch 3/4
49/49 [==============================] - 1s 11ms/step - loss: 0.2147 - accuracy: 0.9250
Epoch 4/4
49/49 [==============================] - 1s 11ms/step - loss: 0.1817 - accuracy: 0.9392
###Markdown
Evaluating on the test set with `results = model.evaluate(x_test, y_test)` gives an accuracy of about 88.4%.
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.1608 - accuracy: 0.9424
results = model.evaluate(x_test, y_test)
results #loss: 0.3261 - accuracy: 0.8760
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.3261 - accuracy: 0.8760
###Markdown
Adding more units to 3-layer NN
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.1428 - accuracy: 0.9468
results = model.evaluate(x_test, y_test)
results #loss: 0.3222 - accuracy: 0.8783
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.3222 - accuracy: 0.8783
###Markdown
Using a different loss function
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.0493 - accuracy: 0.9418
results = model.evaluate(x_test, y_test)
results #loss: 0.0896 - accuracy: 0.8770
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.0896 - accuracy: 0.8770
###Markdown
Using a different activation function
###Code
model = models.Sequential()
# model.add(layers.Dense(16, activation='tanh', input_shape=(10000,)))
# model.add(layers.Dense(16, activation='tanh'))
model.add(layers.Dense(16, activation='elu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='elu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.0421 - accuracy: 0.9476 for tanh
# loss: 0.0436 - accuracy: 0.9461 for elu
results = model.evaluate(x_test, y_test)
results
# loss: 0.0991 - accuracy: 0.8694 for tanh
# loss: 0.0899 - accuracy: 0.8798 for elu
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.0899 - accuracy: 0.8798
###Markdown
Using a different optimizer
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
# loss: 0.0401 - accuracy: 0.9563
results = model.evaluate(x_test, y_test)
results
# loss: 0.0887 - accuracy: 0.8807
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.0887 - accuracy: 0.8807
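###Markdown
The experiments above rebuild nearly identical models by hand each time. A hypothetical helper such as the sketch below (the `run_experiment` name and its defaults are ours, not from the text) makes these comparisons less repetitive:
###Code
# Hypothetical convenience wrapper around the build/compile/fit/evaluate boilerplate.
# Returns (test_loss, test_accuracy).
def run_experiment(hidden_layers=2, units=16, activation='relu',
                   loss='binary_crossentropy', optimizer='rmsprop',
                   epochs=4, batch_size=512):
    m = models.Sequential()
    m.add(layers.Dense(units, activation=activation, input_shape=(10000,)))
    for _ in range(hidden_layers - 1):
        m.add(layers.Dense(units, activation=activation))
    m.add(layers.Dense(1, activation='sigmoid'))
    m.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
    m.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=0)
    return m.evaluate(x_test, y_test, verbose=0)

run_experiment(units=32)  # e.g. two hidden layers of 32 units each
###Output
_____no_output_____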
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 1s 0us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from tensorflow.keras import losses
from tensorflow.keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
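###Markdown
As an illustration of the "function objects" idea, here is a hedged sketch (not from the original text) of passing a tiny custom metric alongside the built-in one; the name `mean_pred` is made up for this example, and it simply reports the average predicted probability per batch using Keras backend operations.
###Code
from tensorflow.keras import backend as K

def mean_pred(y_true, y_pred):
    # a toy custom metric: the mean predicted probability in the batch
    return K.mean(y_pred)

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=[metrics.binary_accuracy, mean_pred])
###Output
_____no_output_____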
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data and end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
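###Markdown
Rather than hard-coding the number of epochs, you could also let Keras stop training automatically once the validation loss stops improving. The following is only a sketch (not part of the original notebook) using the `EarlyStopping` callback with the validation split defined earlier; the `patience` value of 2 is an arbitrary choice for illustration.
###Code
from tensorflow.keras.callbacks import EarlyStopping

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# stop once val_loss has not improved for 2 consecutive epochs,
# then roll back to the best weights seen during training
early_stop = EarlyStopping(monitor='val_loss', patience=2,
                           restore_best_weights=True)
model.fit(partial_x_train, partial_y_train,
          epochs=20, batch_size=512,
          validation_data=(x_val, y_val),
          callbacks=[early_stop])
model.evaluate(x_test, y_test)
###Output
_____no_output_____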
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
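###Markdown
These predictions are probabilities between 0 and 1. If you need hard class labels, you can threshold them at 0.5 -- a small sketch (not in the original notebook) that also recomputes the test accuracy by hand with NumPy as a sanity check.
###Code
import numpy as np

probabilities = model.predict(x_test)                      # shape (25000, 1)
predicted_classes = (probabilities > 0.5).astype('float32').ravel()
print('manually computed test accuracy:', np.mean(predicted_classes == y_test))
###Output
_____no_output_____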
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Using TensorFlow backend.
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
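###Markdown
Since each review is a variable-length list of word indices, it can be useful to get a feel for how long the reviews are before deciding how to encode them -- a quick exploratory sketch that is not part of the original notebook.
###Code
lengths = [len(sequence) for sequence in train_data]
print('shortest review:', min(lengths), 'words')
print('longest review:', max(lengths), 'words')
print('average length: %.1f words' % (sum(lengths) / len(lengths)))
###Output
_____no_output_____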
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
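###Markdown
To make the one-hot encoding concrete, here is a tiny sketch (not in the original notebook) applying `vectorize_sequences` to the `[3, 5]` example from the explanation above, with a 10-dimensional vocabulary so the result is easy to read.
###Code
# the toy sequence [3, 5] becomes an indicator vector with ones at indices 3 and 5
print(vectorize_sequences([[3, 5]], dimension=10))
# the real training data is a (25000, 10000) matrix of such vectors
print(x_train.shape)
###Output
_____no_output_____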
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 12s 781us/step - loss: 0.4991 - binary_accuracy: 0.7967 - val_loss: 0.3775 - val_binary_accuracy: 0.8717
Epoch 2/20
15000/15000 [==============================] - 4s 235us/step - loss: 0.2999 - binary_accuracy: 0.9031 - val_loss: 0.3150 - val_binary_accuracy: 0.8782
Epoch 3/20
15000/15000 [==============================] - 3s 218us/step - loss: 0.2187 - binary_accuracy: 0.9298 - val_loss: 0.2769 - val_binary_accuracy: 0.8904
Epoch 4/20
15000/15000 [==============================] - 3s 217us/step - loss: 0.1747 - binary_accuracy: 0.9426 - val_loss: 0.2777 - val_binary_accuracy: 0.8888
Epoch 5/20
15000/15000 [==============================] - 3s 219us/step - loss: 0.1394 - binary_accuracy: 0.9554 - val_loss: 0.2804 - val_binary_accuracy: 0.8872
Epoch 6/20
15000/15000 [==============================] - 3s 217us/step - loss: 0.1155 - binary_accuracy: 0.9657 - val_loss: 0.3009 - val_binary_accuracy: 0.8825
Epoch 7/20
15000/15000 [==============================] - 3s 220us/step - loss: 0.0959 - binary_accuracy: 0.9722 - val_loss: 0.3192 - val_binary_accuracy: 0.8819
Epoch 8/20
15000/15000 [==============================] - 3s 222us/step - loss: 0.0804 - binary_accuracy: 0.9758 - val_loss: 0.3625 - val_binary_accuracy: 0.8749
Epoch 9/20
15000/15000 [==============================] - 3s 230us/step - loss: 0.0642 - binary_accuracy: 0.9827 - val_loss: 0.3592 - val_binary_accuracy: 0.8768
Epoch 10/20
15000/15000 [==============================] - 3s 223us/step - loss: 0.0556 - binary_accuracy: 0.9852 - val_loss: 0.3776 - val_binary_accuracy: 0.8766
Epoch 11/20
15000/15000 [==============================] - 3s 226us/step - loss: 0.0432 - binary_accuracy: 0.9895 - val_loss: 0.4054 - val_binary_accuracy: 0.8767
Epoch 12/20
15000/15000 [==============================] - 3s 219us/step - loss: 0.0344 - binary_accuracy: 0.9931 - val_loss: 0.4330 - val_binary_accuracy: 0.8745
Epoch 13/20
15000/15000 [==============================] - 3s 217us/step - loss: 0.0286 - binary_accuracy: 0.9940 - val_loss: 0.4635 - val_binary_accuracy: 0.8742
Epoch 14/20
15000/15000 [==============================] - 3s 219us/step - loss: 0.0240 - binary_accuracy: 0.9945 - val_loss: 0.4959 - val_binary_accuracy: 0.8718
Epoch 15/20
15000/15000 [==============================] - 3s 218us/step - loss: 0.0194 - binary_accuracy: 0.9965 - val_loss: 0.5265 - val_binary_accuracy: 0.8708
Epoch 16/20
15000/15000 [==============================] - 3s 217us/step - loss: 0.0105 - binary_accuracy: 0.9994 - val_loss: 0.5594 - val_binary_accuracy: 0.8689
Epoch 17/20
15000/15000 [==============================] - 3s 218us/step - loss: 0.0134 - binary_accuracy: 0.9980 - val_loss: 0.5911 - val_binary_accuracy: 0.8693
Epoch 18/20
15000/15000 [==============================] - 3s 219us/step - loss: 0.0063 - binary_accuracy: 0.9998 - val_loss: 0.6330 - val_binary_accuracy: 0.8677
Epoch 19/20
15000/15000 [==============================] - 3s 217us/step - loss: 0.0069 - binary_accuracy: 0.9995 - val_loss: 0.6634 - val_binary_accuracy: 0.8681
Epoch 20/20
15000/15000 [==============================] - 3s 216us/step - loss: 0.0071 - binary_accuracy: 0.9989 - val_loss: 0.7030 - val_binary_accuracy: 0.8668
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
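###Markdown
Before plotting, you can also read the best epoch straight off this dictionary -- a short sketch (not from the original text) that finds the epoch with the lowest validation loss.
###Code
import numpy as np

val_losses = history_dict['val_loss']
best_epoch = int(np.argmin(val_losses)) + 1   # epochs are 1-indexed in the logs
print('lowest validation loss %.4f at epoch %d' % (min(val_losses), best_epoch))
###Output
_____no_output_____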
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data and end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
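###Markdown
If you plan to inspect several reviews, it may be convenient to wrap the decoding logic above in a small helper; `decode_review` is just an illustrative name, not part of the original notebook.
###Code
def decode_review(index):
    # rebuild the text of review `index`, using '?' for the reserved/unknown indices
    return ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[index]])

print(decode_review(1)[:100])   # first 100 characters of the second review
print(train_labels[1])          # its label (1 = positive, 0 = negative)
###Output
_____no_output_____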
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['acc'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
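###Markdown
As an aside, `model.fit` can also carve out a validation set for you via its `validation_split` argument; it holds out the last fraction of the samples you pass in, without shuffling first, so the explicit slicing above keeps things transparent. A hedged sketch of both ideas:
###Code
# sanity check: the manual split leaves 15,000 samples for training
print(partial_x_train.shape, x_val.shape)

# an alternative would be to let Keras hold out the last 40% itself:
# history = model.fit(x_train, y_train, epochs=20, batch_size=512,
#                     validation_split=0.4)
###Output
_____no_output_____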
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data and end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
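###Markdown
Because the dataset is described as perfectly balanced, a one-line sanity check (not in the original notebook) is cheap: the mean of a 0/1 label vector is simply the fraction of positive examples, which should be close to 0.5 for both splits.
###Code
print('fraction of positive reviews (train): %.3f' % y_train.mean())
print('fraction of positive reviews (test):  %.3f' % y_test.mean())
###Output
_____no_output_____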
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 100us/step - loss: 0.5084 - binary_accuracy: 0.7811 - val_loss: 0.3797 - val_binary_accuracy: 0.8684
Epoch 2/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.3005 - binary_accuracy: 0.9047 - val_loss: 0.3004 - val_binary_accuracy: 0.8896
Epoch 3/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.2179 - binary_accuracy: 0.9288 - val_loss: 0.3085 - val_binary_accuracy: 0.8713
Epoch 4/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.1751 - binary_accuracy: 0.9436 - val_loss: 0.2839 - val_binary_accuracy: 0.8833
Epoch 5/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.1427 - binary_accuracy: 0.9543 - val_loss: 0.2841 - val_binary_accuracy: 0.8872
Epoch 6/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.1150 - binary_accuracy: 0.9650 - val_loss: 0.3160 - val_binary_accuracy: 0.8770
Epoch 7/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0980 - binary_accuracy: 0.9707 - val_loss: 0.3127 - val_binary_accuracy: 0.8842
Epoch 8/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0807 - binary_accuracy: 0.9763 - val_loss: 0.3858 - val_binary_accuracy: 0.8650
Epoch 9/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0661 - binary_accuracy: 0.9821 - val_loss: 0.3636 - val_binary_accuracy: 0.8778
Epoch 10/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0566 - binary_accuracy: 0.9852 - val_loss: 0.3845 - val_binary_accuracy: 0.8800
Epoch 11/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0425 - binary_accuracy: 0.9905 - val_loss: 0.4123 - val_binary_accuracy: 0.8774
Epoch 12/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0374 - binary_accuracy: 0.9922 - val_loss: 0.4614 - val_binary_accuracy: 0.8674
Epoch 13/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0299 - binary_accuracy: 0.9931 - val_loss: 0.4715 - val_binary_accuracy: 0.8730
Epoch 14/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0247 - binary_accuracy: 0.9946 - val_loss: 0.5055 - val_binary_accuracy: 0.8718
Epoch 15/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0189 - binary_accuracy: 0.9966 - val_loss: 0.5327 - val_binary_accuracy: 0.8713
Epoch 16/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0164 - binary_accuracy: 0.9969 - val_loss: 0.5662 - val_binary_accuracy: 0.8691
Epoch 17/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0122 - binary_accuracy: 0.9982 - val_loss: 0.5979 - val_binary_accuracy: 0.8675
Epoch 18/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0111 - binary_accuracy: 0.9979 - val_loss: 0.6290 - val_binary_accuracy: 0.8670
Epoch 19/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0070 - binary_accuracy: 0.9993 - val_loss: 0.7389 - val_binary_accuracy: 0.8536
Epoch 20/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.0048 - binary_accuracy: 0.9998 - val_loss: 0.6858 - val_binary_accuracy: 0.8662
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
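###Markdown
As an aside, rather than hard-coding the number of epochs, training can be stopped automatically once the validation loss stops improving. The cell below is a minimal sketch added for illustration only (the `es_model` and `early_stop` names are ours); it rebuilds the same architecture and reuses `partial_x_train`, `x_val`, etc. from above, and is not part of the original example.
###Code
from keras import models
from keras import layers
from keras.callbacks import EarlyStopping

# Stop training once val_loss has failed to improve for 2 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=2)

es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])

es_model.fit(partial_x_train, partial_y_train,
             epochs=20,
             batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stop])
###Output
_____no_output_____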
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
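###Markdown
The values returned by `predict` are probabilities; they can be turned into hard class labels with a simple 0.5 threshold. A small sketch added for illustration (the `predicted_classes` name is ours, not part of the original example):
###Code
import numpy as np

# Call everything above 0.5 "positive" (1) and everything else "negative" (0).
predicted_classes = (model.predict(x_test) > 0.5).astype('int32').ravel()

# Fraction of test reviews classified correctly -- this should match the
# accuracy reported by model.evaluate() above.
np.mean(predicted_classes == y_test)
###Output
_____no_output_____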
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
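###Markdown
For comparison, the padding approach mentioned above (the one that would feed an `Embedding` layer) would look roughly like the sketch below. It is shown only for reference and is not used in the rest of this example; the `maxlen=256` value is an arbitrary choice.
###Code
from keras.preprocessing.sequence import pad_sequences

# Truncate or pad every review so that each one has exactly 256 word indices.
padded_train = pad_sequences(train_data, maxlen=256)
padded_train.shape  # (25000, 256)
###Output
_____no_output_____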
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
train_labels.dtype  # labels are plain integers here; we convert them to float32 below
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
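###Markdown
To make the loss concrete, here is a tiny hand computation of binary crossentropy for a single prediction. It is added for intuition only and uses the standard formula -(y*log(p) + (1-y)*log(1-p)) rather than Keras internals.
###Code
import numpy as np

def binary_crossentropy_single(y_true, y_pred):
    # -(y*log(p) + (1-y)*log(1-p)): small when confident and correct,
    # large when confident and wrong.
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy_single(1.0, 0.9))  # ~0.105: confident and correct
print(binary_crossentropy_single(1.0, 0.1))  # ~2.303: confident and wrong
###Output
_____no_output_____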
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 8s 525us/step - loss: 0.5090 - binary_accuracy: 0.7813 - val_loss: 0.3794 - val_binary_accuracy: 0.8686
Epoch 2/20
15000/15000 [==============================] - 5s 326us/step - loss: 0.3006 - binary_accuracy: 0.9049 - val_loss: 0.3003 - val_binary_accuracy: 0.8894
Epoch 3/20
15000/15000 [==============================] - 5s 317us/step - loss: 0.2180 - binary_accuracy: 0.9284 - val_loss: 0.3083 - val_binary_accuracy: 0.8716
Epoch 4/20
15000/15000 [==============================] - 6s 418us/step - loss: 0.1751 - binary_accuracy: 0.9433 - val_loss: 0.2836 - val_binary_accuracy: 0.8836
Epoch 5/20
15000/15000 [==============================] - 6s 408us/step - loss: 0.1426 - binary_accuracy: 0.9541 - val_loss: 0.2842 - val_binary_accuracy: 0.8873
Epoch 6/20
15000/15000 [==============================] - 5s 357us/step - loss: 0.1149 - binary_accuracy: 0.9652 - val_loss: 0.3150 - val_binary_accuracy: 0.8776
Epoch 7/20
15000/15000 [==============================] - 6s 389us/step - loss: 0.0978 - binary_accuracy: 0.9708 - val_loss: 0.3129 - val_binary_accuracy: 0.8846
Epoch 8/20
15000/15000 [==============================] - 5s 358us/step - loss: 0.0806 - binary_accuracy: 0.9765 - val_loss: 0.3863 - val_binary_accuracy: 0.8650
Epoch 9/20
15000/15000 [==============================] - 5s 358us/step - loss: 0.0660 - binary_accuracy: 0.9819 - val_loss: 0.3638 - val_binary_accuracy: 0.8779
Epoch 10/20
15000/15000 [==============================] - 6s 410us/step - loss: 0.0564 - binary_accuracy: 0.9851 - val_loss: 0.3845 - val_binary_accuracy: 0.8799
Epoch 11/20
15000/15000 [==============================] - 5s 353us/step - loss: 0.0429 - binary_accuracy: 0.9904 - val_loss: 0.4144 - val_binary_accuracy: 0.8793
Epoch 12/20
15000/15000 [==============================] - 6s 376us/step - loss: 0.0375 - binary_accuracy: 0.9923 - val_loss: 0.4604 - val_binary_accuracy: 0.8675
Epoch 13/20
15000/15000 [==============================] - 6s 392us/step - loss: 0.0299 - binary_accuracy: 0.9928 - val_loss: 0.4728 - val_binary_accuracy: 0.8730
Epoch 14/20
15000/15000 [==============================] - 5s 351us/step - loss: 0.0244 - binary_accuracy: 0.9944 - val_loss: 0.5060 - val_binary_accuracy: 0.8725
Epoch 15/20
15000/15000 [==============================] - 6s 419us/step - loss: 0.0181 - binary_accuracy: 0.9975 - val_loss: 0.5343 - val_binary_accuracy: 0.8711
Epoch 16/20
15000/15000 [==============================] - 6s 423us/step - loss: 0.0181 - binary_accuracy: 0.9960 - val_loss: 0.5663 - val_binary_accuracy: 0.8699
Epoch 17/20
15000/15000 [==============================] - 7s 490us/step - loss: 0.0101 - binary_accuracy: 0.9994 - val_loss: 0.6165 - val_binary_accuracy: 0.8639
Epoch 18/20
15000/15000 [==============================] - 7s 463us/step - loss: 0.0134 - binary_accuracy: 0.9970 - val_loss: 0.6369 - val_binary_accuracy: 0.8674
Epoch 19/20
15000/15000 [==============================] - 8s 545us/step - loss: 0.0063 - binary_accuracy: 0.9995 - val_loss: 0.7284 - val_binary_accuracy: 0.8566
Epoch 20/20
15000/15000 [==============================] - 8s 516us/step - loss: 0.0047 - binary_accuracy: 0.9999 - val_loss: 0.7057 - val_binary_accuracy: 0.8641
###Markdown
On CPU, each epoch takes only a few seconds, so the whole run finishes in a minute or two. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
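###Markdown
In practice you may also want to score a review that did not come pre-encoded. The sketch below is added for illustration only: the `encode_review` helper and the example sentence are ours, and the encoding merely approximates what `imdb.load_data()` does (word ranks offset by 3, rare or unknown words mapped to index 2, start token omitted).
###Code
def encode_review(text, word_index, num_words=10000):
    # Approximate the dataset's encoding: frequency ranks are offset by 3,
    # and anything out of vocabulary is mapped to the "unknown" index 2.
    indices = []
    for word in text.lower().split():
        rank = word_index.get(word)
        if rank is not None and rank + 3 < num_words:
            indices.append(rank + 3)
        else:
            indices.append(2)
    return indices

new_review = encode_review("this film was a wonderful surprise from start to finish", word_index)
model.predict(vectorize_sequences([new_review]))
###Output
_____no_output_____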
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
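###Markdown
To make the two activation functions concrete, here is a tiny NumPy sketch of `relu` and `sigmoid` applied to the same values; it is added for intuition only -- Keras supplies its own implementations.
###Code
import numpy as np

def relu(x):
    # Zero out negative values, keep positive values unchanged.
    return np.maximum(0., x)

def sigmoid(x):
    # Squash any real value into the (0, 1) interval.
    return 1. / (1. + np.exp(-x))

x = np.array([-2., -0.5, 0., 0.5, 2.])
print(relu(x))     # [0.  0.  0.  0.5 2. ]
print(sigmoid(x))  # roughly [0.12 0.38 0.5  0.62 0.88]
###Output
_____no_output_____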
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
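###Markdown
Note that `fit()` can also carve out a validation set automatically through its `validation_split` argument; it always holds out the last fraction of the samples, so it is not identical to the explicit slicing above, which holds out the first 10,000. The cell below is a sketch added for reference only (the `alt_model` name is ours); the rest of the example keeps the explicit split.
###Code
# Same architecture, but let fit() hold out the last 40% of the data itself.
alt_model = models.Sequential()
alt_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
alt_model.add(layers.Dense(16, activation='relu'))
alt_model.add(layers.Dense(1, activation='sigmoid'))
alt_model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

alt_model.fit(x_train, y_train,
              epochs=4,
              batch_size=512,
              validation_split=0.4)
###Output
_____no_output_____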
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
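###Markdown
`model.evaluate()` returns the loss followed by one value per metric passed to `compile()`, so `results` here is a two-element list. A small sketch of reading it, added for reference:
###Code
# results == [test_loss, test_accuracy] because we compiled with a single metric.
test_loss, test_acc = results
print('test loss:', test_loss)
print('test accuracy:', test_acc)
###Output
_____no_output_____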
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleTwo-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB dataset> - 50,000 highly-polarized reviews from the Internet Movie Database. > - Split into 25,000 reviews for training and 25,000 reviews for testing>> - Each set consists of 50% negative and 50% positive reviews.> - The IMDB dataset comes packaged with Keras. > - It has already been preprocessed: >> - The reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.> - About 80MB of data
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
> - The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. > - Rare words will be discarded. This allows us to work with vector data of manageable size.> - The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). > - `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* __Padding__ We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* __One_Hot_Encoding__ We could one-hot-encode our lists to turn them into vectors of 0s and 1s. > * the sequence `[3, 5]` would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which would be ones. * __Input__ The first layer of our network can then be a `Dense` layer, capable of handling floating-point vector data.> * We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our network__Input data__ * is simply vectors, and labels are scalars (1s and 0s)* A simple stack of fully-connected (`Dense`) layers with `relu` activations performs well on this type of problem: `Dense(16, activation='relu')`> * A `relu` (rectified linear unit) is a function meant to zero-out negative values> * A `sigmoid` "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability__Dense__ The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. * Each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`* Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`* The dot product with `W` will project the input data onto a 16-dimensional representation space * Then we would add the bias vector `b` and apply the `relu` operation * Having more hidden units (a higher-dimensional representation space) > * allows your network to learn more complex representations, > * but it makes your network more computationally expensive and may lead to learning unwanted patterns - patterns that will improve performance on the training data but not on the test data.__Two Key Architectural Decisions__ * How many layers to use.* How many "hidden units" to choose for each layer. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
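###Markdown
As a quick sanity check on the architecture (added here for reference), `model.summary()` lists each layer's output shape and parameter count: 10,000*16 + 16 = 160,016 weights in the first layer, 16*16 + 16 = 272 in the second, and 16 + 1 = 17 in the output layer.
###Code
# Inspect layer output shapes and parameter counts of the model defined above.
model.summary()
###Output
_____no_output_____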
###Markdown
__Loss function__ For a binary classification problem whose output is a probability* It is best to use the `binary_crossentropy` loss. * Crossentropy is usually the best choice when you are dealing with models that output probabilities. * Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions* In our case, between the ground-truth distribution and our predictions.__Optimizer__ `rmsprop` is a good default choice
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. * To configure the parameters of your optimizer, or to pass a custom loss or metric function, use an optimizer class instance as the `optimizer` argument
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor, during training, the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
* __Computation_Time__ On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.* __History__ The call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training.
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
* The __`History`__ object contains 4 entries: one per metric that was being monitored, during training and during validation. * Use Matplotlib to plot > * the training and validation loss side by side> * the training and validation accuracy
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
__Understanding the Two Graphs__* The dots are the training loss and accuracy, * The solid lines are the validation loss and accuracy. * The training loss decreases with every epoch and the training accuracy increases with every epoch. > * __Gradient Descent Optimization__ -- the quantity you are trying to minimize should get lower with every iteration. > * __Validation loss and Accuracy__ they seem to peak at the fourth epoch* This is "overfitting": > * After the fourth epoch, we are over-optimizing on the training data> * The model is learning representations that are specific to the training data and do not generalize to data outside of the training set.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
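###Markdown
To reuse the trained network later without retraining it, the model can be written to disk and reloaded. A minimal sketch added for reference; the `imdb_classifier.h5` filename is an arbitrary choice (saving to HDF5 requires the `h5py` package).
###Code
from keras.models import load_model

# Persist architecture, weights and optimizer state to a single file...
model.save('imdb_classifier.h5')

# ...and restore it later, e.g. in another session.
restored_model = load_model('imdb_classifier.h5')
restored_model.predict(x_test[:3])
###Output
_____no_output_____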
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with the "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
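As a quick, assumed sanity check (not in the original text), we can look at the review lengths and the label balance before going further:
###Code
# Illustrative checks on the raw integer sequences
import numpy as np

review_lengths = [len(seq) for seq in train_data]
print('shortest / longest review:', min(review_lengths), '/', max(review_lengths))
print('mean review length:', sum(review_lengths) / float(len(review_lengths)))
print('label counts (negative, positive):', np.bincount(train_labels))
###Output
_____no_output_____
###Markdown
The next cell peeks at the first few word indices of two reviews and at the first few labels.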
###Code
print(train_data[0][:5])
print(train_data[1][:5])
train_labels[0:5]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
[max(sequence) for sequence in train_data[:2]]
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0][:10]])
print(train_data[0][:10])
print(decoded_review)
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
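First, a toy illustration (dimension 10 instead of 10,000, our own example) of what the multi-hot encoding of the sequence `[3, 5]` looks like:
###Code
# Toy version of the one-hot / multi-hot idea described above
import numpy as np

toy = np.zeros(10)
toy[[3, 5]] = 1.   # switch on the positions named by the sequence
print(toy)         # [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
###Output
_____no_output_____
###Markdown
The function below applies exactly the same idea at dimension 10,000, one row per review.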
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0][:10]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
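As an aside (the arithmetic is ours, not from the text), the weight shapes described above pin down how many parameters this architecture has:
###Code
# Dense layer parameters = inputs * units + units (the bias vector)
layer1 = 10000 * 16 + 16   # 160,016
layer2 = 16 * 16 + 16      # 272
layer3 = 16 * 1 + 1        # 17
print(layer1, layer2, layer3, 'total:', layer1 + layer2 + layer3)
###Output
_____no_output_____
###Markdown
The Keras cell below builds exactly this stack; calling `model.summary()` on it should report the same totals.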
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
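To make the "distance" intuition concrete, here is a small hand computation of binary crossentropy for a single prediction (our own illustration, with the usual clipping for numerical safety):
###Code
# Binary crossentropy for one sample: -(y*log(p) + (1-y)*log(1-p))
import numpy as np

def binary_crossentropy_single(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy_single(1.0, 0.9))   # confident and correct -> small loss
print(binary_crossentropy_single(1.0, 0.1))   # confident and wrong  -> large loss
###Output
_____no_output_____
###Markdown
Keras computes the same quantity, averaged over each batch, once we compile with `binary_crossentropy` in the next cell.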
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
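###Markdown
A small, assumed sanity check (not in the original text): since the IMDB training set comes shuffled, both slices should stay roughly balanced between positive and negative reviews.
###Code
# A head/tail split of shuffled data should remain close to 50/50
print('validation positives:', int(y_val.sum()), 'of', len(y_val))
print('remaining training positives:', int(partial_y_train.sum()), 'of', len(partial_y_train))
###Output
_____no_output_____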
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 118us/step - loss: 0.5338 - binary_accuracy: 0.7833 - val_loss: 0.4002 - val_binary_accuracy: 0.8708
Epoch 2/20
15000/15000 [==============================] - 2s 109us/step - loss: 0.3228 - binary_accuracy: 0.8993 - val_loss: 0.3148 - val_binary_accuracy: 0.8867
Epoch 3/20
15000/15000 [==============================] - 2s 137us/step - loss: 0.2342 - binary_accuracy: 0.9253 - val_loss: 0.2828 - val_binary_accuracy: 0.8913
Epoch 4/20
15000/15000 [==============================] - 2s 116us/step - loss: 0.1855 - binary_accuracy: 0.9407 - val_loss: 0.2875 - val_binary_accuracy: 0.8845
Epoch 5/20
15000/15000 [==============================] - 2s 108us/step - loss: 0.1493 - binary_accuracy: 0.9531 - val_loss: 0.2880 - val_binary_accuracy: 0.8862
Epoch 6/20
15000/15000 [==============================] - 2s 111us/step - loss: 0.1256 - binary_accuracy: 0.9610 - val_loss: 0.2861 - val_binary_accuracy: 0.8895
Epoch 7/20
15000/15000 [==============================] - 2s 122us/step - loss: 0.1025 - binary_accuracy: 0.9724 - val_loss: 0.2990 - val_binary_accuracy: 0.8877
Epoch 8/20
15000/15000 [==============================] - 2s 128us/step - loss: 0.0856 - binary_accuracy: 0.9761 - val_loss: 0.3176 - val_binary_accuracy: 0.8841
Epoch 9/20
15000/15000 [==============================] - 2s 111us/step - loss: 0.0696 - binary_accuracy: 0.9823 - val_loss: 0.3599 - val_binary_accuracy: 0.8735
Epoch 10/20
15000/15000 [==============================] - 2s 108us/step - loss: 0.0572 - binary_accuracy: 0.9857 - val_loss: 0.3872 - val_binary_accuracy: 0.8779
Epoch 11/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.0469 - binary_accuracy: 0.9900 - val_loss: 0.3989 - val_binary_accuracy: 0.8712
Epoch 12/20
15000/15000 [==============================] - 2s 127us/step - loss: 0.0356 - binary_accuracy: 0.9936 - val_loss: 0.4140 - val_binary_accuracy: 0.8783
Epoch 13/20
15000/15000 [==============================] - 2s 107us/step - loss: 0.0312 - binary_accuracy: 0.9934 - val_loss: 0.4541 - val_binary_accuracy: 0.8771
Epoch 14/20
15000/15000 [==============================] - 2s 110us/step - loss: 0.0217 - binary_accuracy: 0.9969 - val_loss: 0.4757 - val_binary_accuracy: 0.8704
Epoch 15/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.0180 - binary_accuracy: 0.9977 - val_loss: 0.5063 - val_binary_accuracy: 0.8739
Epoch 16/20
15000/15000 [==============================] - 2s 124us/step - loss: 0.0130 - binary_accuracy: 0.9986 - val_loss: 0.5431 - val_binary_accuracy: 0.8728
Epoch 17/20
15000/15000 [==============================] - 2s 110us/step - loss: 0.0105 - binary_accuracy: 0.9989 - val_loss: 0.5698 - val_binary_accuracy: 0.8689
Epoch 18/20
15000/15000 [==============================] - 2s 118us/step - loss: 0.0072 - binary_accuracy: 0.9997 - val_loss: 0.6116 - val_binary_accuracy: 0.8652
Epoch 19/20
15000/15000 [==============================] - 2s 123us/step - loss: 0.0055 - binary_accuracy: 0.9997 - val_loss: 0.6467 - val_binary_accuracy: 0.8671
Epoch 20/20
15000/15000 [==============================] - 2s 132us/step - loss: 0.0071 - binary_accuracy: 0.9986 - val_loss: 0.6773 - val_binary_accuracy: 0.8672
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
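Before that, as a convenience (pandas is our own addition here, not part of the text), the same dictionary can be wrapped in a DataFrame to spot the best epoch at a glance:
###Code
# Locate the epoch with the lowest validation loss
import pandas as pd

hist_df = pd.DataFrame(history.history)
print(hist_df.tail())
print('epoch with lowest validation loss:', hist_df['val_loss'].idxmin() + 1)
###Output
_____no_output_____
###Markdown
The next cell simply lists the raw dictionary keys.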
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
#acc = history.history['acc']
#val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure(figsize=(15,5))
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
#plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.figure(figsize=(15,5))
plt.plot(epochs, acc_values, 'bo', label='Training binary_accuracy')
plt.plot(epochs, val_acc_values, 'b', label='Validation binary_accuracy')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
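###Markdown
The probabilities above can also be turned into hard labels; a short sketch (the 0.5 cut-off is the usual convention, assumed here) that re-derives the test accuracy by hand:
###Code
# Threshold the predicted probabilities and compare against the true labels
import numpy as np

probs = model.predict(x_test)
pred_labels = (probs > 0.5).astype('float32').ravel()
print('accuracy from thresholded predictions:', np.mean(pred_labels == y_test))
###Output
_____no_output_____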
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
# Workaround: newer NumPy versions default np.load to allow_pickle=False, which breaks
# imdb.load_data in older Keras releases, so we temporarily patch np.load around the call.
from keras.datasets import imdb
import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# restore np.load for future normal usage
np.load = np_load_old
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
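As a small aside (ours, not from the text), since `relu` and `sigmoid` do the heavy lifting here, a tiny NumPy illustration of both:
###Code
# The two activation functions described above, written out explicitly
import numpy as np

def relu(x):
    return np.maximum(0., x)

def sigmoid(x):
    return 1. / (1. + np.exp(-x))

x = np.array([-2., -0.5, 0., 0.5, 2.])
print(relu(x))      # negative values are zeroed out
print(sigmoid(x))   # every value is squashed into (0, 1)
###Output
_____no_output_____
###Markdown
With those in mind, here is the model itself.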
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
WARNING:tensorflow:From C:\Users\mauerdo\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
WARNING:tensorflow:From C:\Users\mauerdo\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_core\python\ops\nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument: On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data. Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
# plt.style.use('seaborn')
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, loss, ls = '--', marker = 'x', mew = 3.0, label = 'Training loss')
plt.plot(epochs, val_loss, ls = '--', marker = 'o', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(fontsize = 16);
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, ls = '--', marker = 'x', mew = 3.0, label='Training acc')
plt.plot(epochs, val_acc, ls = '--', marker = 'o', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(fontsize = 16);
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved!
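One way to run the "more or fewer hidden units" experiment systematically is sketched below; the unit sizes, the four-epoch budget and `verbose=0` are our assumptions, not values from the text.
###Code
# Rebuild, train and evaluate the same architecture with different layer widths
from keras import models, layers

for units in [8, 16, 32, 64]:
    m = models.Sequential()
    m.add(layers.Dense(units, activation='relu', input_shape=(10000,)))
    m.add(layers.Dense(units, activation='relu'))
    m.add(layers.Dense(1, activation='sigmoid'))
    m.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    m.fit(x_train, y_train, epochs=4, batch_size=512, verbose=0)
    loss, acc = m.evaluate(x_test, y_test, verbose=0)
    print(units, 'units -> test accuracy:', acc)
###Output
_____no_output_____
###Markdown
The cell below instead swaps in the `nadam` optimizer while keeping the 16-unit architecture.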
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='nadam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
Train on 25000 samples
Epoch 1/4
25000/25000 [==============================] - 3s 138us/sample - loss: 0.4958 - acc: 0.7984
Epoch 2/4
25000/25000 [==============================] - 3s 119us/sample - loss: 0.2537 - acc: 0.9091
Epoch 3/4
25000/25000 [==============================] - 3s 111us/sample - loss: 0.1839 - acc: 0.9353
Epoch 4/4
25000/25000 [==============================] - 3s 113us/sample - loss: 0.1442 - acc: 0.9506
25000/25000 [==============================] - 4s 170us/sample - loss: 0.3103 - acc: 0.8789
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[:1]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
dict([(1,2) , (3,4)])
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
print(list(word_index.items())[:10])
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
M = np.zeros((3,4))
M[2,[0,2,3]]=5.
M
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
train_labels, type(train_data), len(train_labels)
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
train_labels, y_train
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will hold out a "validation set" from the original training data. In this notebook the manual slicing below is left commented out; the training cell further down uses Keras's `validation_split` argument to hold out 25% of the training samples instead.
###Code
#val_size= 5000
#x_val = x_train[:val_size]
#partial_x_train = x_train[val_size:]
#y_val = y_train[:val_size]
#partial_y_train = y_train[val_size:]
###Output
_____no_output_____
###Markdown
We will now train our model in mini-batches of 512 samples, monitoring loss and accuracy on held-out training samples as we go. The original recipe passes an explicit validation set via the `validation_data` argument; the cell below instead trains for 10 epochs and lets Keras hold out the last 25% of the training data with `validation_split=0.25`, which serves the same purpose:
###Code
model.reset_states()  # note: this only clears stateful-layer (e.g. RNN) state; it does not reinitialize the Dense weights
history = model.fit(x_train,
y_train,
epochs=10,
batch_size=512,
validation_split=0.25)
###Output
Train on 18750 samples, validate on 6250 samples
Epoch 1/10
18750/18750 [==============================] - 2s 118us/step - loss: 0.4912 - binary_accuracy: 0.7975 - val_loss: 0.3433 - val_binary_accuracy: 0.8838
Epoch 2/10
18750/18750 [==============================] - 2s 90us/step - loss: 0.2752 - binary_accuracy: 0.9072 - val_loss: 0.2949 - val_binary_accuracy: 0.8867
Epoch 3/10
18750/18750 [==============================] - 2s 90us/step - loss: 0.2056 - binary_accuracy: 0.9310 - val_loss: 0.2769 - val_binary_accuracy: 0.8898
Epoch 4/10
18750/18750 [==============================] - 2s 92us/step - loss: 0.1690 - binary_accuracy: 0.9419 - val_loss: 0.2899 - val_binary_accuracy: 0.8862
Epoch 5/10
18750/18750 [==============================] - 3s 152us/step - loss: 0.1404 - binary_accuracy: 0.9521 - val_loss: 0.2849 - val_binary_accuracy: 0.8899
Epoch 6/10
18750/18750 [==============================] - 3s 137us/step - loss: 0.1219 - binary_accuracy: 0.9590 - val_loss: 0.3170 - val_binary_accuracy: 0.8840
Epoch 7/10
18750/18750 [==============================] - 2s 122us/step - loss: 0.1008 - binary_accuracy: 0.9669 - val_loss: 0.3294 - val_binary_accuracy: 0.8830
Epoch 8/10
18750/18750 [==============================] - 2s 107us/step - loss: 0.0883 - binary_accuracy: 0.9722 - val_loss: 0.3493 - val_binary_accuracy: 0.8822
Epoch 9/10
18750/18750 [==============================] - 2s 98us/step - loss: 0.0768 - binary_accuracy: 0.9760 - val_loss: 0.4016 - val_binary_accuracy: 0.8741
Epoch 10/10
18750/18750 [==============================] - 3s 133us/step - loss: 0.0653 - binary_accuracy: 0.9806 - val_loss: 0.4255 - val_binary_accuracy: 0.8707
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
y_hat = model.predict(x_test)
plt.hist(y_hat,bins=20)
None
###Output
_____no_output_____
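###Markdown
As a final sketch, a hand-written review can be encoded with the same scheme and scored by the trained model. The review text is our own, and the `+3` index offset plus the out-of-vocabulary filtering follow the decoding convention shown earlier; everything else here is an assumption, not part of the original text.
###Code
# Encode a new review with the same word_index / multi-hot scheme and predict its sentiment
import numpy as np

custom_review = "this movie was wonderful and the acting was great"
indices = [word_index.get(w, -1) + 3 for w in custom_review.lower().split()]
indices = [i for i in indices if 3 <= i < 10000]    # drop unknown or out-of-vocabulary words
custom_x = np.zeros((1, 10000))
custom_x[0, indices] = 1.
print(model.predict(custom_x))                      # probability that the review is positive
###Output
_____no_output_____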
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! ConclusionsHere's what you should take away from this example:* There's usually quite a bit of preprocessing you need to do on your raw data in order to be able to feed it -- as tensors -- into a neural network. In the case of sequences of words, they can be encoded as binary vectors -- but there are other encoding options too.* Stacks of `Dense` layers with `relu` activations can solve a wide range of problems (including sentiment classification), and you will likely use them frequently.* In a binary classification problem (two output classes), your network should end with a `Dense` layer with 1 unit and a `sigmoid` activation, i.e. the output of your network should be a scalar between 0 and 1, encoding a probability.* With such a scalar sigmoid output, on a binary classification problem, the loss function you should use is `binary_crossentropy`.* The `rmsprop` optimizer is generally a good enough choice of optimizer, whatever your problem. That's one less thing for you to worry about.* As they get better on their training data, neural networks eventually start _overfitting_ and end up obtaining increasingly worse results on data never-seen-before. Make sure to always monitor performance on data that is outside of the training set.
###Code
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=256, validation_split=0.15)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
# (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# restore np.load for future normal usage
np.load = np_load_old
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
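###Markdown
As a small aside, the chain of operations described above can be reproduced directly in NumPy to make the shapes concrete. This is only an illustrative sketch with randomly initialized `W` and `b` (these are not the weights Keras will learn), and the variable names are made up for this example.
###Code
import numpy as np

def relu(x):
    return np.maximum(x, 0.)

# One 10,000-dimensional input vector (a batch of one review)
x = np.random.random((1, 10000))
# Randomly initialized parameters of a 16-unit Dense layer
W = np.random.random((10000, 16)) * 0.01
b = np.zeros((16,))

# output = relu(dot(input, W) + b): the input is projected onto a 16-dimensional space
output = relu(np.dot(x, W) + b)
print(output.shape)  # (1, 16)
###Output
_____no_output_____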
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
# loss='mse',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 5s 304us/step - loss: 0.4827 - binary_accuracy: 0.7947 - val_loss: 0.3521 - val_binary_accuracy: 0.8754
Epoch 2/20
15000/15000 [==============================] - 4s 236us/step - loss: 0.2720 - binary_accuracy: 0.9055 - val_loss: 0.3345 - val_binary_accuracy: 0.8572
Epoch 3/20
15000/15000 [==============================] - 4s 238us/step - loss: 0.1923 - binary_accuracy: 0.9353 - val_loss: 0.3123 - val_binary_accuracy: 0.8718
Epoch 4/20
15000/15000 [==============================] - 4s 259us/step - loss: 0.1474 - binary_accuracy: 0.9498 - val_loss: 0.2813 - val_binary_accuracy: 0.8888
Epoch 5/20
15000/15000 [==============================] - 4s 269us/step - loss: 0.1101 - binary_accuracy: 0.9631 - val_loss: 0.3120 - val_binary_accuracy: 0.8803
Epoch 6/20
15000/15000 [==============================] - 4s 270us/step - loss: 0.0869 - binary_accuracy: 0.9732 - val_loss: 0.3430 - val_binary_accuracy: 0.8819
Epoch 7/20
15000/15000 [==============================] - 4s 264us/step - loss: 0.0740 - binary_accuracy: 0.9755 - val_loss: 0.3827 - val_binary_accuracy: 0.8782
Epoch 8/20
15000/15000 [==============================] - 4s 259us/step - loss: 0.0523 - binary_accuracy: 0.9838 - val_loss: 0.5641 - val_binary_accuracy: 0.8473
Epoch 9/20
15000/15000 [==============================] - 4s 262us/step - loss: 0.0349 - binary_accuracy: 0.9911 - val_loss: 0.4700 - val_binary_accuracy: 0.8722
Epoch 10/20
15000/15000 [==============================] - 4s 267us/step - loss: 0.0296 - binary_accuracy: 0.9917 - val_loss: 0.5149 - val_binary_accuracy: 0.8685
Epoch 11/20
15000/15000 [==============================] - 4s 273us/step - loss: 0.0316 - binary_accuracy: 0.9909 - val_loss: 0.5409 - val_binary_accuracy: 0.8679
Epoch 12/20
15000/15000 [==============================] - 4s 269us/step - loss: 0.0310 - binary_accuracy: 0.9909 - val_loss: 0.5811 - val_binary_accuracy: 0.8670
Epoch 13/20
15000/15000 [==============================] - 4s 268us/step - loss: 0.0076 - binary_accuracy: 0.9993 - val_loss: 0.6006 - val_binary_accuracy: 0.8658
Epoch 14/20
15000/15000 [==============================] - 4s 264us/step - loss: 0.0289 - binary_accuracy: 0.9919 - val_loss: 0.6385 - val_binary_accuracy: 0.8636
Epoch 15/20
15000/15000 [==============================] - 4s 261us/step - loss: 0.0037 - binary_accuracy: 0.9997 - val_loss: 0.6602 - val_binary_accuracy: 0.8641
Epoch 16/20
15000/15000 [==============================] - 4s 261us/step - loss: 0.0187 - binary_accuracy: 0.9948 - val_loss: 0.7018 - val_binary_accuracy: 0.8624
Epoch 17/20
15000/15000 [==============================] - 4s 264us/step - loss: 0.0018 - binary_accuracy: 0.9999 - val_loss: 0.7197 - val_binary_accuracy: 0.8634
Epoch 18/20
15000/15000 [==============================] - 4s 272us/step - loss: 0.0011 - binary_accuracy: 1.0000 - val_loss: 0.7639 - val_binary_accuracy: 0.8628
Epoch 19/20
15000/15000 [==============================] - 4s 266us/step - loss: 0.0159 - binary_accuracy: 0.9945 - val_loss: 0.7928 - val_binary_accuracy: 0.8630
Epoch 20/20
15000/15000 [==============================] - 4s 263us/step - loss: 6.1070e-04 - binary_accuracy: 1.0000 - val_loss: 0.8104 - val_binary_accuracy: 0.8621
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
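###Markdown
Instead of hard-coding the number of epochs read off a previous run, Keras can stop training automatically once the validation loss stops improving, via the built-in `EarlyStopping` callback. The snippet below is a minimal sketch of that idea, reusing `partial_x_train`, `partial_y_train`, `x_val` and `y_val` from above; `es_model` is just a name chosen here, and `patience=2` is an arbitrary value, not a tuned one.
###Code
from keras import models, layers
from keras.callbacks import EarlyStopping

# A fresh copy of the same architecture, so earlier training runs don't interfere
es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])

# Stop once the validation loss has failed to improve for 2 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
es_model.fit(partial_x_train,
             partial_y_train,
             epochs=20,
             batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stopping])
###Output
_____no_output_____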
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
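###Markdown
For completeness, here is a minimal sketch of the first alternative mentioned above: padding the integer sequences to a fixed length and handing them to an `Embedding` layer. It is not used in the rest of this notebook, and `maxlen=256`, the 8-dimensional embedding and the name `embedding_model` are arbitrary illustrative choices.
###Code
from keras.preprocessing.sequence import pad_sequences
from keras import models, layers

# Pad (or truncate) every review to the same length so they fit into one integer tensor
padded_train = pad_sequences(train_data, maxlen=256)
print(padded_train.shape)  # (25000, 256)

# A network whose first layer consumes integer word indices directly
embedding_model = models.Sequential()
embedding_model.add(layers.Embedding(10000, 8, input_length=256))
embedding_model.add(layers.Flatten())
embedding_model.add(layers.Dense(1, activation='sigmoid'))
embedding_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____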
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
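###Markdown
To make the notion of "distance" concrete, here is a tiny NumPy sketch of the binary crossentropy formula itself, evaluated on a few made-up predictions; Keras computes the same quantity, averaged over each batch, during training.
###Code
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # -[y * log(p) + (1 - y) * log(1 - p)], averaged over the samples
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1., 0., 1., 0.])
y_pred = np.array([0.9, 0.1, 0.6, 0.4])        # mostly correct, moderately confident
print(binary_crossentropy(y_true, y_pred))      # small loss
print(binary_crossentropy(y_true, 1 - y_pred))  # the same predictions flipped: much larger loss
###Output
_____no_output_____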
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
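###Markdown
Before moving on, one more note on the overfitting observed above: besides stopping early, the next chapter covers regularization techniques that make the network itself less prone to memorizing the training data. Purely as a preview sketch (not tuned; `dropout_model` and the 0.5 rate are arbitrary choices), adding `Dropout` layers to the same architecture looks like this:
###Code
from keras import models, layers

# The same stack as before, with dropout applied after each hidden layer
dropout_model = models.Sequential()
dropout_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
dropout_model.add(layers.Dropout(0.5))
dropout_model.add(layers.Dense(16, activation='relu'))
dropout_model.add(layers.Dropout(0.5))
dropout_model.add(layers.Dense(1, activation='sigmoid'))
dropout_model.compile(optimizer='rmsprop',
                      loss='binary_crossentropy',
                      metrics=['accuracy'])
###Output
_____no_output_____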
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
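###Markdown
As a usage example, here is a minimal sketch of how a new, raw review could be pushed through the same pipeline, reusing the `word_index`, `vectorize_sequences` and trained `model` defined above. The review text and the `encode_review` helper are made up for illustration, and this is only an approximation of the dataset's real preprocessing (it ignores punctuation handling, for instance).
###Code
def encode_review(text):
    # The dataset's indices are offset by 3, and index 2 is reserved for "unknown" words
    return [word_index[word] + 3 if word in word_index else 2
            for word in text.lower().split()]

new_review = "this movie was a delight from start to finish"  # made-up example text
encoded = encode_review(new_review)

# Keep only indices below 10,000, matching the num_words cutoff used when loading the data
encoded = [i for i in encoded if i < 10000]

x_new = vectorize_sequences([encoded])
print(model.predict(x_new))  # estimated probability that the review is positive
###Output
_____no_output_____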
###Markdown
###Code
import keras
keras.__version__
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# Let's look at one entry
word_index['rickman']
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, n_dimensions=10000):
#First build a matrix of zeroes of shape (number of sequences, dimensions)
results = np.zeros(shape = (len(sequences), n_dimensions))
for i in range(len(sequences)):
results[i, sequences[i]] = 1.
return results
print(f"train_data[0]: {train_data[0]} \n with {len(train_data[0])} words")
print(f"Vectorized version is {vectorize_sequences(train_data[0])} with shape {vectorize_sequences(train_data[0]).shape}")
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 1s 47ms/step - loss: 0.5180 - binary_accuracy: 0.7669 - val_loss: 0.3752 - val_binary_accuracy: 0.8731
Epoch 2/20
30/30 [==============================] - 1s 32ms/step - loss: 0.3005 - binary_accuracy: 0.9021 - val_loss: 0.3007 - val_binary_accuracy: 0.8887
Epoch 3/20
30/30 [==============================] - 1s 33ms/step - loss: 0.2190 - binary_accuracy: 0.9285 - val_loss: 0.2793 - val_binary_accuracy: 0.8890
Epoch 4/20
30/30 [==============================] - 1s 32ms/step - loss: 0.1763 - binary_accuracy: 0.9425 - val_loss: 0.2907 - val_binary_accuracy: 0.8836
Epoch 5/20
30/30 [==============================] - 1s 33ms/step - loss: 0.1417 - binary_accuracy: 0.9551 - val_loss: 0.2839 - val_binary_accuracy: 0.8855
Epoch 6/20
30/30 [==============================] - 1s 32ms/step - loss: 0.1184 - binary_accuracy: 0.9647 - val_loss: 0.2946 - val_binary_accuracy: 0.8838
Epoch 7/20
30/30 [==============================] - 1s 34ms/step - loss: 0.0973 - binary_accuracy: 0.9707 - val_loss: 0.3100 - val_binary_accuracy: 0.8831
Epoch 8/20
30/30 [==============================] - 1s 32ms/step - loss: 0.0789 - binary_accuracy: 0.9767 - val_loss: 0.3324 - val_binary_accuracy: 0.8797
Epoch 9/20
30/30 [==============================] - 1s 33ms/step - loss: 0.0661 - binary_accuracy: 0.9814 - val_loss: 0.3533 - val_binary_accuracy: 0.8789
Epoch 10/20
30/30 [==============================] - 1s 33ms/step - loss: 0.0541 - binary_accuracy: 0.9857 - val_loss: 0.4038 - val_binary_accuracy: 0.8751
Epoch 11/20
30/30 [==============================] - 1s 32ms/step - loss: 0.0433 - binary_accuracy: 0.9901 - val_loss: 0.4021 - val_binary_accuracy: 0.8755
Epoch 12/20
30/30 [==============================] - 1s 33ms/step - loss: 0.0347 - binary_accuracy: 0.9930 - val_loss: 0.4317 - val_binary_accuracy: 0.8737
Epoch 13/20
30/30 [==============================] - 1s 33ms/step - loss: 0.0300 - binary_accuracy: 0.9927 - val_loss: 0.4623 - val_binary_accuracy: 0.8731
Epoch 14/20
30/30 [==============================] - 1s 33ms/step - loss: 0.0224 - binary_accuracy: 0.9955 - val_loss: 0.4963 - val_binary_accuracy: 0.8701
Epoch 15/20
30/30 [==============================] - 1s 41ms/step - loss: 0.0180 - binary_accuracy: 0.9968 - val_loss: 0.5300 - val_binary_accuracy: 0.8684
Epoch 16/20
30/30 [==============================] - 1s 36ms/step - loss: 0.0137 - binary_accuracy: 0.9985 - val_loss: 0.5628 - val_binary_accuracy: 0.8694
Epoch 17/20
30/30 [==============================] - 1s 39ms/step - loss: 0.0128 - binary_accuracy: 0.9976 - val_loss: 0.5955 - val_binary_accuracy: 0.8688
Epoch 18/20
30/30 [==============================] - 1s 37ms/step - loss: 0.0072 - binary_accuracy: 0.9995 - val_loss: 0.6264 - val_binary_accuracy: 0.8678
Epoch 19/20
30/30 [==============================] - 1s 39ms/step - loss: 0.0119 - binary_accuracy: 0.9972 - val_loss: 0.6607 - val_binary_accuracy: 0.8658
Epoch 20/20
30/30 [==============================] - 1s 36ms/step - loss: 0.0036 - binary_accuracy: 0.9999 - val_loss: 0.6804 - val_binary_accuracy: 0.8658
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'orange', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
# acc_values = history_dict['acc']
# val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'orange', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The orange lines are the training loss and accuracy, while the blue lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
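###Markdown
These probabilities can be turned into hard class labels by thresholding them at 0.5. The snippet below is a small sketch of that, reusing `model`, `x_test` and `y_test` from above; the 0.5 cut-off is the conventional choice for a balanced problem like this one.
###Code
import numpy as np

probabilities = model.predict(x_test)
predicted_labels = (probabilities > 0.5).astype('float32').reshape(-1)

# Fraction of test reviews where the thresholded prediction matches the true label
manual_accuracy = np.mean(predicted_labels == y_test)
print(manual_accuracy)
###Output
_____no_output_____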
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! Experiment 1: Only 1 hidden layer
###Code
from keras import models
from keras import layers
from keras import losses
from keras import metrics
from keras import optimizers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape = (10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss = losses.binary_crossentropy,
metrics = [metrics.binary_accuracy])
model.summary()
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'orange', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
results = model.evaluate(x_test, y_test)
results
###Output
782/782 [==============================] - 2s 2ms/step - loss: 0.5097 - binary_accuracy: 0.8563
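###Markdown
The remaining suggestions in the list above follow exactly the same pattern. As one more minimal sketch (not run here), this is what swapping `relu` for `tanh` in the original two-hidden-layer architecture would look like; `tanh_model` is just a name for this example, and the four epochs mirror the earlier retraining run.
###Code
from keras import models, layers, losses, metrics, optimizers

tanh_model = models.Sequential()
tanh_model.add(layers.Dense(16, activation='tanh', input_shape=(10000,)))
tanh_model.add(layers.Dense(16, activation='tanh'))
tanh_model.add(layers.Dense(1, activation='sigmoid'))
tanh_model.compile(optimizer=optimizers.RMSprop(lr=0.001),
                   loss=losses.binary_crossentropy,
                   metrics=[metrics.binary_accuracy])

tanh_model.fit(partial_x_train, partial_y_train,
               epochs=4, batch_size=512,
               validation_data=(x_val, y_val))
tanh_model.evaluate(x_test, y_test)
###Output
_____no_output_____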
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        for j in sequence:
            results[i, j] += 1.  # count how many times word index j occurs in review i
        results[i] /= len(sequence)  # normalise the counts to term frequencies
    return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
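###Markdown
As an aside, the first option described above -- padding the integer sequences and handing them to an `Embedding` layer -- would start roughly like the sketch below. It is not used in the rest of this notebook, and the `maxlen` of 256 is an arbitrary assumption.
###Code
from keras.preprocessing.sequence import pad_sequences
from keras import models, layers
# Pad (or truncate) every review to the same length so they fit in one integer tensor
padded_train = pad_sequences(train_data, maxlen=256)
# A network that learns its own 16-dimensional word vectors from the padded integers
embedding_model = models.Sequential()
embedding_model.add(layers.Embedding(10000, 16, input_length=256))
embedding_model.add(layers.Flatten())
embedding_model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____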
###Markdown
Here's what our samples look like now:
###Code
print(x_train[0][4])
print([i for i in train_data[0] if i == 4])
print([i for i in x_train[0] if i > 1.])
###Output
0.06880733944954129
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
[]
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
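###Markdown
To make the `output = relu(dot(W, input) + b)` formula above concrete, here is a tiny NumPy sketch with made-up shapes (2 input features, 16 hidden units). It is only an illustration of what one `Dense` layer computes, not part of the model we just built.
###Code
import numpy as np
rng = np.random.RandomState(0)
x = rng.randn(2)       # a single input vector with 2 features
W = rng.randn(2, 16)   # weight matrix of shape (input_dimension, 16)
b = np.zeros(16)       # one bias per hidden unit
hidden = np.maximum(0., np.dot(x, W) + b)  # relu(dot(W, input) + b)
print(hidden.shape)    # (16,): a 16-dimensional representation of x
###Output
_____no_output_____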
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
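###Markdown
To connect the `binary_crossentropy` choice above to an explicit formula, here is a hedged NumPy sketch of the per-batch loss, `-mean(y*log(p) + (1-y)*log(1-p))`. The labels and predictions below are made up purely for illustration.
###Code
import numpy as np
def binary_crossentropy_np(y_true, y_pred, eps=1e-7):
    # Clip to avoid log(0), then average -[y*log(p) + (1-y)*log(1-p)] over the batch
    y_pred = np.clip(y_pred, eps, 1. - eps)
    return -np.mean(y_true * np.log(y_pred) + (1. - y_true) * np.log(1. - y_pred))
print(binary_crossentropy_np(np.array([1., 0., 1.]), np.array([0.9, 0.2, 0.6])))  # mostly right -> small loss
print(binary_crossentropy_np(np.array([1., 0., 1.]), np.array([0.1, 0.8, 0.2])))  # mostly wrong -> large loss
###Output
_____no_output_____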
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 5s 329us/step - loss: 0.5255 - binary_accuracy: 0.8051 - val_loss: 0.3845 - val_binary_accuracy: 0.8691
Epoch 2/20
15000/15000 [==============================] - 2s 164us/step - loss: 0.2844 - binary_accuracy: 0.9132 - val_loss: 0.3043 - val_binary_accuracy: 0.8786
Epoch 3/20
15000/15000 [==============================] - 2s 164us/step - loss: 0.1910 - binary_accuracy: 0.9375 - val_loss: 0.2946 - val_binary_accuracy: 0.8795
Epoch 4/20
15000/15000 [==============================] - 2s 162us/step - loss: 0.1386 - binary_accuracy: 0.9552 - val_loss: 0.3124 - val_binary_accuracy: 0.8780
Epoch 5/20
15000/15000 [==============================] - 2s 161us/step - loss: 0.1018 - binary_accuracy: 0.9680 - val_loss: 0.3456 - val_binary_accuracy: 0.8735
Epoch 6/20
15000/15000 [==============================] - 2s 161us/step - loss: 0.0734 - binary_accuracy: 0.9789 - val_loss: 0.3826 - val_binary_accuracy: 0.8731
Epoch 7/20
15000/15000 [==============================] - 2s 159us/step - loss: 0.0531 - binary_accuracy: 0.9864 - val_loss: 0.4318 - val_binary_accuracy: 0.8666
Epoch 8/20
15000/15000 [==============================] - 2s 166us/step - loss: 0.0368 - binary_accuracy: 0.9921 - val_loss: 0.4870 - val_binary_accuracy: 0.8641
Epoch 9/20
15000/15000 [==============================] - 2s 161us/step - loss: 0.0243 - binary_accuracy: 0.9952 - val_loss: 0.5501 - val_binary_accuracy: 0.8606
Epoch 10/20
15000/15000 [==============================] - 2s 165us/step - loss: 0.0159 - binary_accuracy: 0.9973 - val_loss: 0.6125 - val_binary_accuracy: 0.8591
Epoch 11/20
15000/15000 [==============================] - 2s 165us/step - loss: 0.0097 - binary_accuracy: 0.9988 - val_loss: 0.6838 - val_binary_accuracy: 0.8586
Epoch 12/20
15000/15000 [==============================] - 2s 162us/step - loss: 0.0053 - binary_accuracy: 0.9993 - val_loss: 0.7740 - val_binary_accuracy: 0.8528
Epoch 13/20
15000/15000 [==============================] - 2s 161us/step - loss: 0.0030 - binary_accuracy: 0.9997 - val_loss: 0.8485 - val_binary_accuracy: 0.8522
Epoch 14/20
15000/15000 [==============================] - 2s 162us/step - loss: 0.0014 - binary_accuracy: 0.9999 - val_loss: 0.9336 - val_binary_accuracy: 0.8536
Epoch 15/20
15000/15000 [==============================] - 2s 165us/step - loss: 7.3055e-04 - binary_accuracy: 1.0000 - val_loss: 1.0072 - val_binary_accuracy: 0.8524
Epoch 16/20
15000/15000 [==============================] - 2s 162us/step - loss: 3.6336e-04 - binary_accuracy: 1.0000 - val_loss: 1.0850 - val_binary_accuracy: 0.8531
Epoch 17/20
15000/15000 [==============================] - 2s 160us/step - loss: 1.6859e-04 - binary_accuracy: 1.0000 - val_loss: 1.1639 - val_binary_accuracy: 0.8501
Epoch 18/20
15000/15000 [==============================] - 2s 162us/step - loss: 7.0063e-05 - binary_accuracy: 1.0000 - val_loss: 1.2463 - val_binary_accuracy: 0.8494
Epoch 19/20
15000/15000 [==============================] - 2s 161us/step - loss: 3.5363e-05 - binary_accuracy: 1.0000 - val_loss: 1.3140 - val_binary_accuracy: 0.8486
Epoch 20/20
15000/15000 [==============================] - 2s 162us/step - loss: 1.3693e-05 - binary_accuracy: 1.0000 - val_loss: 1.3722 - val_binary_accuracy: 0.8462
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
print(history.history)
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
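###Markdown
In the discussion above, the decision to stop after about four epochs was made by eye. One way to automate it is Keras's `EarlyStopping` callback. The sketch below is an assumption-laden illustration: it presumes a freshly built and compiled `model` and the same `partial_x_train` / `x_val` split as before.
###Code
from keras.callbacks import EarlyStopping
# Stop training once the validation loss has failed to improve for 2 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=2)
model.fit(partial_x_train, partial_y_train,
          epochs=20,
          batch_size=512,
          validation_data=(x_val, y_val),
          callbacks=[early_stop])
###Output
_____no_output_____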
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
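###Markdown
As an aside, Keras can carve out the validation data for you via the `validation_split` argument to `fit`. Note that it holds out the *last* fraction of the training arrays, whereas the manual slice above holds out the *first* 10,000 samples, so the two splits are not identical. A sketch is shown below, commented out so it does not interfere with the training run that follows:
###Code
# Alternative: let Keras hold out 40% of x_train/y_train (10,000 of 25,000 samples)
# as validation data. Run this *instead of* the fit call below if you want to try it.
# history = model.fit(x_train, y_train,
#                     epochs=20,
#                     batch_size=512,
#                     validation_split=0.4)
###Output
_____no_output_____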
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
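###Markdown
A more compact way to produce essentially the same two curves, assuming pandas is installed (this is only a convenience and nothing later in the notebook relies on it):
###Code
import pandas as pd
import matplotlib.pyplot as plt
# One curve per recorded metric: loss, val_loss, acc and val_acc
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.xlabel('Epochs')
plt.grid(True)
plt.show()
###Output
_____no_output_____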
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
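###Markdown
To turn these probabilities into hard class labels you would typically threshold them at 0.5 (the cut-off is a convention, not something the model dictates). A small sketch that also recomputes the test accuracy by hand:
###Code
import numpy as np
probs = model.predict(x_test)                           # shape (25000, 1), values between 0 and 1
pred_labels = (probs > 0.5).astype('float32').ravel()   # 1 means "predicted positive review"
print(np.mean(pred_labels == y_test))                   # should be close to the accuracy reported by evaluate()
###Output
_____no_output_____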
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
import numpy as np
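# Why this workaround is needed: NumPy >= 1.16.3 defaults to allow_pickle=False in
# np.load, which makes imdb.load_data fail in older Keras versions, so we temporarily
# re-enable pickle loading and restore np.load afterwards.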
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# restore np.load for future normal usage
np.load = np_load_old
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:3376: The name tf.log is deprecated. Please use tf.math.log instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\tensorflow\python\ops\nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
# model.compile(loss="mean_squared_error", optimizer=optimizer, metrics=["binary_accuracy"])
# model.fit_generator(gen, epochs=50, callbacks=[ModelCheckpoint("model_{binary_accuracy}.hdf5")])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 6s 430us/step - loss: 0.5061 - binary_accuracy: 0.7861 - val_loss: 0.3784 - val_binary_accuracy: 0.8709
Epoch 2/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.2996 - binary_accuracy: 0.9046 - val_loss: 0.3001 - val_binary_accuracy: 0.8901
Epoch 3/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.2173 - binary_accuracy: 0.9291 - val_loss: 0.3098 - val_binary_accuracy: 0.8706
Epoch 4/20
15000/15000 [==============================] - 4s 296us/step - loss: 0.1744 - binary_accuracy: 0.9438 - val_loss: 0.2835 - val_binary_accuracy: 0.8841
Epoch 5/20
15000/15000 [==============================] - 5s 324us/step - loss: 0.1419 - binary_accuracy: 0.9542 - val_loss: 0.2854 - val_binary_accuracy: 0.8861
Epoch 6/20
15000/15000 [==============================] - 4s 297us/step - loss: 0.1146 - binary_accuracy: 0.9652 - val_loss: 0.3159 - val_binary_accuracy: 0.8774
Epoch 7/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.0977 - binary_accuracy: 0.9711 - val_loss: 0.3132 - val_binary_accuracy: 0.8844
Epoch 8/20
15000/15000 [==============================] - 4s 296us/step - loss: 0.0805 - binary_accuracy: 0.9764 - val_loss: 0.3867 - val_binary_accuracy: 0.8653
Epoch 9/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.0659 - binary_accuracy: 0.9821 - val_loss: 0.3636 - val_binary_accuracy: 0.8782
Epoch 10/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.0553 - binary_accuracy: 0.9849 - val_loss: 0.3851 - val_binary_accuracy: 0.8791
Epoch 11/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.0455 - binary_accuracy: 0.9887 - val_loss: 0.4174 - val_binary_accuracy: 0.8761
Epoch 12/20
15000/15000 [==============================] - 4s 293us/step - loss: 0.0385 - binary_accuracy: 0.9913 - val_loss: 0.4514 - val_binary_accuracy: 0.8699
Epoch 13/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.0295 - binary_accuracy: 0.9940 - val_loss: 0.4722 - val_binary_accuracy: 0.8735
Epoch 14/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.0243 - binary_accuracy: 0.9949 - val_loss: 0.5040 - val_binary_accuracy: 0.8722
Epoch 15/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.0178 - binary_accuracy: 0.9976 - val_loss: 0.5368 - val_binary_accuracy: 0.8691
Epoch 16/20
15000/15000 [==============================] - 4s 296us/step - loss: 0.0169 - binary_accuracy: 0.9969 - val_loss: 0.5723 - val_binary_accuracy: 0.8701
Epoch 17/20
15000/15000 [==============================] - 4s 295us/step - loss: 0.0095 - binary_accuracy: 0.9994 - val_loss: 0.6375 - val_binary_accuracy: 0.8608
Epoch 18/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.0104 - binary_accuracy: 0.9984 - val_loss: 0.6405 - val_binary_accuracy: 0.8650
Epoch 19/20
15000/15000 [==============================] - 4s 296us/step - loss: 0.0083 - binary_accuracy: 0.9989 - val_loss: 0.6703 - val_binary_accuracy: 0.8659
Epoch 20/20
15000/15000 [==============================] - 4s 297us/step - loss: 0.0095 - binary_accuracy: 0.9981 - val_loss: 0.6956 - val_binary_accuracy: 0.8660
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4).
###Code
plt.hist(model.predict(x_test))
###Output
_____no_output_____
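###Markdown
Once you are happy with a trained network you will usually want to persist it so it can be reused without retraining. A minimal sketch using Keras's HDF5 format; the filename is an arbitrary assumption and `h5py` must be installed:
###Code
from keras.models import load_model
model.save('imdb_binary_classifier.h5')          # saves architecture, weights and optimizer state
restored_model = load_model('imdb_binary_classifier.h5')
print(restored_model.evaluate(x_test, y_test))   # should match the original model's test loss/accuracy
###Output
_____no_output_____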
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
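###Markdown
Going the other way -- turning a new raw review string into the same integer encoding -- looks roughly like the sketch below. The example sentence is made up, and the function mirrors the dataset's conventions (indices offset by 3, 1 for "start of sequence", 2 for unknown or out-of-vocabulary words); the real Keras preprocessing also handles punctuation, which is ignored here.
###Code
def encode_review(text, word_index, num_words=10000):
    encoded = [1]  # 1 marks the start of a sequence
    for word in text.lower().split():
        index = word_index.get(word)
        if index is not None and index + 3 < num_words:
            encoded.append(index + 3)  # shift by 3 to match the reserved indices
        else:
            encoded.append(2)          # 2 stands for "unknown"
    return encoded
print(encode_review("this movie was brilliant", word_index))
###Output
_____no_output_____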
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
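Before that, here is a rough numpy sketch of the chain of operations described above (an illustration only, not how Keras implements it internally):
###Code
import numpy as np
# Illustrative shapes: a batch of 2 samples with 10,000 features each
x_batch = np.random.rand(2, 10000)
W = np.random.rand(10000, 16) * 0.01   # weight matrix of shape (input_dimension, 16)
b = np.zeros(16)                       # bias vector
layer_output = np.maximum(0., np.dot(x_batch, W) + b)   # relu(dot(x, W) + b)
layer_output.shape                     # (2, 16): each sample is projected onto 16 dimensions
###Output
_____no_output_____
###Markdown
With that picture in mind, here is the actual Keras model: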
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
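For a bit of intuition (a small sketch that is not part of the original notebook), binary crossentropy for a single prediction `p` against a label `y` is `-(y*log(p) + (1-y)*log(1-p))`; confidently wrong predictions are penalized much more heavily than mildly wrong ones:
###Code
import numpy as np

def binary_crossentropy_demo(y_true, y_pred):
    # Mean of -(y*log(p) + (1-y)*log(1-p)) over the samples (illustrative helper, not a Keras function)
    y_true = np.asarray(y_true, dtype='float64')
    y_pred = np.asarray(y_pred, dtype='float64')
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy_demo([1., 0., 1.], [0.9, 0.1, 0.8]))  # low loss: predictions are good
print(binary_crossentropy_demo([1., 0., 1.], [0.1, 0.9, 0.2]))  # high loss: confidently wrong
###Output
_____no_output_____
###Markdown
With that in mind, here is the compilation step itself: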
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak around the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after roughly the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
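Before doing that, here is an aside (a hedged sketch, not something this notebook relies on): Keras provides an `EarlyStopping` callback that automates the "stop when the validation metric stops improving" strategy instead of hard-coding the number of epochs.
###Code
from keras.callbacks import EarlyStopping

# Illustrative alternative: stop once val_loss has not improved for 2 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss', patience=2)

model_es = models.Sequential()
model_es.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model_es.add(layers.Dense(16, activation='relu'))
model_es.add(layers.Dense(1, activation='sigmoid'))
model_es.compile(optimizer='rmsprop',
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
model_es.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[early_stopping])
###Output
_____no_output_____
###Markdown
Now, as planned above, we retrain a fresh network for four epochs and evaluate it on the test data: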
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
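The values returned by `predict` are probabilities. As a final sketch (not part of the original notebook), we can threshold them at 0.5 to obtain hard class labels and compare those with the ground-truth labels:
###Code
# Illustrative post-processing: turn predicted probabilities into 0/1 labels
predicted_labels = (model.predict(x_test) > 0.5).astype('int32').ravel()
manual_accuracy = (predicted_labels == test_labels).mean()
manual_accuracy  # should be close to the accuracy reported by model.evaluate above
###Output
_____no_output_____
###Markdown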
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive": the 10,000 most frequently occurring words are computed, and each word in a review is encoded by its index in this frequency-ranked vocabulary. Choosing this cutoff is somewhat like choosing a hyperparameter, as will become clear once we build the data representation to feed to the neural network.
###Code
train_data[0:2]
train_labels.shape
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
len(word_index.keys())
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0:10]
x_train.shape
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument. This is valuable since the optimizer's learning rate is one of the peskiest hyperparameters to tune:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 3s 207us/step - loss: 0.5084 - binary_accuracy: 0.7813 - val_loss: 0.3798 - val_binary_accuracy: 0.8689
Epoch 2/20
15000/15000 [==============================] - 3s 170us/step - loss: 0.3006 - binary_accuracy: 0.9042 - val_loss: 0.3003 - val_binary_accuracy: 0.8898
Epoch 3/20
15000/15000 [==============================] - 2s 167us/step - loss: 0.2180 - binary_accuracy: 0.9287 - val_loss: 0.3081 - val_binary_accuracy: 0.8719
Epoch 4/20
15000/15000 [==============================] - 3s 176us/step - loss: 0.1751 - binary_accuracy: 0.9435 - val_loss: 0.2839 - val_binary_accuracy: 0.8833
Epoch 5/20
15000/15000 [==============================] - 3s 169us/step - loss: 0.1427 - binary_accuracy: 0.9543 - val_loss: 0.2847 - val_binary_accuracy: 0.8866
Epoch 6/20
15000/15000 [==============================] - 3s 175us/step - loss: 0.1150 - binary_accuracy: 0.9654 - val_loss: 0.3146 - val_binary_accuracy: 0.8772
Epoch 7/20
15000/15000 [==============================] - 3s 172us/step - loss: 0.0979 - binary_accuracy: 0.9705 - val_loss: 0.3127 - val_binary_accuracy: 0.8843
Epoch 8/20
15000/15000 [==============================] - 3s 176us/step - loss: 0.0807 - binary_accuracy: 0.9763 - val_loss: 0.3858 - val_binary_accuracy: 0.8653
Epoch 9/20
15000/15000 [==============================] - 3s 172us/step - loss: 0.0660 - binary_accuracy: 0.9822 - val_loss: 0.3632 - val_binary_accuracy: 0.8784
Epoch 10/20
15000/15000 [==============================] - 3s 175us/step - loss: 0.0554 - binary_accuracy: 0.9853 - val_loss: 0.3842 - val_binary_accuracy: 0.8791
Epoch 11/20
15000/15000 [==============================] - 3s 196us/step - loss: 0.0454 - binary_accuracy: 0.9889 - val_loss: 0.4167 - val_binary_accuracy: 0.8766
Epoch 12/20
15000/15000 [==============================] - 3s 178us/step - loss: 0.0385 - binary_accuracy: 0.9915 - val_loss: 0.4506 - val_binary_accuracy: 0.8696
Epoch 13/20
15000/15000 [==============================] - 3s 173us/step - loss: 0.0297 - binary_accuracy: 0.9929 - val_loss: 0.4698 - val_binary_accuracy: 0.8731
Epoch 14/20
15000/15000 [==============================] - 3s 187us/step - loss: 0.0244 - binary_accuracy: 0.9949 - val_loss: 0.5025 - val_binary_accuracy: 0.8721
Epoch 15/20
15000/15000 [==============================] - 3s 176us/step - loss: 0.0177 - binary_accuracy: 0.9977 - val_loss: 0.5359 - val_binary_accuracy: 0.8696
Epoch 16/20
15000/15000 [==============================] - 3s 170us/step - loss: 0.0169 - binary_accuracy: 0.9970 - val_loss: 0.5719 - val_binary_accuracy: 0.8692
Epoch 17/20
15000/15000 [==============================] - 3s 187us/step - loss: 0.0094 - binary_accuracy: 0.9994 - val_loss: 0.6255 - val_binary_accuracy: 0.8631
Epoch 18/20
15000/15000 [==============================] - 3s 178us/step - loss: 0.0110 - binary_accuracy: 0.9979 - val_loss: 0.6406 - val_binary_accuracy: 0.8667
Epoch 19/20
15000/15000 [==============================] - 3s 174us/step - loss: 0.0093 - binary_accuracy: 0.9985 - val_loss: 0.6706 - val_binary_accuracy: 0.8673
Epoch 20/20
15000/15000 [==============================] - 3s 172us/step - loss: 0.0041 - binary_accuracy: 0.9999 - val_loss: 0.6993 - val_binary_accuracy: 0.8664
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
import seaborn as sns
# Plot settings
%matplotlib inline
sns.set_context('talk')
sns.set_palette('gray')
sns.set_style('ticks', {'grid.color' : '0.9'})
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak around the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after roughly the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
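(A small aside, not in the original notebook: instead of reading the turning point off the plot, we can recover it directly from the history dictionary.)
###Code
import numpy as np
# Illustrative: the 1-indexed epoch with the lowest validation loss
best_epoch = int(np.argmin(history_dict['val_loss'])) + 1
best_epoch
###Output
_____no_output_____
###Markdown
Now, as planned, we retrain from scratch for four epochs and evaluate on the test data: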
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved!
###Code
# We experiment with 4 hidden layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
# We experiment with 3 hidden layers, but with the mse loss function
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='mse',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
# We experiment with 3 hidden layers, but with the tanh activation function
model = models.Sequential()
model.add(layers.Dense(16, activation='tanh', input_shape=(10000,)))
model.add(layers.Dense(16, activation='tanh'))
model.add(layers.Dense(16, activation='tanh'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
# We experiment with 3 hidden layers, each with 32 units
model = models.Sequential()
model.add(layers.Dense(32, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
# We experiment with 3 hidden layers, each with 64 units
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
###Output
Epoch 1/4
25000/25000 [==============================] - 4s 169us/step - loss: 0.4393 - acc: 0.7961
Epoch 2/4
25000/25000 [==============================] - 4s 150us/step - loss: 0.2382 - acc: 0.9090
Epoch 3/4
25000/25000 [==============================] - 4s 149us/step - loss: 0.1846 - acc: 0.9290
Epoch 4/4
25000/25000 [==============================] - 4s 150us/step - loss: 0.1435 - acc: 0.9459
25000/25000 [==============================] - 5s 181us/step
[0.3257248517560959, 0.8782]
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[1]])
decoded_review
# Decode every review; use a distinct inner variable so the word index does not shadow the loop index i
decoded_review_l = [0] * len(train_data)
for i in range(len(train_data)):
    decoded_review_l[i] = ' '.join([reverse_word_index.get(idx - 3, '?') for idx in train_data[i]])
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 1s 19ms/step - loss: 0.5784 - binary_accuracy: 0.7208 - val_loss: 0.3843 - val_binary_accuracy: 0.8645
Epoch 2/20
30/30 [==============================] - 0s 8ms/step - loss: 0.3191 - binary_accuracy: 0.9006 - val_loss: 0.3032 - val_binary_accuracy: 0.8882
Epoch 3/20
30/30 [==============================] - 0s 8ms/step - loss: 0.2248 - binary_accuracy: 0.9289 - val_loss: 0.2920 - val_binary_accuracy: 0.8848
Epoch 4/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1784 - binary_accuracy: 0.9460 - val_loss: 0.2907 - val_binary_accuracy: 0.8832
Epoch 5/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1402 - binary_accuracy: 0.9574 - val_loss: 0.2857 - val_binary_accuracy: 0.8850
Epoch 6/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1118 - binary_accuracy: 0.9672 - val_loss: 0.3248 - val_binary_accuracy: 0.8781
Epoch 7/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0908 - binary_accuracy: 0.9743 - val_loss: 0.3522 - val_binary_accuracy: 0.8694
Epoch 8/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0814 - binary_accuracy: 0.9767 - val_loss: 0.3309 - val_binary_accuracy: 0.8823
Epoch 9/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0632 - binary_accuracy: 0.9836 - val_loss: 0.3545 - val_binary_accuracy: 0.8800
Epoch 10/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0498 - binary_accuracy: 0.9893 - val_loss: 0.4075 - val_binary_accuracy: 0.8752
Epoch 11/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0374 - binary_accuracy: 0.9933 - val_loss: 0.4360 - val_binary_accuracy: 0.8742
Epoch 12/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0334 - binary_accuracy: 0.9935 - val_loss: 0.4575 - val_binary_accuracy: 0.8747
Epoch 13/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0278 - binary_accuracy: 0.9952 - val_loss: 0.4673 - val_binary_accuracy: 0.8724
Epoch 14/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0209 - binary_accuracy: 0.9970 - val_loss: 0.5035 - val_binary_accuracy: 0.8693
Epoch 15/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0151 - binary_accuracy: 0.9987 - val_loss: 0.5352 - val_binary_accuracy: 0.8690
Epoch 16/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0132 - binary_accuracy: 0.9985 - val_loss: 0.5718 - val_binary_accuracy: 0.8698
Epoch 17/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0108 - binary_accuracy: 0.9988 - val_loss: 0.6018 - val_binary_accuracy: 0.8677
Epoch 18/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0076 - binary_accuracy: 0.9994 - val_loss: 0.6349 - val_binary_accuracy: 0.8650
Epoch 19/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0052 - binary_accuracy: 0.9999 - val_loss: 0.6639 - val_binary_accuracy: 0.8639
Epoch 20/20
30/30 [==============================] - 0s 7ms/step - loss: 0.0045 - binary_accuracy: 0.9997 - val_loss: 0.6993 - val_binary_accuracy: 0.8646
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak around the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after roughly the fourth epoch, we are over-optimizing on the training data, and we end up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after about four epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# Note: using the test set as validation_data here is only for inspection; it should not be used to tune the model
history = model.fit(x_train, y_train, epochs=4, batch_size=512, validation_data=(x_test, y_test))
history_dict = history.history
#results = model.evaluate(x_test, y_test)
results  # still the evaluation of the first model above, since the second evaluate call is commented out
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
pred = model.predict(x_test)
###Output
_____no_output_____
###Markdown
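Since `pred` holds the predicted probabilities, a quick way to see how confident the network is (a sketch, not in the original notebook) is to look at their distribution:
###Code
# Illustrative: mass near 0 and 1 means the network is confident about most reviews
plt.hist(pred.ravel(), bins=50)
plt.xlabel('Predicted probability of a positive review')
plt.ylabel('Number of reviews')
plt.show()
###Output
_____no_output_____
###Markdown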
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
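# Quick sanity check of the encoding described above (illustrative, not in the
# original cell): the sequence [3, 5] becomes a 10,000-dimensional vector that is
# all zeros except at indices 3 and 5.
demo = vectorize_sequences([[3, 5]])
demo.shape, demo[0, 3], demo[0, 5], demo[0].sum()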
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
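# A small NumPy sketch (an illustrative assumption, not part of the original cell) of
# what one Dense layer with a relu activation computes for a single input vector:
# output = relu(dot(input, W) + b), with W of shape (input_dimension, 16).
import numpy as np
W = np.random.randn(10000, 16) * 0.01   # stand-in weight matrix
b = np.zeros(16)                        # stand-in bias vector
relu_output = np.maximum(0., np.dot(x_train[0], W) + b)
relu_output.shape                       # -> (16,)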
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
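# Illustrative aside (an added sketch, not in the original cell): for one prediction p
# with true label y, binary crossentropy is -(y*log(p) + (1 - y)*log(1 - p)).
import numpy as np
p, y = 0.9, 1.0
-(y * np.log(p) + (1 - y) * np.log(1 - p))   # ~0.105, small because p is close to y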
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
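# Illustrative aside (an added sketch): binary_accuracy is simply the fraction of
# predicted probabilities that fall on the correct side of the 0.5 threshold.
import numpy as np
probs = np.array([0.2, 0.8, 0.6, 0.4])
labels = np.array([0., 1., 0., 0.])
np.mean((probs > 0.5) == labels)   # -> 0.75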
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
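# Sanity-check the split sizes (an added check): 15,000 samples remain for training
# and 10,000 are held out for validation.
partial_x_train.shape, x_val.shape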
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0077 - binary_accuracy: 0.9983 - val_loss: 0.7050 - val_binary_accuracy: 0.8651
Epoch 2/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0019 - binary_accuracy: 0.9999 - val_loss: 0.7361 - val_binary_accuracy: 0.8658
Epoch 3/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0061 - binary_accuracy: 0.9983 - val_loss: 0.7643 - val_binary_accuracy: 0.8668
Epoch 4/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0012 - binary_accuracy: 0.9999 - val_loss: 0.7880 - val_binary_accuracy: 0.8664
Epoch 5/20
15000/15000 [==============================] - 1s 41us/step - loss: 9.9081e-04 - binary_accuracy: 0.9999 - val_loss: 0.8225 - val_binary_accuracy: 0.8656
Epoch 6/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0030 - binary_accuracy: 0.9991 - val_loss: 0.8584 - val_binary_accuracy: 0.8663
Epoch 7/20
15000/15000 [==============================] - 1s 40us/step - loss: 5.6672e-04 - binary_accuracy: 0.9999 - val_loss: 0.8866 - val_binary_accuracy: 0.8657
Epoch 8/20
15000/15000 [==============================] - 1s 41us/step - loss: 4.6281e-04 - binary_accuracy: 0.9999 - val_loss: 0.9381 - val_binary_accuracy: 0.8644
Epoch 9/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0018 - binary_accuracy: 0.9995 - val_loss: 0.9743 - val_binary_accuracy: 0.8641
Epoch 10/20
15000/15000 [==============================] - 1s 41us/step - loss: 2.1893e-04 - binary_accuracy: 1.0000 - val_loss: 0.9961 - val_binary_accuracy: 0.8645
Epoch 11/20
15000/15000 [==============================] - 1s 41us/step - loss: 1.8174e-04 - binary_accuracy: 1.0000 - val_loss: 1.0329 - val_binary_accuracy: 0.8643
Epoch 12/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0020 - binary_accuracy: 0.9992 - val_loss: 1.0795 - val_binary_accuracy: 0.8623
Epoch 13/20
15000/15000 [==============================] - 1s 41us/step - loss: 1.0687e-04 - binary_accuracy: 1.0000 - val_loss: 1.0915 - val_binary_accuracy: 0.8620
Epoch 14/20
15000/15000 [==============================] - 1s 40us/step - loss: 8.5433e-05 - binary_accuracy: 1.0000 - val_loss: 1.1196 - val_binary_accuracy: 0.8612
Epoch 15/20
15000/15000 [==============================] - 1s 41us/step - loss: 1.2130e-04 - binary_accuracy: 1.0000 - val_loss: 1.2767 - val_binary_accuracy: 0.8519
Epoch 16/20
15000/15000 [==============================] - 1s 41us/step - loss: 1.0919e-04 - binary_accuracy: 1.0000 - val_loss: 1.1844 - val_binary_accuracy: 0.8612
Epoch 17/20
15000/15000 [==============================] - 1s 41us/step - loss: 4.1298e-05 - binary_accuracy: 1.0000 - val_loss: 1.2235 - val_binary_accuracy: 0.8598
Epoch 18/20
15000/15000 [==============================] - 1s 41us/step - loss: 3.0781e-05 - binary_accuracy: 1.0000 - val_loss: 1.3004 - val_binary_accuracy: 0.8588
Epoch 19/20
15000/15000 [==============================] - 1s 41us/step - loss: 0.0027 - binary_accuracy: 0.9992 - val_loss: 1.2983 - val_binary_accuracy: 0.8607
Epoch 20/20
15000/15000 [==============================] - 1s 41us/step - loss: 1.7237e-05 - binary_accuracy: 1.0000 - val_loss: 1.3093 - val_binary_accuracy: 0.8609
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
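###Markdown
As an aside (an added sketch, not part of the original notebook), rather than hard-coding the number of epochs you can let Keras stop training automatically once the validation loss stops improving, via the `EarlyStopping` callback. The `patience` value below is an illustrative assumption:
###Code
from keras.callbacks import EarlyStopping
es_model = models.Sequential()
es_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
es_model.add(layers.Dense(16, activation='relu'))
es_model.add(layers.Dense(1, activation='sigmoid'))
es_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
es_model.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[EarlyStopping(monitor='val_loss', patience=2)])
###Output
_____no_output_____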
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
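# A small helper (an added convenience, not in the original cell) wrapping the same
# lookup so any training review can be decoded by index; the offset of 3 matches the
# reserved indices noted above.
def decode_review(index):
    return ' '.join(reverse_word_index.get(i - 3, '?') for i in train_data[index])
decode_review(1)[:100]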
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 1s 18ms/step - loss: 0.5019 - binary_accuracy: 0.7920 - val_loss: 0.3751 - val_binary_accuracy: 0.8729
Epoch 2/20
30/30 [==============================] - 0s 8ms/step - loss: 0.2979 - binary_accuracy: 0.9025 - val_loss: 0.3080 - val_binary_accuracy: 0.8810
Epoch 3/20
30/30 [==============================] - 0s 8ms/step - loss: 0.2208 - binary_accuracy: 0.9293 - val_loss: 0.2774 - val_binary_accuracy: 0.8909
Epoch 4/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1705 - binary_accuracy: 0.9449 - val_loss: 0.3106 - val_binary_accuracy: 0.8777
Epoch 5/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1418 - binary_accuracy: 0.9553 - val_loss: 0.2831 - val_binary_accuracy: 0.8846
Epoch 6/20
30/30 [==============================] - 0s 8ms/step - loss: 0.1172 - binary_accuracy: 0.9638 - val_loss: 0.3302 - val_binary_accuracy: 0.8763
Epoch 7/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0976 - binary_accuracy: 0.9702 - val_loss: 0.3304 - val_binary_accuracy: 0.8798
Epoch 8/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0818 - binary_accuracy: 0.9771 - val_loss: 0.3314 - val_binary_accuracy: 0.8794
Epoch 9/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0671 - binary_accuracy: 0.9818 - val_loss: 0.3550 - val_binary_accuracy: 0.8788
Epoch 10/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0557 - binary_accuracy: 0.9851 - val_loss: 0.3851 - val_binary_accuracy: 0.8761
Epoch 11/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0441 - binary_accuracy: 0.9897 - val_loss: 0.4262 - val_binary_accuracy: 0.8710
Epoch 12/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0365 - binary_accuracy: 0.9919 - val_loss: 0.4579 - val_binary_accuracy: 0.8722
Epoch 13/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0317 - binary_accuracy: 0.9929 - val_loss: 0.4710 - val_binary_accuracy: 0.8752
Epoch 14/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0245 - binary_accuracy: 0.9959 - val_loss: 0.5036 - val_binary_accuracy: 0.8707
Epoch 15/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0201 - binary_accuracy: 0.9965 - val_loss: 0.5334 - val_binary_accuracy: 0.8713
Epoch 16/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0134 - binary_accuracy: 0.9991 - val_loss: 0.6076 - val_binary_accuracy: 0.8648
Epoch 17/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0156 - binary_accuracy: 0.9971 - val_loss: 0.6103 - val_binary_accuracy: 0.8694
Epoch 18/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0073 - binary_accuracy: 0.9997 - val_loss: 0.6428 - val_binary_accuracy: 0.8677
Epoch 19/20
30/30 [==============================] - 0s 8ms/step - loss: 0.0111 - binary_accuracy: 0.9975 - val_loss: 0.6829 - val_binary_accuracy: 0.8661
Epoch 20/20
30/30 [==============================] - 0s 7ms/step - loss: 0.0042 - binary_accuracy: 0.9999 - val_loss: 0.7096 - val_binary_accuracy: 0.8661
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
###Code
import tensorflow
tensorflow.keras.__version__
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[:5]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
train_data[0]
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
x_train.shape
train_labels[:5]
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
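# Inspect the layer stack (an added check): this should report 160,016 + 272 + 17
# trainable parameters for the three Dense layers.
model.summary()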
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects, such as `losses.binary_crossentropy` and `metrics.binary_accuracy`, as the `loss` or `metrics` arguments. Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
30/30 [==============================] - 4s 30ms/step - loss: 0.5792 - accuracy: 0.7185 - val_loss: 0.3909 - val_accuracy: 0.8558
Epoch 2/20
30/30 [==============================] - 1s 19ms/step - loss: 0.3143 - accuracy: 0.9082 - val_loss: 0.3085 - val_accuracy: 0.8860
Epoch 3/20
30/30 [==============================] - 1s 19ms/step - loss: 0.2270 - accuracy: 0.9287 - val_loss: 0.2779 - val_accuracy: 0.8913
Epoch 4/20
30/30 [==============================] - 1s 19ms/step - loss: 0.1723 - accuracy: 0.9482 - val_loss: 0.2730 - val_accuracy: 0.8914
Epoch 5/20
30/30 [==============================] - 1s 18ms/step - loss: 0.1414 - accuracy: 0.9572 - val_loss: 0.2801 - val_accuracy: 0.8870
Epoch 6/20
30/30 [==============================] - 1s 19ms/step - loss: 0.1152 - accuracy: 0.9670 - val_loss: 0.3159 - val_accuracy: 0.8756
Epoch 7/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0982 - accuracy: 0.9708 - val_loss: 0.3113 - val_accuracy: 0.8844
Epoch 8/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0712 - accuracy: 0.9833 - val_loss: 0.3575 - val_accuracy: 0.8736
Epoch 9/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0623 - accuracy: 0.9865 - val_loss: 0.3487 - val_accuracy: 0.8804
Epoch 10/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0476 - accuracy: 0.9912 - val_loss: 0.3800 - val_accuracy: 0.8764
Epoch 11/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0394 - accuracy: 0.9926 - val_loss: 0.4000 - val_accuracy: 0.8765
Epoch 12/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0329 - accuracy: 0.9943 - val_loss: 0.4289 - val_accuracy: 0.8730
Epoch 13/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0231 - accuracy: 0.9974 - val_loss: 0.4808 - val_accuracy: 0.8692
Epoch 14/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0192 - accuracy: 0.9978 - val_loss: 0.4900 - val_accuracy: 0.8711
Epoch 15/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0135 - accuracy: 0.9992 - val_loss: 0.5275 - val_accuracy: 0.8674
Epoch 16/20
30/30 [==============================] - 1s 18ms/step - loss: 0.0106 - accuracy: 0.9993 - val_loss: 0.5481 - val_accuracy: 0.8703
Epoch 17/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0080 - accuracy: 0.9997 - val_loss: 0.5841 - val_accuracy: 0.8676
Epoch 18/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0060 - accuracy: 0.9999 - val_loss: 0.6677 - val_accuracy: 0.8608
Epoch 19/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0054 - accuracy: 0.9999 - val_loss: 0.6401 - val_accuracy: 0.8675
Epoch 20/20
30/30 [==============================] - 1s 19ms/step - loss: 0.0043 - accuracy: 0.9996 - val_loss: 0.6752 - val_accuracy: 0.8672
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
print(len(x_train))
print(len(x_test))
###Output
25000
25000
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 16) 160016
_________________________________________________________________
dense_2 (Dense) (None, 16) 272
_________________________________________________________________
dense_3 (Dense) (None, 1) 17
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________
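###Markdown
The parameter counts in the summary above follow directly from the `output = relu(dot(W, input) + b)` picture: each `Dense` layer has `input_dimension * units` weights plus `units` biases. This small check is an addition to the original text:
###Code
# Reproduce the Param # column of model.summary() by hand.
print(10000 * 16 + 16)      # dense_1: 160016
print(16 * 16 + 16)         # dense_2: 272
print(16 * 1 + 1)           # dense_3: 17
print(160016 + 272 + 17)    # total trainable parameters: 160305
###Output
_____no_output_____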
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 3s 209us/step - loss: 0.5084 - binary_accuracy: 0.7813 - val_loss: 0.3797 - val_binary_accuracy: 0.8684
Epoch 2/20
15000/15000 [==============================] - 3s 179us/step - loss: 0.3004 - binary_accuracy: 0.9047 - val_loss: 0.3004 - val_binary_accuracy: 0.8897
Epoch 3/20
15000/15000 [==============================] - 3s 184us/step - loss: 0.2179 - binary_accuracy: 0.9285 - val_loss: 0.3085 - val_binary_accuracy: 0.8711
Epoch 4/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.1750 - binary_accuracy: 0.9437 - val_loss: 0.2840 - val_binary_accuracy: 0.8832
Epoch 5/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.1427 - binary_accuracy: 0.9543 - val_loss: 0.2841 - val_binary_accuracy: 0.8872
Epoch 6/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.1150 - binary_accuracy: 0.9650 - val_loss: 0.3166 - val_binary_accuracy: 0.8772
Epoch 7/20
15000/15000 [==============================] - 3s 180us/step - loss: 0.0980 - binary_accuracy: 0.9705 - val_loss: 0.3127 - val_binary_accuracy: 0.8846
Epoch 8/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0807 - binary_accuracy: 0.9763 - val_loss: 0.3859 - val_binary_accuracy: 0.8649
Epoch 9/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0661 - binary_accuracy: 0.9821 - val_loss: 0.3635 - val_binary_accuracy: 0.8782
Epoch 10/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0561 - binary_accuracy: 0.9853 - val_loss: 0.3843 - val_binary_accuracy: 0.8792
Epoch 11/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.0439 - binary_accuracy: 0.9893 - val_loss: 0.4153 - val_binary_accuracy: 0.8779
Epoch 12/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.0381 - binary_accuracy: 0.9921 - val_loss: 0.4525 - val_binary_accuracy: 0.8690
Epoch 13/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0300 - binary_accuracy: 0.9928 - val_loss: 0.4698 - val_binary_accuracy: 0.8729
Epoch 14/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0247 - binary_accuracy: 0.9945 - val_loss: 0.5023 - val_binary_accuracy: 0.8726
Epoch 15/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0175 - binary_accuracy: 0.9979 - val_loss: 0.5342 - val_binary_accuracy: 0.8693
Epoch 16/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.0149 - binary_accuracy: 0.9983 - val_loss: 0.5710 - val_binary_accuracy: 0.8698
Epoch 17/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.0151 - binary_accuracy: 0.9971 - val_loss: 0.6025 - val_binary_accuracy: 0.8697
Epoch 18/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0075 - binary_accuracy: 0.9996 - val_loss: 0.6782 - val_binary_accuracy: 0.8633
Epoch 19/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.0117 - binary_accuracy: 0.9975 - val_loss: 0.6693 - val_binary_accuracy: 0.8673
Epoch 20/20
15000/15000 [==============================] - 3s 179us/step - loss: 0.0041 - binary_accuracy: 0.9999 - val_loss: 0.6941 - val_binary_accuracy: 0.8658
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
# acc_values = history_dict['acc']
# val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
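###Markdown
As a small addition to the original text, we can also read the best epoch directly off the recorded history rather than only off the plots:
###Code
# Epoch (1-indexed) with the lowest validation loss in the run above.
import numpy as np
best_epoch = np.argmin(history.history['val_loss']) + 1  # for the run shown above this comes out to 4
best_epoch
###Output
_____no_output_____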
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
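###Markdown
As one more added sketch (not part of the original text), the probabilities returned by `predict` can be thresholded at 0.5 to get hard class labels and compared against the true test labels:
###Code
# Convert predicted probabilities into 0/1 labels and measure test accuracy.
predictions = model.predict(x_test)
predicted_labels = (predictions > 0.5).astype('float32').ravel()
(predicted_labels == y_test).mean()
###Output
_____no_output_____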
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
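###Markdown
For completeness, here is a minimal sketch (an addition, not part of the original text) of the first option described earlier -- padding the integer sequences to a common length so that they could feed an `Embedding` layer. The value `maxlen=256` is an arbitrary illustrative choice, and the rest of this notebook keeps using the one-hot vectors instead:
###Code
# Pad/truncate every review to 256 integer word indices.
from keras.preprocessing.sequence import pad_sequences
padded_train = pad_sequences(train_data, maxlen=256)
padded_train.shape  # (25000, 256) integer tensor
###Output
_____no_output_____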
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
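###Markdown
To make the two activation functions mentioned above concrete, here is a tiny NumPy sketch (an addition to the original text) of what `relu` and `sigmoid` do to a few values:
###Code
import numpy as np

def naive_relu(x):
    return np.maximum(x, 0.)             # zeroes out negative values

def naive_sigmoid(x):
    return 1. / (1. + np.exp(-x))        # squashes values into (0, 1)

print(naive_relu(np.array([-2., 0., 3.])))     # [0. 0. 3.]
print(naive_sigmoid(np.array([-2., 0., 3.])))  # ~[0.12 0.5  0.95]
###Output
_____no_output_____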
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
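###Markdown
To give a feel for the loss we just configured, here is a small NumPy sketch (an addition to the original text) of binary crossentropy for a single prediction: the loss is small when the predicted probability agrees with the true label and grows quickly when it does not:
###Code
import numpy as np

def naive_binary_crossentropy(y_true, p):
    # -(y*log(p) + (1-y)*log(1-p))
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(naive_binary_crossentropy(1., 0.9))  # ~0.105: confident and correct
print(naive_binary_crossentropy(1., 0.1))  # ~2.303: confident and wrong
###Output
_____no_output_____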
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 1s - loss: 0.5103 - acc: 0.7911 - val_loss: 0.4016 - val_acc: 0.8628
Epoch 2/20
15000/15000 [==============================] - 1s - loss: 0.3110 - acc: 0.9031 - val_loss: 0.3085 - val_acc: 0.8870
Epoch 3/20
15000/15000 [==============================] - 1s - loss: 0.2309 - acc: 0.9235 - val_loss: 0.2803 - val_acc: 0.8908
Epoch 4/20
15000/15000 [==============================] - 1s - loss: 0.1795 - acc: 0.9428 - val_loss: 0.2735 - val_acc: 0.8893
Epoch 5/20
15000/15000 [==============================] - 1s - loss: 0.1475 - acc: 0.9526 - val_loss: 0.2788 - val_acc: 0.8890
Epoch 6/20
15000/15000 [==============================] - 1s - loss: 0.1185 - acc: 0.9638 - val_loss: 0.3330 - val_acc: 0.8764
Epoch 7/20
15000/15000 [==============================] - 1s - loss: 0.1005 - acc: 0.9703 - val_loss: 0.3055 - val_acc: 0.8838
Epoch 8/20
15000/15000 [==============================] - 1s - loss: 0.0818 - acc: 0.9773 - val_loss: 0.3344 - val_acc: 0.8769
Epoch 9/20
15000/15000 [==============================] - 1s - loss: 0.0696 - acc: 0.9814 - val_loss: 0.3607 - val_acc: 0.8800
Epoch 10/20
15000/15000 [==============================] - 1s - loss: 0.0547 - acc: 0.9873 - val_loss: 0.3776 - val_acc: 0.8785
Epoch 11/20
15000/15000 [==============================] - 1s - loss: 0.0453 - acc: 0.9895 - val_loss: 0.4035 - val_acc: 0.8765
Epoch 12/20
15000/15000 [==============================] - 1s - loss: 0.0353 - acc: 0.9930 - val_loss: 0.4437 - val_acc: 0.8766
Epoch 13/20
15000/15000 [==============================] - 1s - loss: 0.0269 - acc: 0.9956 - val_loss: 0.4637 - val_acc: 0.8747
Epoch 14/20
15000/15000 [==============================] - 1s - loss: 0.0212 - acc: 0.9968 - val_loss: 0.4877 - val_acc: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s - loss: 0.0162 - acc: 0.9977 - val_loss: 0.6080 - val_acc: 0.8625
Epoch 16/20
15000/15000 [==============================] - 1s - loss: 0.0115 - acc: 0.9993 - val_loss: 0.5778 - val_acc: 0.8698
Epoch 17/20
15000/15000 [==============================] - 1s - loss: 0.0116 - acc: 0.9979 - val_loss: 0.5906 - val_acc: 0.8702
Epoch 18/20
15000/15000 [==============================] - 1s - loss: 0.0054 - acc: 0.9998 - val_loss: 0.6204 - val_acc: 0.8639
Epoch 19/20
15000/15000 [==============================] - 1s - loss: 0.0083 - acc: 0.9984 - val_loss: 0.6419 - val_acc: 0.8676
Epoch 20/20
15000/15000 [==============================] - 1s - loss: 0.0031 - acc: 0.9998 - val_loss: 0.6796 - val_acc: 0.8683
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 23s 1us/step
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
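###Markdown
As an added shape-level sketch (not part of the original text), here is roughly what a single `Dense(16, activation='relu')` layer computes for one vectorized review, following the `output = relu(dot(W, input) + b)` formula above; the values are random placeholders:
###Code
import numpy as np

x = np.random.rand(10000)                   # one one-hot encoded review (placeholder values)
W = np.random.rand(10000, 16)               # weight matrix of shape (input_dimension, 16)
b = np.zeros(16)                            # bias vector
output = np.maximum(np.dot(x, W) + b, 0.)   # relu(dot(x, W) + b)
output.shape                                # (16,): a 16-dimensional representation
###Output
_____no_output_____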
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions.Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At the same time, we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 4s 248us/step - loss: 0.5084 - acc: 0.7813 - val_loss: 0.3797 - val_acc: 0.8684
Epoch 2/20
15000/15000 [==============================] - 3s 186us/step - loss: 0.3004 - acc: 0.9047 - val_loss: 0.3004 - val_acc: 0.8897
Epoch 3/20
15000/15000 [==============================] - 3s 186us/step - loss: 0.2179 - acc: 0.9285 - val_loss: 0.3085 - val_acc: 0.8711
Epoch 4/20
15000/15000 [==============================] - 3s 188us/step - loss: 0.1750 - acc: 0.9437 - val_loss: 0.2840 - val_acc: 0.8832
Epoch 5/20
15000/15000 [==============================] - 3s 186us/step - loss: 0.1427 - acc: 0.9543 - val_loss: 0.2841 - val_acc: 0.8872
Epoch 6/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.1150 - acc: 0.9650 - val_loss: 0.3166 - val_acc: 0.8772
Epoch 7/20
15000/15000 [==============================] - 3s 189us/step - loss: 0.0980 - acc: 0.9705 - val_loss: 0.3127 - val_acc: 0.8846
Epoch 8/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.0807 - acc: 0.9763 - val_loss: 0.3860 - val_acc: 0.8648
Epoch 9/20
15000/15000 [==============================] - 3s 182us/step - loss: 0.0661 - acc: 0.9821 - val_loss: 0.3635 - val_acc: 0.8782
Epoch 10/20
15000/15000 [==============================] - 3s 184us/step - loss: 0.0560 - acc: 0.9853 - val_loss: 0.3842 - val_acc: 0.8792
Epoch 11/20
15000/15000 [==============================] - 3s 185us/step - loss: 0.0442 - acc: 0.9890 - val_loss: 0.4154 - val_acc: 0.8778
Epoch 12/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0382 - acc: 0.9917 - val_loss: 0.4519 - val_acc: 0.8689
Epoch 13/20
15000/15000 [==============================] - 3s 181us/step - loss: 0.0299 - acc: 0.9929 - val_loss: 0.4697 - val_acc: 0.8731
Epoch 14/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.0247 - acc: 0.9947 - val_loss: 0.5023 - val_acc: 0.8722
Epoch 15/20
15000/15000 [==============================] - 3s 183us/step - loss: 0.0171 - acc: 0.9983 - val_loss: 0.5557 - val_acc: 0.8664
Epoch 16/20
15000/15000 [==============================] - 3s 184us/step - loss: 0.0137 - acc: 0.9981 - val_loss: 0.5911 - val_acc: 0.8684
Epoch 17/20
15000/15000 [==============================] - 3s 184us/step - loss: 0.0123 - acc: 0.9981 - val_loss: 0.6184 - val_acc: 0.8681
Epoch 18/20
15000/15000 [==============================] - 3s 185us/step - loss: 0.0121 - acc: 0.9973 - val_loss: 0.6488 - val_acc: 0.8692
Epoch 19/20
15000/15000 [==============================] - 3s 189us/step - loss: 0.0053 - acc: 0.9997 - val_loss: 0.7614 - val_acc: 0.8530
Epoch 20/20
15000/15000 [==============================] - 3s 187us/step - loss: 0.0079 - acc: 0.9986 - val_loss: 0.7053 - val_acc: 0.8681
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
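###Markdown
A small added note: `results` holds the test-set loss followed by the metrics passed to `compile()`, so here it is `[test_loss, test_accuracy]`:
###Code
# Unpack the evaluation results for readability.
test_loss, test_acc = results
print('test loss:', test_loss)
print('test accuracy:', test_acc)
###Output
_____no_output_____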
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
#train_data[0]
#train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
#max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such a stack of dense layers:* How many layers to use.* How many "hidden units" to choose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
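###Markdown
The weight shapes described above are easy to verify: with a 10,000-dimensional input, the first `Dense(16)` layer holds a `(10000, 16)` weight matrix plus a 16-element bias, and so on. A minimal check (assuming the `model` just defined):
###Code
model.summary()
# Expected parameter counts per layer:
# Dense(16): 10000 * 16 + 16 = 160,016
# Dense(16): 16 * 16 + 16    = 272
# Dense(1):  16 * 1 + 1      = 17
###Output
_____no_output_____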
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
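###Markdown
To make the "distance" interpretation of crossentropy concrete, here is the binary crossentropy of a few hand-picked predictions computed directly with NumPy (a standalone sketch, independent of the model above):
###Code
import numpy as np

y_true = np.array([1., 0., 1.])
y_prob = np.array([0.9, 0.1, 0.4])  # predicted probabilities of the positive class

# binary crossentropy = -mean(y*log(p) + (1-y)*log(1-p))
bce = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
print(bce)  # the two confident, correct predictions contribute little; the 0.4 dominates
###Output
_____no_output_____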
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At this same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 51s 3ms/step - loss: 0.5084 - binary_accuracy: 0.7814 - val_loss: 0.3796 - val_binary_accuracy: 0.8685
Epoch 2/20
15000/15000 [==============================] - 12s 830us/step - loss: 0.3005 - binary_accuracy: 0.9045 - val_loss: 0.3004 - val_binary_accuracy: 0.8897
Epoch 3/20
15000/15000 [==============================] - 12s 832us/step - loss: 0.2180 - binary_accuracy: 0.9286 - val_loss: 0.3088 - val_binary_accuracy: 0.8710
Epoch 4/20
15000/15000 [==============================] - 13s 863us/step - loss: 0.1751 - binary_accuracy: 0.9437 - val_loss: 0.2841 - val_binary_accuracy: 0.8830
Epoch 5/20
15000/15000 [==============================] - 13s 841us/step - loss: 0.1427 - binary_accuracy: 0.9543 - val_loss: 0.2847 - val_binary_accuracy: 0.8862
Epoch 6/20
15000/15000 [==============================] - 13s 861us/step - loss: 0.1150 - binary_accuracy: 0.9652 - val_loss: 0.3144 - val_binary_accuracy: 0.8776
Epoch 7/20
15000/15000 [==============================] - 12s 829us/step - loss: 0.0980 - binary_accuracy: 0.9706 - val_loss: 0.3127 - val_binary_accuracy: 0.8844
Epoch 8/20
15000/15000 [==============================] - 13s 840us/step - loss: 0.0808 - binary_accuracy: 0.9763 - val_loss: 0.3858 - val_binary_accuracy: 0.8650
Epoch 9/20
15000/15000 [==============================] - 13s 874us/step - loss: 0.0661 - binary_accuracy: 0.9821 - val_loss: 0.3632 - val_binary_accuracy: 0.8782
Epoch 10/20
15000/15000 [==============================] - 14s 947us/step - loss: 0.0559 - binary_accuracy: 0.9851 - val_loss: 0.3842 - val_binary_accuracy: 0.8790
Epoch 11/20
15000/15000 [==============================] - 13s 837us/step - loss: 0.0447 - binary_accuracy: 0.9889 - val_loss: 0.4158 - val_binary_accuracy: 0.8768
Epoch 12/20
15000/15000 [==============================] - 13s 852us/step - loss: 0.0385 - binary_accuracy: 0.9914 - val_loss: 0.4506 - val_binary_accuracy: 0.8693
Epoch 13/20
15000/15000 [==============================] - 14s 942us/step - loss: 0.0300 - binary_accuracy: 0.9929 - val_loss: 0.4692 - val_binary_accuracy: 0.8731
Epoch 14/20
15000/15000 [==============================] - 14s 945us/step - loss: 0.0247 - binary_accuracy: 0.9949 - val_loss: 0.5014 - val_binary_accuracy: 0.8718
Epoch 15/20
15000/15000 [==============================] - 13s 877us/step - loss: 0.0175 - binary_accuracy: 0.9981 - val_loss: 0.5520 - val_binary_accuracy: 0.8670
Epoch 16/20
15000/15000 [==============================] - 14s 942us/step - loss: 0.0144 - binary_accuracy: 0.9981 - val_loss: 0.5985 - val_binary_accuracy: 0.8637
Epoch 17/20
15000/15000 [==============================] - 13s 836us/step - loss: 0.0128 - binary_accuracy: 0.9981 - val_loss: 0.6097 - val_binary_accuracy: 0.8694
Epoch 18/20
15000/15000 [==============================] - 13s 885us/step - loss: 0.0112 - binary_accuracy: 0.9979 - val_loss: 0.6384 - val_binary_accuracy: 0.8688
Epoch 19/20
15000/15000 [==============================] - 14s 907us/step - loss: 0.0068 - binary_accuracy: 0.9994 - val_loss: 0.7411 - val_binary_accuracy: 0.8555
Epoch 20/20
15000/15000 [==============================] - 13s 854us/step - loss: 0.0048 - binary_accuracy: 0.9998 - val_loss: 0.7031 - val_binary_accuracy: 0.8655
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
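###Markdown
Instead of hard-coding the number of epochs read off the plots, you can also let Keras stop training automatically once the validation loss stops improving. A minimal sketch using the `EarlyStopping` callback (assuming the splits and imports defined above; `restore_best_weights` requires a reasonably recent Keras version):
###Code
from keras.callbacks import EarlyStopping

stopper = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)

model_es = models.Sequential()
model_es.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model_es.add(layers.Dense(16, activation='relu'))
model_es.add(layers.Dense(1, activation='sigmoid'))
model_es.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

model_es.fit(partial_x_train, partial_y_train,
             epochs=20, batch_size=512,
             validation_data=(x_val, y_val),
             callbacks=[stopper])
model_es.evaluate(x_test, y_test)
###Output
_____no_output_____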
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____
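###Markdown
The values returned by `predict` are probabilities; to turn them into hard class labels you can simply threshold them at 0.5 (a sketch, assuming the `model` and `x_test` above):
###Code
probs = model.predict(x_test)
pred_labels = (probs > 0.5).astype('int32').ravel()
print(pred_labels[:10])
print('fraction predicted positive:', pred_labels.mean())
###Output
_____no_output_____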
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews. URL: https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviewsWhy do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
##len(train_data)
##25000
##print(len(train_data[1]))
##189
##print(len(train_data[0]))
##218
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
## XP. to fully understand ...
# print(word_index) ## {'fawn': 34701, 'tsukino': 52006, 'nunnery': 52007, 'sonja': 16816, ...
# print('word_index.get("this") => ', word_index.get('this')) ## 'this' is mapped to 11
# print(reverse_word_index ) ## {34701: 'fawn', 52006: 'tsukino', 52007: 'nunnery', 16816: 'sonja', ...
# print('reverse_word_index.get(16) = ', reverse_word_index.get(16)) # with
# reverse_word_index.get(11) ## 'this'
# print(' 11 => ', reverse_word_index.get(11)) ##14 - 3 = 11
# print (train_data[0]) # integers [1, 14, 22, 16, 43, 530, 973, 1622, ...
for i in train_data[0]:
print(i, reverse_word_index.get(i-3, '*')) ##replace not found with *
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
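###Markdown
As an aside, the first option mentioned above (padding the sequences for a later `Embedding` layer) would start roughly like this; it is only a sketch using Keras' `pad_sequences` utility, the length of 256 is an arbitrary choice, and we continue with the one-hot vectors in this notebook:
###Code
from keras.preprocessing.sequence import pad_sequences

padded_train = pad_sequences(train_data, maxlen=256)  # pad/truncate every review to 256 word indices
print(padded_train.shape)  # (25000, 256) integer tensor
###Output
_____no_output_____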
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
##model.add(layers.Dense(32, activation='relu'))##XP
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
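###Markdown
The chain of operations quoted above, `output = relu(dot(W, input) + b)`, can be reproduced with plain NumPy for a single 16-unit layer (a standalone sketch with a random input vector, independent of the Keras model):
###Code
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(10000)              # one "review" vector
W = rng.randn(10000, 16) * 0.01  # weight matrix of shape (input_dimension, 16)
b = np.zeros(16)                 # bias vector

output = np.maximum(0., np.dot(x, W) + b)  # relu(dot(W, input) + b)
print(output.shape)  # (16,) -- a 16-dimensional representation of the input
###Output
_____no_output_____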
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
## First 10K. The basic slice syntax is i:j:k where i is the starting index, j is the stopping index, and k is the step
## Remaing. Starting from 10K.
# x_val = x_train[:10000]
# partial_x_train = x_train[10000:]
# y_val = y_train[:10000]
# partial_y_train = y_train[10000:]
x_val = x_train[15000:]
partial_x_train = x_train[:15000]
y_val = y_train[15000:]
partial_y_train = y_train[:15000]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At this same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
# model.reset_states() ##XP
history = model.fit(partial_x_train,
partial_y_train,
epochs=20, ## 20
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 138us/step - loss: 0.5112 - binary_accuracy: 0.7912 - val_loss: 0.3723 - val_binary_accuracy: 0.8762
Epoch 2/20
15000/15000 [==============================] - 1s 70us/step - loss: 0.2963 - binary_accuracy: 0.9027 - val_loss: 0.3039 - val_binary_accuracy: 0.8854
Epoch 3/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.2152 - binary_accuracy: 0.9286 - val_loss: 0.2925 - val_binary_accuracy: 0.8824
Epoch 4/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.1727 - binary_accuracy: 0.9425 - val_loss: 0.2754 - val_binary_accuracy: 0.8918
Epoch 5/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.1409 - binary_accuracy: 0.9547 - val_loss: 0.2895 - val_binary_accuracy: 0.8869
Epoch 6/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.1163 - binary_accuracy: 0.9638 - val_loss: 0.3092 - val_binary_accuracy: 0.8840
Epoch 7/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0976 - binary_accuracy: 0.9703 - val_loss: 0.3205 - val_binary_accuracy: 0.8834
Epoch 8/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0826 - binary_accuracy: 0.9771 - val_loss: 0.4035 - val_binary_accuracy: 0.8646
Epoch 9/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0670 - binary_accuracy: 0.9824 - val_loss: 0.3629 - val_binary_accuracy: 0.8809
Epoch 10/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0586 - binary_accuracy: 0.9839 - val_loss: 0.3924 - val_binary_accuracy: 0.8801
Epoch 11/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0483 - binary_accuracy: 0.9869 - val_loss: 0.4170 - val_binary_accuracy: 0.8764
Epoch 12/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0366 - binary_accuracy: 0.9916 - val_loss: 0.6000 - val_binary_accuracy: 0.8436
Epoch 13/20
15000/15000 [==============================] - 1s 71us/step - loss: 0.0315 - binary_accuracy: 0.9927 - val_loss: 0.4874 - val_binary_accuracy: 0.8722
Epoch 14/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.0290 - binary_accuracy: 0.9930 - val_loss: 0.5032 - val_binary_accuracy: 0.8714
Epoch 15/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0222 - binary_accuracy: 0.9952 - val_loss: 0.5385 - val_binary_accuracy: 0.8701
Epoch 16/20
15000/15000 [==============================] - 1s 67us/step - loss: 0.0170 - binary_accuracy: 0.9973 - val_loss: 0.5923 - val_binary_accuracy: 0.8667
Epoch 17/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0145 - binary_accuracy: 0.9975 - val_loss: 0.6041 - val_binary_accuracy: 0.8678
Epoch 18/20
15000/15000 [==============================] - 1s 69us/step - loss: 0.0105 - binary_accuracy: 0.9985 - val_loss: 0.6557 - val_binary_accuracy: 0.8633
Epoch 19/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0101 - binary_accuracy: 0.9987 - val_loss: 0.6875 - val_binary_accuracy: 0.8633
Epoch 20/20
15000/15000 [==============================] - 1s 68us/step - loss: 0.0059 - binary_accuracy: 0.9997 - val_loss: 0.7171 - val_binary_accuracy: 0.8613
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
# acc = history.history['acc']
val_acc = history.history['val_binary_accuracy']
# val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
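###Markdown
Rather than reading the turning point off the plots, the best epoch can also be computed directly from the `history_dict` gathered above (a small sketch):
###Code
import numpy as np

best_epoch = int(np.argmin(history_dict['val_loss'])) + 1  # epochs are 1-based
print('lowest validation loss at epoch', best_epoch)
print('validation accuracy there:', history_dict['val_binary_accuracy'][best_epoch - 1])
###Output
_____no_output_____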
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
# model.predict(x_test)
##XP
predictions = model.predict(x_test)
print(predictions)
conf_predictions = predictions[predictions<0.1]
print(conf_predictions.size)
print(conf_predictions)
conf_predictions = predictions[predictions>0.9]
print(conf_predictions.size)
print(conf_predictions)
###Output
[[0.13888198]
[0.9996985 ]
[0.3121842 ]
...
[0.07258095]
[0.04286512]
[0.46984038]]
10005
[0.00648679 0.0040284 0.00014904 ... 0.0005933 0.07258095 0.04286512]
6881
[0.9996985 0.92581207 0.9980478 ... 0.99466467 0.96655464 0.9986902 ]
###Markdown
As you can see, the network is very confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4). Further experiments* We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.* Try to use layers with more hidden units or less hidden units: 32 units, 64 units...* Try to use the `mse` loss function instead of `binary_crossentropy`.* Try to use the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.These experiments will help convince you that the architecture choices we have made are all fairly reasonable, although they can still be improved! ConclusionsHere's what you should take away from this example:* There's usually quite a bit of preprocessing you need to do on your raw data in order to be able to feed it -- as tensors -- into a neural network. In the case of sequences of words, they can be encoded as binary vectors -- but there are other encoding options too.* Stacks of `Dense` layers with `relu` activations can solve a wide range of problems (including sentiment classification), and you will likely use them frequently.* In a binary classification problem (two output classes), your network should end with a `Dense` layer with 1 unit and a `sigmoid` activation, i.e. the output of your network should be a scalar between 0 and 1, encoding a probability.* With such a scalar sigmoid output, on a binary classification problem, the loss function you should use is `binary_crossentropy`.* The `rmsprop` optimizer is generally a good enough choice of optimizer, whatever your problem. That's one less thing for you to worry about.* As they get better on their training data, neural networks eventually start _overfitting_ and end up obtaining increasingly worse results on data never-seen-before. Make sure to always monitor performance on data that is outside of the training set. XP Practice
###Code
##XP practice
import numpy as np
data = [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]  # ragged lengths: keep as plain Python lists (np.array would need dtype=object)
print(len(data))
MAXV = max([max(x) for x in data])
print ('max = ', MAXV)
vdata = np.zeros((len(data), MAXV+1))
print(vdata)
for index, x in enumerate(data): ## You will see index start at 3 is for real data
vdata[index, x] = 1.0
print(vdata)
td = [train_data[0]] ## note to be 2d
print('td = ', td)
vtd = np.zeros((1, 10000))
print('vtd = ', vtd)
for index, x in enumerate(td):
vtd[index, x] = 1.
print('vtd = ', vtd)
print('len(train_data) = ', len(train_data))
## XP P2
## Let's try a randomly generated dataset ... START
import numpy as np
SAMPLEX = 50000
SAMPLEY = 10000
bigdata_x = np.random.random((SAMPLEX, SAMPLEY))
bigdata_y = np.random.random((SAMPLEX,1))
bigdata_y = bigdata_y>0.5
bigdata_y = bigdata_y.astype(int)
bigdata_y
##slicing
train_x = bigdata_x[:int(SAMPLEX/2)]
test_x = bigdata_x[int(SAMPLEX/2):]
train_y = bigdata_y[:int(SAMPLEX/2)]
test_y = bigdata_y[int(SAMPLEX/2):]
train_y = np.asarray(train_y).astype('float32')
test_y = np.asarray(test_y).astype('float32')
from keras import models
from keras import layers
model_x = models.Sequential()
model_x.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model_x.add(layers.Dense(16, activation='tanh'))
model_x.add(layers.Dense(1, activation='sigmoid'))
from keras import losses
from keras import metrics
from keras import optimizers  # make this cell self-contained (RMSprop is used below)
model_x.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
x_val = train_x[15000:] ##validation
partial_train_x = train_x[:15000]
y_val = train_y[15000:]
partial_train_y = train_y[:15000]
history = model_x.fit(partial_train_x,
partial_train_y,
epochs=10,
batch_size=512,
validation_data=(x_val, y_val))
results_x = model_x.evaluate(test_x, test_y)
results_x
###Output
_____no_output_____
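###Markdown
Returning to the IMDB data, one of the further experiments suggested above (a single, wider hidden layer trained with the `mse` loss and a `tanh` activation) might look like this; it is only a sketch reusing the IMDB tensors and Keras imports from earlier in the notebook, and exact scores will vary:
###Code
model_exp = models.Sequential()
model_exp.add(layers.Dense(32, activation='tanh', input_shape=(10000,)))
model_exp.add(layers.Dense(1, activation='sigmoid'))
model_exp.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy'])

model_exp.fit(x_train, y_train, epochs=4, batch_size=512)
print(model_exp.evaluate(x_test, y_test))  # [test loss, test accuracy]
###Output
_____no_output_____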
###Markdown
Classifying movie reviews: a binary classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB datasetWe'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews.Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely _memorizing_ a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter.Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine):
###Code
from tensorflow.keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size.The variables `train_data` and `test_data` are lists of reviews, each review being a list of word indices (encoding a sequence of words). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive":
###Code
train_data[0]
train_labels[0]
###Output
_____no_output_____
###Markdown
Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000:
###Code
max([max(sequence) for sequence in train_data])
###Output
_____no_output_____
###Markdown
For kicks, here's how you can quickly decode one of these reviews back to English words:
###Code
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
###Output
_____no_output_____
###Markdown
Preparing the dataWe cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that:* We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape `(samples, word_indices)`, then use as first layer in our network a layer capable of handling such integer tensors (the `Embedding` layer, which we will cover in detail later in the book).* We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a `Dense` layer, capable of handling floating point vector data.We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Here's what our samples look like now:
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
We should also vectorize our labels, which is straightforward:
###Code
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
###Output
_____no_output_____
###Markdown
Now our data is ready to be fed into a neural network. Building our networkOur input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (`Dense`) layers with `relu` activations: `Dense(16, activation='relu')`The argument being passed to each `Dense` layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such `Dense` layer with a `relu` activation implements the following chain of tensor operations:`output = relu(dot(W, input) + b)`Having 16 hidden units means that the weight matrix `W` will have shape `(input_dimension, 16)`, i.e. the dot product with `W` will project the input data onto a 16-dimensional representation space (and then we would add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).There are two key architecture decisions to be made about such stack of dense layers:* How many layers to use.* How many "hidden units" to chose for each layer.In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use `relu` as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A `relu` (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the `[0, 1]` interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like:![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png) And here's the Keras implementation, very similar to the MNIST example you saw previously:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the `binary_crossentropy` loss. It isn't the only viable choice: you could use, for instance, `mean_squared_error`. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. Note that we will also monitor accuracy during training.
###Code
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We are passing our optimizer, loss function and metrics as strings, which is possible because `rmsprop`, `binary_crossentropy` and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the `optimizer` argument:
###Code
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The latter can be done by passing function objects as the `loss` or `metrics` arguments:
###Code
from tensorflow.keras import losses
from tensorflow.keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
###Output
_____no_output_____
###Markdown
Validating our approachIn order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data:
###Code
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
###Output
_____no_output_____
###Markdown
We will now train our model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors), in mini-batches of 512 samples. At this same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the `validation_data` argument:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 3s 181us/sample - loss: 0.5105 - binary_accuracy: 0.7965 - val_loss: 0.3862 - val_binary_accuracy: 0.8681
Epoch 2/20
15000/15000 [==============================] - 1s 73us/sample - loss: 0.3057 - binary_accuracy: 0.9012 - val_loss: 0.3272 - val_binary_accuracy: 0.8691
Epoch 3/20
15000/15000 [==============================] - 1s 66us/sample - loss: 0.2255 - binary_accuracy: 0.9266 - val_loss: 0.2813 - val_binary_accuracy: 0.8886
Epoch 4/20
15000/15000 [==============================] - 1s 71us/sample - loss: 0.1788 - binary_accuracy: 0.9425 - val_loss: 0.2782 - val_binary_accuracy: 0.8892
Epoch 5/20
15000/15000 [==============================] - 1s 67us/sample - loss: 0.1450 - binary_accuracy: 0.9538 - val_loss: 0.2804 - val_binary_accuracy: 0.8856
Epoch 6/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.1186 - binary_accuracy: 0.9631 - val_loss: 0.2893 - val_binary_accuracy: 0.8874
Epoch 7/20
15000/15000 [==============================] - 1s 67us/sample - loss: 0.1011 - binary_accuracy: 0.9686 - val_loss: 0.3073 - val_binary_accuracy: 0.8843
Epoch 8/20
15000/15000 [==============================] - 1s 75us/sample - loss: 0.0787 - binary_accuracy: 0.9778 - val_loss: 0.3259 - val_binary_accuracy: 0.8815
Epoch 9/20
15000/15000 [==============================] - 1s 67us/sample - loss: 0.0662 - binary_accuracy: 0.9824 - val_loss: 0.3483 - val_binary_accuracy: 0.8807
Epoch 10/20
15000/15000 [==============================] - 1s 73us/sample - loss: 0.0518 - binary_accuracy: 0.9877 - val_loss: 0.3760 - val_binary_accuracy: 0.8784
Epoch 11/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.0424 - binary_accuracy: 0.9907 - val_loss: 0.4075 - val_binary_accuracy: 0.8737
Epoch 12/20
15000/15000 [==============================] - 1s 69us/sample - loss: 0.0341 - binary_accuracy: 0.9931 - val_loss: 0.4356 - val_binary_accuracy: 0.8725
Epoch 13/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.0285 - binary_accuracy: 0.9938 - val_loss: 0.4661 - val_binary_accuracy: 0.8724
Epoch 14/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.0203 - binary_accuracy: 0.9967 - val_loss: 0.4989 - val_binary_accuracy: 0.8716
Epoch 15/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.0163 - binary_accuracy: 0.9975 - val_loss: 0.5282 - val_binary_accuracy: 0.8694
Epoch 16/20
15000/15000 [==============================] - 1s 67us/sample - loss: 0.0136 - binary_accuracy: 0.9984 - val_loss: 0.5588 - val_binary_accuracy: 0.8680
Epoch 17/20
15000/15000 [==============================] - 1s 73us/sample - loss: 0.0079 - binary_accuracy: 0.9997 - val_loss: 0.7110 - val_binary_accuracy: 0.8501
Epoch 18/20
15000/15000 [==============================] - 1s 68us/sample - loss: 0.0063 - binary_accuracy: 0.9997 - val_loss: 0.6414 - val_binary_accuracy: 0.8690
Epoch 19/20
15000/15000 [==============================] - 1s 70us/sample - loss: 0.0068 - binary_accuracy: 0.9991 - val_loss: 0.6736 - val_binary_accuracy: 0.8657
Epoch 20/20
15000/15000 [==============================] - 1s 75us/sample - loss: 0.0034 - binary_accuracy: 0.9999 - val_loss: 0.7180 - val_binary_accuracy: 0.8659
###Markdown
On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.Note that the call to `model.fit()` returns a `History` object. This object has a member `history`, which is a dictionary containing data about everything that happened during training. Let's take a look at it:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network.As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set.In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter.Let's train a new network from scratch for four epochs, then evaluate it on our test data:
###Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
###Output
_____no_output_____
###Markdown
Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new dataAfter having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the `predict` method:
###Code
model.predict(x_test)
###Output
_____no_output_____ |
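###Markdown
To reuse the trained network later without retraining it, the whole model (architecture, weights and optimizer state) can be saved to disk and reloaded; here is a minimal sketch using tf.keras' save/load utilities (the filename is an arbitrary choice):
###Code
from tensorflow.keras.models import load_model

model.save('imdb_dense_model.h5')            # writes architecture + weights + optimizer state
restored = load_model('imdb_dense_model.h5')
print(restored.predict(x_test[:3]))          # same outputs as the original model
###Output
_____no_output_____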
Python_tax.ipynb | ###Markdown
###Code
x = 20
if x > 10:
    print("x > 10")
elif x < 10:
    print("x < 10")
else:
    print("x == 10")
x
input("enter the first number")
x = int(input("enter your mark"))
if x > 10:
    print("pass")
elif x < 10:
    print("fail")
else:
    print("just pass")
x = int(input("enter your salary "))
if x <= 250000:
    print("you are not taxable")
else:
    print(x * 10 / 100)  # 10% tax on the salary
    print("you are taxable")
###Output
_____no_output_____ |
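###Markdown
The same salary check can be wrapped in a small reusable function, which also makes the behaviour at exactly 250000 explicit (a sketch; the flat 10% rate simply mirrors the cell above):
###Code
def compute_tax(salary, threshold=250000, rate=0.10):
    """Return the tax owed; salaries at or below the threshold are not taxable."""
    if salary <= threshold:
        return 0.0
    return salary * rate

print(compute_tax(200000))  # 0.0 -> not taxable
print(compute_tax(300000))  # 30000.0
###Output
_____no_output_____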
module2/Rob_Hamilton_assignment_regression_classification_2.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 2 AssignmentYou'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.- [X] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.- [X] Engineer at least two new features. (See below for explanation & ideas.)- [X] Fit a linear regression model with at least two features.- [X] Get the model's coefficients and intercept.- [X] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.- [X] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!- [X] As always, commit your notebook to your fork of the GitHub repo. [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)> "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — Pedro Domingos, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)> "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — Andrew Ng, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf) > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature Ideas- Does the apartment have a description?- How long is the description?- How many total perks does each apartment have?- Are cats _or_ dogs allowed?- Are cats _and_ dogs allowed?- Total number of rooms (beds + baths)- Ratio of beds to baths- What's the neighborhood, based on address or latitude & longitude? Stretch Goals- [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression- [ ] If you want more introduction, watch [Brandon Foltz, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)(20 minutes, over 1 million views)- [ ] Do the [Plotly Dash](https://dash.plot.ly/) Tutorial, Parts 1 & 2.- [ ] Add your own stretch goal(s) !
###Code
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module1')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import numpy as np
import pandas as pd
# Read New York City apartment rental listing data
df = pd.read_csv('../data/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 2)) &
(df['price'] <= np.percentile(df['price'], 98)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
df.head()
# creating a feature for the length of the description text
df['descirption_length'] = df['description'].str.len()
# blank descriptions come through as NaN, so fill them with 0
df['descirption_length'] = df['descirption_length'].fillna(0).astype(int)
df[(df['dogs_allowed']==1)&(df['cats_allowed']==1)]
#creating a variable adding beds and baths
df['beds_and_baths'] = df['bathrooms'] + df['bedrooms']
# since dogs allowed and cats allowed are boolean
# adding them together and getting the value of 2 means both are allowed
df['dogs_and_cats_allowed'] = df['dogs_allowed'] + df['cats_allowed']
#if this series has a value of 1, only one is allowed and fails
#the AND condition
df['dogs_and_cats_allowed'] = df['dogs_and_cats_allowed'].replace(1,0)
#as a boolean, setting the true condition to 1
df['dogs_and_cats_allowed'] = df['dogs_and_cats_allowed'].replace(2,1)
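# Hedged sketch (an alternative, not the original approach): the same AND flag can be
# built in one step with a boolean mask, and a has_description flag is another of the
# feature ideas from the assignment. The column names used below (dogs_allowed,
# cats_allowed, description_length) come from the code above; dogs_and_cats_check and
# has_description are illustrative names introduced here.
dogs_and_cats_check = ((df['dogs_allowed'] == 1) & (df['cats_allowed'] == 1)).astype(int)
assert (dogs_and_cats_check == df['dogs_and_cats_allowed']).all()  # same result as the replace() chain
df['has_description'] = (df['description_length'] > 0).astype(int)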
#converting the listing's created date from a string to a datetime variable
df['created'] = pd.to_datetime(df['created'])
#setting the test dataframe to the month of June
dftest = df[(df['created'].dt.month == 6)]
#setting the train dataframe to the months of April and May
dftrain = df[(df['created'].dt.month == 5) | (df['created'].dt.month ==4)]
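# Hedged sanity-check sketch (not required by the assignment): confirm the time-based
# split covers the intended months and report the resulting shapes.
print('Train months:', sorted(dftrain['created'].dt.month.unique()))  # expected: [4, 5]
print('Test months:', sorted(dftest['created'].dt.month.unique()))    # expected: [6]
print('Train shape:', dftrain.shape, 'Test shape:', dftest.shape)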
#importing libraries
import itertools
import numpy as np
import plotly.express as px
import plotly.graph_objs as go
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
#creating a multivariable linear regression based on
#the feature variables I created
features = ['description_length', 'dogs_and_cats_allowed', 'beds_and_baths']
target = 'price'
X = dftrain[features]
y = dftrain[target]
model = LinearRegression()
model.fit(dftrain[features], dftrain[target])
# coefficients follow the order of the features list
description_coef = model.coef_[0]
dogs_and_cats_coef = model.coef_[1]
beds_and_baths_coef = model.coef_[2]
#printing the coefficients of the model
print(f"The intercept of this model is: {model.intercept_}")
print(f"If dogs AND cats are allowed in an apartment, it increases the predicted rent by: ${dogs_and_cats_coef}")
print(f"For every additional bed or bath in an apartment, the predicted rent increases on average by: ${beds_and_baths_coef}")
print(f"For every additional character in the description, the predicted rent increases on average by: ${description_coef}")
#assigning the prediction variable to the model's output for the training dataframe
y_pred = model.predict(X)
dftrain['predicted_price'] = y_pred
y = dftrain['price']
dftrain['error'] = y_pred - y
dftrain.head()
mse = mean_squared_error(y, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y, y_pred)
r2 = r2_score(y, y_pred)
print("TRAIN METRICS")
print('Mean Squared Error:', mse)
print('Root Mean Squared Error:', rmse)
print('Mean Absolute Error:', mae)
print('R^2:', r2)
XTest = dftest[features]
test_y_pred = model.predict(XTest)
y_test = dftest['price']
test_error = test_y_pred - y_test
mse = mean_squared_error(y_test, test_y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, test_y_pred)
r2 = r2_score(y_test, test_y_pred)
print("TEST METRICS")
print('Mean Squared Error:', mse)
print('Root Mean Squared Error:', rmse)
print('Mean Absolute Error:', mae)
print('R^2:', r2)
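# Hedged sketch (not part of the original solution): a naive baseline for context when
# sharing the test MAE. Predicting the mean training price for every listing gives a
# floor that any useful feature set should beat.
baseline_pred = np.full(len(y_test), dftrain['price'].mean())
print('Baseline (mean prediction) test MAE:', mean_absolute_error(y_test, baseline_pred))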
###Output
_____no_output_____ |
cf.Project/cf_AnalyticProcess.ipynb | ###Markdown
Step 1: Data loading + EDA Data Load You can fetch the file using wget. You can also upload a file you already have through the Files tab and use it. Note, however, that files loaded this way disappear when the runtime is reset.
###Code
import pandas as pd
![ ! -f iris0.csv ]&&wget http://j.finfra.com/_file/iris0.csv
iris=pd.read_csv("iris0.csv")
iris
!ls
!cat iris0.csv
###Output
iris0.csv model.joblib sample_data
Sepal.Length,Sepal.Width,Petal.Length,Petal.Width,Species
5.1,3.5,1.4,0.2,setosa
4.9,3,1.4,0.2,setosa
4.7,3.2,1.3,0.2,setosa
4.6,3.1,1.5,0.2,setosa
5,3.6,1.4,0.2,setosa
5.4,3.9,1.7,0.4,setosa
4.6,3.4,1.4,0.3,setosa
5,3.4,1.5,0.2,setosa
4.4,2.9,1.4,0.2,setosa
4.9,3.1,1.5,0.1,setosa
5.4,3.7,1.5,0.2,setosa
4.8,3.4,1.6,0.2,setosa
4.8,3,1.4,0.1,setosa
4.3,3,1.1,0.1,setosa
5.8,4,1.2,0.2,setosa
5.7,4.4,1.5,0.4,setosa
5.4,3.9,1.3,0.4,setosa
5.1,3.5,1.4,0.3,setosa
5.7,3.8,1.7,0.3,setosa
5.1,3.8,1.5,0.3,setosa
5.4,3.4,1.7,0.2,setosa
5.1,3.7,1.5,0.4,setosa
4.6,3.6,1,0.2,setosa
5.1,3.3,1.7,0.5,setosa
4.8,3.4,1.9,0.2,setosa
5,3,1.6,0.2,setosa
5,3.4,1.6,0.4,setosa
5.2,3.5,1.5,0.2,setosa
5.2,3.4,1.4,0.2,setosa
4.7,3.2,1.6,0.2,setosa
4.8,3.1,1.6,0.2,setosa
5.4,3.4,1.5,0.4,setosa
5.2,4.1,1.5,0.1,setosa
5.5,4.2,1.4,0.2,setosa
4.9,3.1,1.5,0.2,setosa
5,3.2,1.2,0.2,setosa
5.5,3.5,1.3,0.2,setosa
4.9,3.6,1.4,0.1,setosa
4.4,3,1.3,0.2,setosa
5.1,3.4,1.5,0.2,setosa
5,3.5,1.3,0.3,setosa
4.5,2.3,1.3,0.3,setosa
4.4,3.2,1.3,0.2,setosa
5,3.5,1.6,0.6,setosa
5.1,3.8,1.9,0.4,setosa
4.8,3,1.4,0.3,setosa
5.1,3.8,1.6,0.2,setosa
4.6,3.2,1.4,0.2,setosa
5.3,3.7,1.5,0.2,setosa
5,3.3,1.4,0.2,setosa
7,3.2,4.7,1.4,versicolor
6.4,3.2,4.5,1.5,versicolor
6.9,3.1,4.9,1.5,versicolor
5.5,2.3,4,1.3,versicolor
6.5,2.8,4.6,1.5,versicolor
5.7,2.8,4.5,1.3,versicolor
6.3,3.3,4.7,1.6,versicolor
4.9,2.4,3.3,1.0,versicolor
6.6,2.9,4.6,1.3,versicolor
5.2,2.7,3.9,1.4,versicolor
5,2,3.5,1.0,versicolor
5.9,3,4.2,1.5,versicolor
6,2.2,4,1.0,versicolor
6.1,2.9,4.7,1.4,versicolor
5.6,2.9,3.6,1.3,versicolor
6.7,3.1,4.4,1.4,versicolor
5.6,3,4.5,1.5,versicolor
5.8,2.7,4.1,1.0,versicolor
6.2,2.2,4.5,1.5,versicolor
5.6,2.5,3.9,1.1,versicolor
5.9,3.2,4.8,1.8,versicolor
6.1,2.8,4,1.3,versicolor
6.3,2.5,4.9,1.5,versicolor
6.1,2.8,4.7,1.2,versicolor
6.4,2.9,4.3,1.3,versicolor
6.6,3,4.4,1.4,versicolor
6.8,2.8,4.8,1.4,versicolor
6.7,3,5,1.7,versicolor
6,2.9,4.5,1.5,versicolor
5.7,2.6,3.5,1.0,versicolor
5.5,2.4,3.8,1.1,versicolor
5.5,2.4,3.7,1.0,versicolor
5.8,2.7,3.9,1.2,versicolor
6,2.7,5.1,1.6,versicolor
5.4,3,4.5,1.5,versicolor
6,3.4,4.5,1.6,versicolor
6.7,3.1,4.7,1.5,versicolor
6.3,2.3,4.4,1.3,versicolor
5.6,3,4.1,1.3,versicolor
5.5,2.5,4,1.3,versicolor
5.5,2.6,4.4,1.2,versicolor
6.1,3,4.6,1.4,versicolor
5.8,2.6,4,1.2,versicolor
5,2.3,3.3,1.0,versicolor
5.6,2.7,4.2,1.3,versicolor
5.7,3,4.2,1.2,versicolor
5.7,2.9,4.2,1.3,versicolor
6.2,2.9,4.3,1.3,versicolor
5.1,2.5,3,1.1,versicolor
5.7,2.8,4.1,1.3,versicolor
6.3,3.3,6,2.5,virginica
5.8,2.7,5.1,1.9,virginica
7.1,3,5.9,2.1,virginica
6.3,2.9,5.6,1.8,virginica
6.5,3,5.8,2.2,virginica
7.6,3,6.6,2.1,virginica
4.9,2.5,4.5,1.7,virginica
7.3,2.9,6.3,1.8,virginica
6.7,2.5,5.8,1.8,virginica
7.2,3.6,6.1,2.5,virginica
6.5,3.2,5.1,2,virginica
6.4,2.7,5.3,1.9,virginica
6.8,3,5.5,2.1,virginica
5.7,2.5,5,2,virginica
5.8,2.8,5.1,2.4,virginica
6.4,3.2,5.3,2.3,virginica
6.5,3,5.5,1.8,virginica
7.7,3.8,6.7,2.2,virginica
7.7,2.6,6.9,2.3,virginica
6,2.2,5,1.5,virginica
6.9,3.2,5.7,2.3,virginica
5.6,2.8,4.9,2,virginica
7.7,2.8,6.7,2,virginica
6.3,2.7,4.9,1.8,virginica
6.7,3.3,5.7,2.1,virginica
7.2,3.2,6,1.8,virginica
6.2,2.8,4.8,1.8,virginica
6.1,3,4.9,1.8,virginica
6.4,2.8,5.6,2.1,virginica
7.2,3,5.8,1.6,virginica
7.4,2.8,6.1,1.9,virginica
7.9,3.8,6.4,2,virginica
6.4,2.8,5.6,2.2,virginica
6.3,2.8,5.1,1.5,virginica
6.1,2.6,5.6,1.4,virginica
7.7,3,6.1,2.3,virginica
6.3,3.4,5.6,2.4,virginica
6.4,3.1,5.5,1.8,virginica
6,3,4.8,1.8,virginica
6.9,3.1,5.4,2.1,virginica
6.7,3.1,5.6,2.4,virginica
6.9,3.1,5.1,2.3,virginica
5.8,2.7,5.1,1.9,virginica
6.8,3.2,5.9,2.3,virginica
6.7,3.3,5.7,2.5,virginica
6.7,3,5.2,2.3,virginica
6.3,2.5,5,1.9,virginica
6.5,3,5.2,2,virginica
6.2,3.4,5.4,2.3,virginica
5.9,3,5.1,1.8,virginica
###Markdown
EDA Exploratory data analysis. Look at the loaded data in graphical form and think about which columns to use, whether to combine them, and so on.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("whitegrid");
sns.pairplot(iris,hue="Species");
plt.show()
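# Hedged sketch (an extra EDA step, not in the original notebook): per-species summary
# statistics are another quick way to see which columns separate the classes well.
print(iris.groupby('Species').mean())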
###Output
_____no_output_____
###Markdown
Step 2: Splitting into training data / evaluation data If the data used for training is also used for evaluation, the model may be accurate only on that data and fail to give answers for anything else. That is why the data must be split into a training set and an evaluation set.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.iloc[:,0:4], iris['Species'])
display(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
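# Hedged sketch (optional, not in the original): passing random_state makes the split
# reproducible, and stratify keeps the class ratios the same in both sets. The
# variable names below are only for this sketch.
Xtr_s, Xte_s, ytr_s, yte_s = train_test_split(
    iris.iloc[:, 0:4], iris['Species'], random_state=0, stratify=iris['Species'])
print(ytr_s.value_counts())
print(yte_s.value_counts())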
###Output
_____no_output_____
###Markdown
For readers less familiar with Python, the part above can be written out more explicitly as follows.
###Code
x=train_test_split(iris.iloc[:,0:4], iris['Species'])
len(x)
X_train, X_test, y_train, y_test = x[0],x[1],x[2],x[3]
###Output
_____no_output_____
###Markdown
Note. - Assigning in the form x, y, z = 1, 2, 3 is the same as assigning x = 1, y = 2, z = 3.
###Code
x,y,z=1,2,3
y
###Output
_____no_output_____
###Markdown
Step 3: Training Check how the training data is structured so that, when building the model later, you know what should go in as input and what should come out as output.
###Code
X_train.head()
y_train.head()
###Output
_____no_output_____
###Markdown
Using describe, you can also check basic statistics for each column, including the maximum and minimum values.
###Code
X_train.describe()
###Output
_____no_output_____
###Markdown
Training is carried out using the K-nearest neighbors algorithm.
###Code
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1) # the default is 5
model.fit(X_train, y_train)
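# Hedged sketch (not in the original notebook): n_neighbors=1 was chosen here, but a
# quick loop over a few values of k shows how the test score changes with it. The set
# of k values below is an arbitrary illustration.
for k in [1, 3, 5, 7]:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    print('k =', k, 'test score =', knn.score(X_test, y_test))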
###Output
_____no_output_____
###Markdown
Step 4: Evaluation Run the evaluation data through the trained model to see how well it was trained. First, check the class of the evaluation data's y values, i.e. the part holding the answers.
###Code
y_test.__class__
###Output
_____no_output_____
###Markdown
For a pandas Series, comparison can be done as shown below.
###Code
import pandas as pd
pd.Series([1,2,3]) == pd.Series([2,2,3])
# For objects that are not a Series, a different approach is needed.
[1,2,3]==[1,2,3]
###Output
_____no_output_____
###Markdown
Evaluation can be done through the score function. Since this dataset separates well to begin with, the score should come out high (1 is the maximum, 0 the minimum).
###Code
score = model.score(X_test, y_test)
print(score)
###Output
0.9473684210526315
###Markdown
You can also compare directly to check which samples were predicted correctly and which were not.
###Code
pred_y=model.predict(X_test)
pred_y==y_test
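# Hedged sketch (an extra check, not in the original): a confusion matrix summarizes
# the same right/wrong comparison per class in one table.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, pred_y))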
###Output
_____no_output_____
###Markdown
Step 5: Saving the model If you save the model, the model trained now can be loaded and used again later.
###Code
from joblib import dump
dump(model,'model.joblib')
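# Hedged sketch, assuming this runs in Google Colab (as the note below recommends):
# download the saved model file so it survives the end of the runtime session.
import sys
if 'google.colab' in sys.modules:
    from google.colab import files
    files.download('model.joblib')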
###Output
_____no_output_____
###Markdown
Note. - In Colab, the file saved above disappears when the current runtime session ends, so it is recommended to download it for later use. Step 6: Using the model in a service If training was done previously, the saved model can be loaded and used for evaluation right away.
###Code
from joblib import load
model_rebuild = load('model.joblib')
predict_y = model_rebuild.predict(X_test)
predict_y == y_test
###Output
_____no_output_____ |
tensorflow/lite/g3doc/models/style_transfer/overview.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started: Android example, iOS example. If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image must be (1, 384, 384, 3). We central crop the image and resize it.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process an image by resizing and central cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes target_dim pixels.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
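# Hedged sketch (not part of the original tutorial): as noted in the architecture
# section, for a fixed set of style images the bottleneck vectors can be precomputed
# once and saved, letting an app ship without the Style Prediction Model. The dict of
# style paths and the output file name below are illustrative assumptions.
fixed_styles = {'style23': style_path}
precomputed = {name: run_style_predict(preprocess_image(load_img(path), 256))
               for name, path in fixed_styles.items()}
np.savez('style_bottlenecks.npz', **precomputed)
print('Saved precomputed bottlenecks for:', list(precomputed.keys()))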
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Function to run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
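# Hedged sketch (an optional extension, not in the original): sweeping a few blending
# ratios side by side makes the effect of content_blending_ratio easier to compare.
# The three ratio values are arbitrary illustrations.
plt.figure(figsize=(12, 4))
for i, ratio in enumerate([0.0, 0.5, 1.0]):
    blended = ratio * style_bottleneck_content + (1 - ratio) * style_bottleneck
    out = run_style_transform(blended, preprocessed_content_image)
    plt.subplot(1, 3, i + 1)
    imshow(out, 'ratio = %.1f' % ratio)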
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example application that can help you get started: Android example. If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Function to run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) This tutorial shows how to use a pre-trained TensorFlow Lite model to apply style transfer on any pair of content and style image. You can use the pre-trained model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Function to run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started: Android example, iOS example. If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image must be (1, 384, 384, 3). We central crop the image and resize it.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process an image by resizing and central cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes target_dim pixels.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Function to run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) This tutorial shows how to use a pre-trained TensorFlow Lite model to apply style transfer on any pair of content and style image. You can use the pre-trained model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Function to run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png) Using this technique, we can generate beautiful new artworks in a range of styles. ![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png) This tutorial shows how to use a pre-trained TensorFlow Lite model to apply style transfer on any pair of content and style image. You can use the pre-trained model to add style transfer to your own mobile applications. The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png) This Artistic Style Transfer model consists of two submodels: 1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector. 1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image. If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
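###Markdown
The introduction notes that an app supporting only a fixed set of styles can precompute the style bottleneck vectors ahead of time and ship without the Style Prediction Model. The cell below is a minimal sketch of that precomputation step, not part of the original tutorial: the `my_styles/*.jpg` folder is a hypothetical placeholder, and the code simply reuses the `load_img`, `preprocess_style_image` and `run_style_predict` helpers defined above.
###Code
import glob
import numpy as np
# Hypothetical folder of style assets; replace with your own images.
style_paths = sorted(glob.glob('my_styles/*.jpg'))
bottlenecks = {}
for i, path in enumerate(style_paths):
    # Load, resize/crop to 256x256, then run the prediction model once per style.
    style = preprocess_style_image(load_img(path))
    bottlenecks['style_%d' % i] = run_style_predict(style)
# Persist the vectors; an app can then bundle only this file and the transform model.
np.savez('style_bottlenecks.npz', **bottlenecks)
###Output
_____no_output_____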
###Markdown
Style transform
###Code
# Run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blendingWe can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png)Using this technique, we can generate beautiful new artworks in a range of styles.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png)This tutorial shows how to use a pre-trained TensorFlow Lite model to apply style transfer on any pair of content and style image. You can use the pre-trained model to add style transfer to your own mobile applications.The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png)This Artistic Style Transfer model consists of two submodels:1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector.1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blendingWe can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
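###Markdown
The blending cell above uses a single ratio of 0.5. As a usage example, the sketch below (not part of the original tutorial; the ratio values are arbitrary) sweeps a few blending ratios and shows the results side by side, reusing the bottlenecks and helpers already computed in this notebook.
###Code
ratios = [0.0, 0.25, 0.5, 0.75, 1.0]
plt.figure(figsize=(20, 4))
for i, ratio in enumerate(ratios):
    # ratio = 0.0 keeps only the style image's style; 1.0 keeps only the style
    # extracted from the content image itself.
    blended = ratio * style_bottleneck_content + (1 - ratio) * style_bottleneck
    result = run_style_transform(blended, preprocessed_content_image)
    plt.subplot(1, len(ratios), i + 1)
    imshow(result, 'ratio = %.2f' % ratio)
###Output
_____no_output_____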
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png)Using this technique, we can generate beautiful new artworks in a range of styles.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png)If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started. Android example If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications.The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png)This Artistic Style Transfer model consists of two submodels:1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector.1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_384.tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image must be (1, 384, 384, 3). We central crop the image and resize it.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process an image by resizing and central-cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes target_dim pixels.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
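###Markdown
As a quick sanity check of the constraints listed above, the cell below (a minimal sketch, not part of the original tutorial) verifies that preprocessing produced exactly the fixed input sizes these quantized models expect.
###Code
# The prediction model expects a (1, 256, 256, 3) style image and this
# transform model a (1, 384, 384, 3) content image.
assert preprocessed_style_image.shape == (1, 256, 256, 3)
assert preprocessed_content_image.shape == (1, 384, 384, 3)
print('Preprocessed inputs have the expected shapes.')
###Output
_____no_output_____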
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
###Markdown
Style blendingWe can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
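###Markdown
The cells above only display the stylized results inline. If you want to keep one, the sketch below (not part of the original tutorial; the output filename is arbitrary) writes the blended result to a PNG file.
###Code
# Drop the batch dimension, clip to [0, 1], and convert to uint8 for PNG encoding.
image = tf.squeeze(stylized_image_blended, axis=0)
image = tf.clip_by_value(image, 0.0, 1.0)
image = tf.image.convert_image_dtype(image, tf.uint8)
tf.io.write_file('stylized_belfry.png', tf.io.encode_png(image))
###Output
_____no_output_____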
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png)Using this technique, we can generate beautiful new artworks in a range of styles.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png)If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started. Android example If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications.The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png)This Artistic Style Transfer model consists of two submodels:1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector.1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')
###Output
_____no_output_____
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image must be (1, 384, 384, 3). We central crop the image and resize it.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process an image by resizing and central-cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes target_dim pixels.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Run style transform on preprocessed content image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
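###Markdown
The notebook imports `time` but never uses it. The sketch below (not part of the original tutorial) reuses it to get a rough wall-clock latency for the transform step; the numbers depend entirely on your hardware, and since `run_style_transform` reloads the interpreter on every call, this overstates pure inference time.
###Code
# Warm up once, then average a few full calls to the helper defined above.
run_style_transform(style_bottleneck, preprocessed_content_image)
n_runs = 5
start = time.time()
for _ in range(n_runs):
    run_style_transform(style_bottleneck, preprocessed_content_image)
print('Average transform time: %.3f s' % ((time.time() - start) / n_runs))
###Output
_____no_output_____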
###Markdown
Style blendingWe can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracted from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Artistic Style Transfer with TensorFlow Lite View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook One of the most exciting developments in deep learning to come out recently is [artistic style transfer](https://arxiv.org/abs/1508.06576), or the ability to create a new image, known as a [pastiche](https://en.wikipedia.org/wiki/Pastiche), based on two input images: one representing the artistic style and one representing the content.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/formula.png)Using this technique, we can generate beautiful new artworks in a range of styles.![Style transfer example](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/table.png)If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started. Android example If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications.The model is open-sourced on [GitHub](https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylizationtrain-a-model-on-a-large-dataset-with-data-augmentation-to-run-on-mobile). You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image). Understand the model architecture ![Model Architecture](https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/architecture.png)This Artistic Style Transfer model consists of two submodels:1. **Style Prediction Model**: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector.1. **Style Transform Model**: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary. Setup Import dependencies.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
###Output
_____no_output_____
###Markdown
Download the content and style images, and the pre-trained TensorFlow Lite models.
###Code
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_predict_quantized_256.tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/arbitrary_style_transfer/style_transfer_quantized_dynamic.tflite')
###Output
_____no_output_____
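###Markdown
Before writing the preprocessing code, it can help to inspect what input shapes and dtypes the two downloaded models actually expect. The cell below is a minimal sketch (not part of the original tutorial) that loads each model once and prints its input metadata.
###Code
for name, path in [('style predict', style_predict_path),
                   ('style transform', style_transform_path)]:
    interpreter = tf.lite.Interpreter(model_path=path)
    for detail in interpreter.get_input_details():
        # 'shape' is the default input shape; dynamic dimensions can be
        # resized later with resize_tensor_input().
        print(name, detail['name'], detail['shape'], detail['dtype'])
###Output
_____no_output_____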
###Markdown
Pre-process the inputs* The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].* The style image size must be (1, 256, 256, 3). We central crop the image and resize it.* The content image can be any size. However, as we trained the model using square-cropped data, cropping the content image to a square results in a better stylized image.
###Code
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process style image input.
def preprocess_style_image(style_image):
# Resize the image so that the shorter dimension becomes 256px.
target_dim = 256
shape = tf.cast(tf.shape(style_image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
style_image = tf.image.resize(style_image, new_shape)
# Central crop the image.
style_image = tf.image.resize_with_crop_or_pad(style_image, target_dim, target_dim)
return style_image
# Function to pre-process content image input.
def preprocess_content_image(content_image):
# Central crop the image.
shape = tf.shape(content_image)[1:-1]
short_dim = min(shape)
content_image = tf.image.resize_with_crop_or_pad(content_image, short_dim, short_dim)
return content_image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_content_image(content_image)
preprocessed_style_image = preprocess_style_image(style_image)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
###Output
_____no_output_____
###Markdown
Visualize the inputs
###Code
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
###Output
_____no_output_____
###Markdown
Run style transfer with TensorFlow Lite Style prediction
###Code
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
###Output
_____no_output_____
###Markdown
Style transform
###Code
# Run style transform on preprocessed style image
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"],
preprocessed_content_image.shape)
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
###Output
_____no_output_____
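###Markdown
As noted in the introduction, an app that only supports a fixed set of style images can compute the style bottleneck vectors ahead of time and exclude the Style Prediction model from its binary. The next cell is an added, minimal sketch of that idea, reusing the helpers defined above; the single downloaded `style_path` stands in for your fixed style set.
###Code
# Added sketch: precompute and cache the style bottleneck for a fixed set of style images,
# so the Style Prediction model does not need to run at inference time.
fixed_style_paths = [style_path]  # extend with the paths of your own fixed style images
precomputed_bottlenecks = {
    path: run_style_predict(preprocess_style_image(load_img(path)))
    for path in fixed_style_paths
}
print({path: bottleneck.shape for path, bottleneck in precomputed_bottlenecks.items()})
###Output
_____no_output_____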
###Markdown
Style blending We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
###Code
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_style_image(content_image)
)
# Define content blending ratio between [0..1].
# 0.0: 0% of the style is extracted from the content image.
# 1.0: 100% of the style is extracted from the content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
###Output
_____no_output_____ |
3_data_visualization.ipynb | ###Markdown
Facebook Visualization
###Code
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
# Staging directories
dir_df = os.path.join(os.path.abspath(''),'stg')
dir_out = os.path.join(os.path.abspath(''),'out')
# Dataset Salvini
df_filename = r'df_posts_likes_salvini.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_salvini = pd.read_pickle(df_fullpath)
# Statistics
# Number of posts
df_posts_salvini['ID'].count()
# Number of likes
df_posts_salvini['Likes'].sum()
# Dataset Renzi
df_filename = r'df_posts_likes_renzi.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_renzi = pd.read_pickle(df_fullpath)
# Statistics
# Number of posts
df_posts_renzi['ID'].count()
# Number of likes
df_posts_renzi['Likes'].sum()
# Dataset M5S
df_filename = r'df_posts_likes_m5s.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts_m5s = pd.read_pickle(df_fullpath)
# Statistics
# Number of posts
df_posts_m5s['ID'].count()
# Number of likes
df_posts_m5s['Likes'].sum()
###Output
_____no_output_____
###Markdown
1. Comparison of Salvini, Renzi and M5S
###Code
# Time dimension -> YEAR
df_posts_m5s['Post_Date'] = df_posts_m5s['Post_Date'].str[:4]
df_posts_salvini['Post_Date'] = df_posts_salvini['Post_Date'].str[:4]
df_posts_renzi['Post_Date'] = df_posts_renzi['Post_Date'].str[:4]
df_posts_m5s = df_posts_m5s.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_salvini = df_posts_salvini.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_renzi = df_posts_renzi.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
df_posts_m5s.rename(columns={'ID': 'Posts_M5S', 'Likes': 'Likes_M5S'}, inplace=True)
df_posts_m5s = df_posts_m5s.set_index(['Post_Date'])
df_posts_m5s.head(2)
df_posts_renzi.rename(columns={'ID': 'Posts_Renzi', 'Likes': 'Likes_Renzi'}, inplace=True)
df_posts_renzi = df_posts_renzi.set_index(['Post_Date'])
df_posts_renzi.head(2)
df_posts_salvini.rename(columns={'ID': 'Posts_Salvini', 'Likes': 'Likes_Salvini'}, inplace=True)
df_posts_salvini = df_posts_salvini.set_index(['Post_Date'])
df_posts_salvini.head(2)
# Number of posts
result_post = pd.concat([df_posts_renzi, df_posts_salvini, df_posts_m5s], axis=1)
del result_post['Likes_Renzi']
del result_post['Likes_Salvini']
del result_post['Likes_M5S']
result_post.rename(columns={'Posts_Renzi': 'Renzi', 'Posts_Salvini': 'Salvini', 'Posts_M5S': 'M5S'}, inplace=True)
result_post.plot(
kind='bar'
)
result_post.to_csv(os.path.join(dir_out,r'Distr_Posts.csv'),header=True, index=True)
# Number of likes
result_likes = pd.concat([df_posts_renzi, df_posts_salvini, df_posts_m5s], axis=1)
del result_likes['Posts_Renzi']
del result_likes['Posts_Salvini']
del result_likes['Posts_M5S']
result_likes.rename(columns={'Likes_Renzi': 'Renzi', 'Likes_Salvini': 'Salvini', 'Likes_M5S': 'M5S'}, inplace=True)
result_likes.plot(
kind='bar'
)
result_likes.to_csv(os.path.join(dir_out,r'Distr_Likes.csv'),header=True, index=True)
###Output
_____no_output_____
###Markdown
2. Salvini in detail
###Code
# Dataset Salvini
df_filename = r'df_posts_likes_salvini.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df_posts = pd.read_pickle(df_fullpath)
# Extract the date from the string
df_posts['Post_Date'] = df_posts['Post_Date'].str[:10]
# Convert to datetime
df_posts['Post_Date'] = pd.to_datetime(df_posts['Post_Date'])
# Sort by date
df_posts = df_posts.sort_values(by='Post_Date')
# Keep the full per-date dataset for later analyses
df_posts_dett = df_posts
df_posts_dett = df_posts_dett.set_index(['Post_Date'])
# Group by date
df_posts = df_posts.groupby('Post_Date',as_index=False).agg({'ID':'count', 'Likes': 'sum'})
# Drop the dates with no liked posts (privacy?)
df_posts = df_posts[np.isfinite(df_posts['Likes'])]
# Set the date as index
df_posts = df_posts.set_index(['Post_Date'])
# Work with the time series: aggregate by year/month (the dates were daily)
df_posts = df_posts.groupby(pd.Grouper(freq="M")).sum()
# Drop the post count
del df_posts['ID']
# OK, the totals still match after these transformations
df_posts['Likes'].sum()
###Output
_____no_output_____
###Markdown
2.1. Overall distribution of likes on Salvini's posts
###Code
df_posts.sort_values(by='Likes').head(5)
# Build the chart; the goal is to analyse the peaks and understand which event each one is linked to
tp = df_posts.plot(
    marker='o',
    markersize=7,
    # x-axis from 0 to 84
    markevery=[60,63,65,70])
tp.set_xlabel("Post date")
vals = tp.get_yticks()
tp.set_yticklabels(['{:,.0f}'.format(x) for x in vals])
fig_posts = tp.get_figure()
fig_posts.tight_layout()
fig_posts.savefig(os.path.join(dir_out,'Distr_Posts_Salvini.png'), format='png', dpi=1000)
###Output
_____no_output_____
###Markdown
2.2. Focus on the years 2014, 2015 and 2016
###Code
df_post_14 = df_posts['20140101':'20141231']
tp_14 = df_post_14.plot()
fig_posts_14 = tp_14.get_figure()
fig_posts_14.tight_layout()
fig_posts_14.savefig(os.path.join(dir_out,'posts_2014.png'), format='png', dpi=300)
# 2014 detail
df_posts_dett['20140101':'20141231'].sort_values(by=['Likes'],ascending=False).head(5)
# The post IDs are inspected directly via Facebook's Graph API explorer
###Output
_____no_output_____
###Markdown
Main posts between October and November 1. (90,000 likes) "A 28-year-old Tunisian RAPIST, already in jail for sexual assault, escaped from the Pordenone prison and raped a 28-year-old woman. He has been arrested. If I were minister, I would apply CHEMICAL CASTRATION (as already tried in several European countries) and then send him back to Tunisia. What do you say?"
###Code
df_post_15 = df_posts['20150101':'20151231']
tp_15 = df_post_15.plot()
fig_posts_15 = tp_15.get_figure()
fig_posts_15.tight_layout()
fig_posts_15.savefig(os.path.join(dir_out,'posts_2015.png'), format='png', dpi=300)
# 2015 check
df_posts_dett['20150101':'20151231'].sort_values(by=['Likes'],ascending=False).head(10)
# The post IDs are inspected directly via Facebook's Graph API explorer
###Output
_____no_output_____
###Markdown
1. (135,000 likes) "Guys, unbelievable! Listen and share. Saturday and Sunday everyone in the square, come and sign." (the video interviews two underage Roma girls who boasted about stealing; it was later found to be fake) 2. "A 41-year-old mother, separated and with two children, hanged herself near Bologna. Her gas had been cut off, and she risked eviction in July. A prayer for this mother, a hug for her two little ones aged 10 and 11, whom we will not leave alone, and a lot of anger. Italian State, where are you?" 3. "TO BE DONE IMMEDIATELY. Military support to Russia to annihilate ISIS, border controls, a stop to the landings and expulsion of illegal immigrants, blanket checks on all illegal occupations of our public housing estates, from Milan to Palermo. They have declared WAR on us. And a war is not answered with the chatter of Renzi and the useless Alfano!"
###Code
df_post_16 = df_posts['20160101':'20161231']
tp_16 = df_post_16.plot()
fig_posts_16 = tp_16.get_figure()
fig_posts_16.tight_layout()
fig_posts_16.savefig(os.path.join(dir_out,'posts_2016.png'), format='png', dpi=300)
###Output
_____no_output_____
###Markdown
TODO:- [x] Ask Farbod if all ray-casted data (MSW + BMBF) is on the folder `MSW-BMBF data/Processed Raycast Data/`, inside the folders `CsvData MSW-Left` and `CsvData MSW-Right`- [x] Ask about the meaning of the different hits `centerHit`, `centerHitGroup`, `boxHit`, `boxHitGroup`. Also, what is `presentObjectName` and `presentObjectGroup`- [x] Best way (column) to check possible poor estimate of fixations? => `centerHitGroup` or `boxHitGroup`- [x] FPS of the experiment (average)? -> last frame count (it should be 3070 if experiment finished) - first (400) / 90 (seconds). => Should be 29.6 FPS (~ 30 FPS)- [x] Why the ray-casted CSV files sizes differ so much? Is it because unfinished experiments? => Yes, but mostly because the colliders values differ (text length on plain text files)- [x] TAM scores. Do they come from each column or do they need to be calculated? So, `trust` column? => Direct scores, no need to calculate anything- [x] Check Jasmine's presentation to know how to differentiate between saccades and fixations and apply it- [x] Select all ray-casted data participants that answered the questionnaire- [x] Find out why the participants count (colliders and questionnaires) differ from Farbod's notebook: https://github.com/farbod69/TAM-Data-Analyis/blob/master/Readme.md- [x] Check if the `HitGroups` can be directly categorized between traffic-relevant and non-traffic-relevant- [x] Take the preprocessed questionnaires, process them dropping the participants with missing trust values, remove duplicates- [ ] Check NaNs properly on data_cleanup- [ ] Remove examples that make no sense (e.g. age90)- [ ] Plot the Trust variable, explore it like Age- [x] Reorganize dataframe- [x] Add multiprocessing while checking all participants- [ ] Plot most fixated vs saccaded objects- [ ] Plot Trust vs most-fixated objects/group (TR vs non-TR) __Dependencies__
###Code
import copy # copy big/deep objects by value
import datetime # datetime operations
import itertools # operate with iterators
import json # read/write from/into json format
import os # OS operations (read/write files/folders)
import warnings # hide warnings
# process parallelization
from multiprocessing import Manager, Pool, RawArray, cpu_count
import matplotlib.pyplot as plt # mother of plots for Python
import matplotlib.ticker as ticker # matplotlib ticker utils
import numpy as np # array/matrix operations (e.g. linear algebra)
import pandas as pd # operate with dataframes
import seaborn as sns # matplotlib plotting nice with shortcuts
from IPython.display import display # print nicely
from tqdm.notebook import tqdm # mother of progressbars for Python
# from matplotlib.ticker import FormatStrFormatter # tick formatter
###Output
_____no_output_____
###Markdown
__Options__
###Code
warnings.filterwarnings("ignore")
# set default float display format to 2 decimals
pd.options.display.float_format = "{:.2f}".format
style = "darkgrid"
sns.set_style(style) # set seaborn plotting style
# static plots
%matplotlib inline
# interactive plots
# %matplotlib widget
cores = cpu_count() # number of cpu threads for multiprocessing
print(f"Total CPU threads: {cores}")
###Output
Total CPU threads: 12
###Markdown
__Read participants cleaned data__
###Code
parts = pd.read_csv("./data/participants.csv", keep_default_na=False)
parts = parts.set_index("uid") # set uid column as index (remove default)
parts
###Output
_____no_output_____
###Markdown
__Descriptive statistics from each numerical variable__
###Code
cols = list(parts.describe().columns.difference(["frames"]))
parts[cols].describe()
###Output
_____no_output_____
###Markdown
__Distribution of each numerical variable__
###Code
plt.figure(figsize=(20, 12))
sns.boxplot(data=parts[parts.columns.difference(["frames"])], order=cols)
plt.show()
###Output
_____no_output_____
###Markdown
__Correlation matrix of the numerical variables__
###Code
# remove the number of frames as it's uninformative
parts_an = copy.deepcopy(parts[parts.columns.difference(["frames"])])
cm_parts = parts_an.corr(method="pearson")
display(cm_parts)
# generate a mask to drop the upper part of the matrix (duplicated info)
mask = np.triu(np.ones_like(cm_parts, dtype=np.bool))
# figure size
plt.figure(figsize=(10, 8))
# display the correlation matrix as a heatmap
sns.heatmap(cm_parts, annot=True, mask=mask)
plt.show()
###Output
_____no_output_____
###Markdown
__Population count and density for each categorical variable__
###Code
# categorical columns to compute descriptive statistics
cols = ["Gender", "VR", "expo", "condition"]
for col in cols: # for each defined column
counts = parts[col].value_counts()
percents = parts[col].value_counts(normalize=True).mul(100)
df = pd.DataFrame({"count": counts, "%": percents})
df.index.name = col
# show counts vs percentage
display(df)
# plot directly without the previous calculations (seaborn takes care of it)
plt.figure(figsize=(10, 8))
ax = sns.histplot(data=parts, x=col, hue=col, stat="density", legend=False)
ax.grid(False, axis="x")
plt.show()
###Output
_____no_output_____
###Markdown
__Age distribution__
###Code
df_age = pd.DataFrame(parts.Age.describe())
df_age.loc["mode"] = parts.Age.mode()[0]
display(df_age)
ax = sns.histplot(
data=parts,
x="Age",
kde=True,
discrete=True,
stat="density",
)
xticks = [a for a in range(100)]
ax.set(xticks=xticks, xticklabels=xticks)
ax.grid(False, axis="x")
ax.margins(x=0)
plt.title("Age distribution (MSW+BMBF)")
plt.gcf().set_size_inches(23, 10)
plt.show()
# normalize and get percentage
age_counts = parts.Age.value_counts(normalize=True).mul(100)
ages = age_counts.index.values
age_per = pd.DataFrame({"%": age_counts.values}, index=ages)
age_per.index.name = "Age"
display(age_per)
ax = sns.barplot(
data=age_per, x=age_per.index, y="%", order=ages, color="teal"
)
ax.grid(False, axis="x")
plt.title("Age distribution (MSW+BMBF) - Largest to smallest")
plt.gcf().set_size_inches(23, 10)
plt.show()
###Output
_____no_output_____
###Markdown
__Gaze Definition__ Participants data
###Code
parts
###Output
_____no_output_____
###Markdown
Select a participant to test Jasmine's Gaze definition (saccades and fixations differentiation)
###Code
# select first participant colliders file
file = parts.iloc[0].file # get filename
part_col = pd.read_csv(f"./data/colliders/{file}", keep_default_na=False)
###Output
_____no_output_____
###Markdown
Get average framerate
###Code
first_frame = part_col.frameNumber.iloc[0]
last_frame = part_col.frameNumber.iloc[-1]
frame_diff = last_frame - first_frame
afps = frame_diff / 90
afps
###Output
_____no_output_____
###Markdown
Assuming a constant framerate (sampling frequency, FPS), the sampling period (the time between samples, in seconds) is:
###Code
asp = 1 / afps
asp
###Output
_____no_output_____
###Markdown
From here on we follow this assumption. So we're going to use the latter average FPS (sampling frequency) and sampling period (1/SF).
###Code
# Every run of consecutive hit points lasting between 0 and 100 ms after
# a fixation/gaze (>= 260-330 ms) will be labeled as a saccade
saccade_l = (0, 0.1) # saccade length (<=100ms)
gaze_l = (0.26, 0.33) # gaze length (min, max)
###Output
_____no_output_____
###Markdown
Given our SF/SP (≈29.67 FPS / 0.0337 s), how many hit points do we need to find a saccade and a gaze?
###Code
hp_saccade = 0.1 / asp
hp_saccade
###Output
_____no_output_____
###Markdown
A saccade should take no more than about 1-3 hit points
###Code
hp_gaze = (0.26 / asp, 0.33 / asp)
hp_gaze
###Output
_____no_output_____
###Markdown
A gaze should span at least 7-10 hit points. Thus we will consider anything between 1 and 6 hit points a saccade and anything with 7 or more hit points a gaze/fixation.
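The next cell is an added illustrative sketch (hypothetical run lengths, not the recorded data) of how this threshold splits runs of consecutive hit points into nulls, saccades and gazes.
###Code
# Added sketch: classify the length of a run of consecutive hit points.
def classify_run(n_hits):
    if n_hits == 0:
        return "null"
    if n_hits <= 6:
        return "saccade"
    return "gaze"

example_runs = [0, 2, 5, 7, 12]  # hypothetical consecutive-hit counts
print({n: classify_run(n) for n in example_runs})
###Output
_____no_output_____
###Markdown
__Distribution of hits__ Read the processed hits data.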
###Code
# without index_col=0 (the default is None), pandas generates a new integer index instead of using the first column
hits_df = pd.read_csv("./data/consecutive_hits.csv", keep_default_na=False)
hits_df
###Output
_____no_output_____
###Markdown
Calculate nulls vs rest
###Code
# all columns except for nulls counter (0), and hitType
cols = list(hits_df.describe().columns.difference(["hitType", "0"]))
hits_count = {"uid": [], "type": [], "nulls": [], "hits": []}
pbar = tqdm(iterable=hits_df.index.values)
for row in pbar:
current = hits_df.loc[row]
hits_count["uid"].append(current.uid)
hits_count["type"].append(current.hitType)
hits_count["nulls"].append(current["0"])
total = sum([int(col) * current[col] for col in cols])
hits_count["hits"].append(total)
hits_count = pd.DataFrame(hits_count)
display(hits_count)
pbar = tqdm(iterable=hits_count.index.values)
perc = {
"c-nulls": [],
"c-hits": [],
"b-nulls": [],
"b-hits": [],
}
for row in pbar:
current = hits_count.loc[row]
nulls = current.nulls
hits = current.hits
total = nulls + hits
nulls_per = nulls * 100 / total
hits_per = hits * 100 / total
if current.type == "center":
perc["c-nulls"].append(nulls_per)
perc["c-hits"].append(hits_per)
else:
perc["b-nulls"].append(nulls_per)
perc["b-hits"].append(hits_per)
print(f"Average % center nulls: {sum(perc['c-nulls'])/len(perc['c-nulls'])}")
print(f"Average % center hits: {sum(perc['c-hits'])/len(perc['c-hits'])}")
print(f"Average % box nulls: {sum(perc['b-nulls'])/len(perc['b-nulls'])}")
print(f"Average % box hits: {sum(perc['b-hits'])/len(perc['b-hits'])}")
%matplotlib widget
hits_count = pd.DataFrame(hits_count)
ax = sns.histplot(
data=hits_count[hits_count.type == "center"],
# x="nulls",
# hue="type",
kde=True,
# discrete=True,
stat="probability",
# multiple="stack"
)
# xticks = [a for a in range(100)]
# ax.set(xticks=xticks, xticklabels=xticks)
# ax.grid(False, axis="x")
# ax.margins(x=0)
plt.title("Center Hits PDF")
plt.gcf().set_size_inches(20, 10)
plt.show()
%matplotlib widget
hits_count = pd.DataFrame(hits_count)
ax = sns.histplot(
data=hits_count[hits_count.type == "box"],
# x="nulls",
# hue="type",
kde=True,
# discrete=True,
stat="probability",
# multiple="stack"
)
# xticks = [a for a in range(100)]
# ax.set(xticks=xticks, xticklabels=xticks)
# ax.grid(False, axis="x")
# ax.margins(x=0)
plt.title("Box Hits PDF")
plt.gcf().set_size_inches(20, 10)
plt.show()
# percents = hits_count[col].value_counts(normalize=True).mul(100)
# df = pd.DataFrame({"count": counts, "%": percents})
# avg_hits = {"type"}
# types = hits_count.type.unique()
# for t in types:
# nulls = hits_count[hits_count.type == t].nulls.sum()
# hits = hits_count[hits_count.type == t].hits.sum()
display(hits_count[["nulls", "hits"]].count())
###Output
_____no_output_____
###Markdown
Nulls vs apparent saccades and apparent gazes
###Code
# participants
ids = parts.index.to_list()
N = len(ids)
uids = (uid for uid in ids)
# progress bar format definitons
m_format = """📄 {n_fmt} of {total_fmt} {desc} processed: {bar}
{percentage:3.0f}% ⏱️{elapsed} ⏳{remaining} 📅{eta:%d/%m/%y}
🕒{eta:%H:%M}"""
# cs progress bar
parts_progress = tqdm(
iterable=uids,
total=N,
desc="📂 participants",
dynamic_ncols=True,
mininterval=0.001,
bar_format=m_format,
)
nsg_comp = pd.DataFrame(
columns=[
"b-null",
"c-null",
"b-saccades",
"c-saccades",
"b-gazes",
"c-gazes",
],
index=hits_df.index,
)
nsg_perc = copy.deepcopy(nsg_comp)
b_cols = hits_df.filter(like="b").columns
c_cols = hits_df.filter(like="c").columns
bcols = nsg_comp.filter(like="b-").columns
ccols = nsg_comp.filter(like="c-").columns
for uid in parts_progress:
nsg_comp.loc[uid]["b-null"] = hits_df.loc[uid][b_cols[0]]
nsg_comp.loc[uid]["c-null"] = hits_df.loc[uid][c_cols[0]]
nsg_comp.loc[uid]["b-saccades"] = hits_df.loc[uid][b_cols[1:7]].sum()
nsg_comp.loc[uid]["c-saccades"] = hits_df.loc[uid][c_cols[1:7]].sum()
nsg_comp.loc[uid]["b-gazes"] = hits_df.loc[uid][
b_cols[7 : len(b_cols)]
].sum()
nsg_comp.loc[uid]["c-gazes"] = hits_df.loc[uid][
c_cols[7 : len(c_cols)]
].sum()
# turn into percentages
btotal = nsg_comp.loc[uid][bcols].sum()
ctotal = nsg_comp.loc[uid][ccols].sum()
for col in bcols:
nsg_perc.loc[uid][col] = nsg_comp.loc[uid][col] * 100 / btotal
for col in ccols:
nsg_perc.loc[uid][col] = nsg_comp.loc[uid][col] * 100 / ctotal
nsg_comp
nsg_comp = nsg_comp.astype("int32")
nsg_comp.describe()
nsg_perc
nsg_perc = nsg_perc.astype("float")
nsg_perc.describe()
###Output
_____no_output_____
###Markdown
Check population vs nulls relation in percentages
###Code
percentages = [i + 1 for i in range(100)]
percentages
null_per = pd.DataFrame(columns=["box", "center"], index=percentages)
null_per.index.name = "% null"
total = nsg_perc.index.size
for htyp in null_per.columns:
for p in percentages:
count = nsg_perc[nsg_perc[f"{htyp[0]}-null"] >= p].index.size
null_per.loc[p][htyp] = count * 100 / total
null_per = null_per.astype("float")
display(null_per)
ax = sns.lineplot(data=null_per)
ax.set_ylabel("% Population")
plt.title("Population vs Nulls in %")
plt.gcf().set_size_inches(23, 10)
plt.show()
###Output
_____no_output_____
###Markdown
Check saccades and gazes vs population
###Code
percentages = [i + 1 for i in range(100)]
percentages
sg_per = pd.DataFrame(
columns=["b-saccades", "c-saccades", "b-gazes", "c-gazes"],
index=percentages,
)
sg_per.index.name = "%"
total = nsg_perc.index.size
for col in sg_per.columns:
for p in percentages:
count = nsg_perc[nsg_perc[col] >= p].index.size
sg_per.loc[p][col] = count * 100 / total
sg_per = sg_per.astype("float")
display(sg_per)
ax = sns.lineplot(data=sg_per)
ax.set_ylabel("% Population")
plt.title("Population vs Saccades and Gazes%")
plt.gcf().set_size_inches(23, 10)
plt.show()
combined_per = pd.concat([null_per, sg_per])
ax = sns.lineplot(data=combined_per)
ax.set_ylabel("% Population")
plt.title("Population vs Saccades and Gazes%")
plt.gcf().set_size_inches(23, 10)
plt.show()
display(nsg_perc)
bcols = nsg_perc.filter(like="b-").columns
ccols = nsg_perc.filter(like="c-").columns
bren = {col: col.split("-")[1] for col in bcols}
cren = {col: col.split("-")[1] for col in ccols}
nsg = copy.deepcopy(nsg_perc[bcols]).reset_index()
nsg = nsg.rename(columns=bren)
nsg_c = copy.deepcopy(nsg_perc[ccols]).reset_index()
nsg_c = nsg_c.rename(columns=cren)
nsg["hit"] = "center"
nsg_c["hit"] = "box"
nsg = pd.concat([nsg, nsg_c])
display(nsg)
ax = sns.catplot(data=nsg, kind="bar", x="hit")
# ax = sns.catplot(data=cmean, kind="bar")
# ax.set_ylabel("%")
plt.title("Population vs Saccades vs Gazes distribution")
plt.gcf().set_size_inches(23, 10)
plt.show()
ax = sns.boxplot(data=nsg_perc)
# ax.set_ylabel("% Population")
plt.title("Distribution (%) of nulls, saccades and gazes")
plt.gcf().set_size_inches(23, 10)
plt.show()
###Output
_____no_output_____
###Markdown
How many participants have hits larger than 10s?
###Code
# participants
uids = parts.index
# column index to start looking for consecutive hp > 10s
threshold = 296 # 296 * asp = 9.978s
bcols = hits_df.filter(like="b-").columns
ccols = hits_df.filter(like="c-").columns
# progress bar format definitons
m_format = """📄 {n_fmt} of {total_fmt} {desc} processed: {bar}
{percentage:3.0f}% ⏱️{elapsed} ⏳{remaining} 📅{eta:%d/%m/%y}
🕒{eta:%H:%M}"""
# cs progress bar
parts_progress = tqdm(
uids,
desc="📂 participants",
dynamic_ncols=False,
ncols=None, # "100%" breaks the pbar
mininterval=0.001,
bar_format=m_format,
)
def check_participant(uid):
for col in bcols[threshold : len(bcols)]:
if hits_df.loc[uid][col] != 0:
bout.append(uid)
break
for col in ccols[threshold : len(ccols)]:
if hits_df.loc[uid][col] != 0:
cout.append(uid)
break
manager = Manager()
bout = manager.list()
cout = manager.list()
pool = Pool(processes=cores)
# use map (rather than the lazy imap) so all tasks are dispatched, then wait for the workers
pool.map(check_participant, parts_progress)
pool.close()
pool.join()
150 * asp
###Output
_____no_output_____
###Markdown
Outliers
###Code
print("Box:")
print(len(bout))
# display(list(bout))
print("Center:")
print(len(cout))
# display(list(cout))
# for col in nsg_perc.columns:
# display(nsg_perc[col].value_counts())
display(pd.DataFrame(nsg_perc["b-null"].value_counts()).sort_index())
display(pd.DataFrame(nsg_perc["c-null"].value_counts()).sort_index())
###Output
_____no_output_____
###Markdown
Detect outliers (participants that removed their HMDs but the experiment kept running)
###Code
cols_time = []
# gs_dist = {key * asp: value for key, value in gs_dist.items() if value != 0}
cols = [f"{l}-{i}" for i in range(10) for l in ["b", "c"]]
cols.append("b-null")
cols.append("c-null")
sample = hits_df.head(10)[cols]
display(sample)
# display(sample.columns)
for c in cols:
ax = sns.barplot(
data=sample,
x=c,
y=sample[c].values,
# hue=sample.index,
# color="teal"
# order=cols
)
ax.grid(False, axis="x")
ax.margins(x=0)
plt.title("Frequency of the length of consecutive hits")
plt.gcf().set_size_inches(23, 10)
plt.show()
###Output
_____no_output_____
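###Markdown
An added, simpler sketch of the same idea: flag participants whose share of null hits exceeds a threshold, using the `nsg_perc` table computed earlier. The 50% cutoff below is an arbitrary assumption for illustration.
###Code
# Added sketch: flag participants whose null share suggests the HMD may have been removed.
null_threshold = 50.0  # assumed cutoff, in percent
suspects = nsg_perc[
    (nsg_perc["b-null"] > null_threshold) | (nsg_perc["c-null"] > null_threshold)
]
print(f"{suspects.index.size} participants exceed {null_threshold}% null hits")
###Output
_____no_output_____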
###Markdown
Group data for an easier visualization and understanding
###Code
count_accum = 0
gdist_group = {}
bins = 0
gaze_or_not = {}
no_gaze = 0
gaze = 0
# `gs_dist` (consecutive-hit length -> count) is assumed to be available from an earlier run
for key, value in gs_dist.items():
if key <= 0.34:
gdist_group[f"{key:.3f}"] = value
else:
count_accum += value
bins += 1
if key < 0.237:
no_gaze += value
else:
gaze += value
gaze_or_not["saccade"] = no_gaze
gaze_or_not["gaze"] = gaze
gdist_group[">0.33"] = count_accum
print(f"Bins on >0.33: {bins}")
gdist_group
display(gaze_or_not)
plt.pie(gaze_or_not.values(), labels=gaze_or_not.keys(), autopct="%1.1f%%")
plt.title("Saccades vs Gazes")
plt.gcf().set_size_inches(23, 10)
plt.show()
df_gdist = pd.DataFrame(gdist_group, index=["count"]).transpose()
display(df_gdist)
# does not work
# ax = sns.histplot(
# data=df_gdist,
# x=df_gdist.index,
# y="count",
# kde=True,
# discrete=True,
# element="bars",
# stat="density",
# )
# clrs = ["blue" if x <= 0.100 else "green" for x in df_gdist.index.values]
ax = sns.barplot(data=df_gdist, x=df_gdist.index, y="count", color="teal")
# xticks = [a for a in range(25)]
# ax.set(xticks=xticks, xticklabels=xticks)
ax.grid(False, axis="x")
ax.margins(x=0)
plt.title("Frequency of the length of consecutive hits")
plt.gcf().set_size_inches(23, 10)
plt.show()
df_gdist = pd.DataFrame(gdist_group, index=["count"]).transpose()
display(df_gdist)
# does not work
ax = sns.displot(
data=df_gdist,
x=df_gdist.index,
y="count",
kind="hist",
discrete=True,
stat="density",
)
# clrs = ["blue" if x <= 0.100 else "green" for x in df_gdist.index.values]
# ax = sns.barplot(
# data=df_gdist,
# x=df_gdist.index,
# y="count",
# color="teal"
# )
# xticks = [a for a in range(25)]
# ax.set(xticks=xticks, xticklabels=xticks)
# ax.grid(False, axis="x")
# ax.margins(x=0)
plt.title("Frequency of the length of consecutive hits")
plt.gcf().set_size_inches(23, 10)
plt.show()
###Output
_____no_output_____
###Markdown
__Hit Group exploration__ _Center Hit Group_ Unique center hit group values
###Code
display(list(part_col.centerHitGroup.unique()))
###Output
_____no_output_____
###Markdown
Unique center hit group value counts (%)
###Code
chg_counts = part_col.centerHitGroup.value_counts(normalize=True) * 100
chg_vals = chg_counts.index.values
chg_per = pd.DataFrame({"%": chg_counts.values}, index=chg_vals)
chg_per.index.name = "CenterHitGroup"
display(chg_per)
###Output
_____no_output_____
###Markdown
_Box Hit Group_ Unique box hit group values
###Code
display(list(part_col.boxHitGroup.unique()))
bhg_counts = part_col.boxHitGroup.value_counts(normalize=True) * 100
bhg_vals = bhg_counts.index.values
bhg_per = pd.DataFrame({"%": bhg_counts.values}, index=bhg_vals)
bhg_per.index.name = "BoxHitGroup"
display(bhg_per)
###Output
_____no_output_____
###Markdown
_Comparison (center vs box)_
###Code
cbhg_comp = copy.deepcopy(chg_per)
cbhg_comp.index.name = "HitGroup"
cbhg_comp.rename(columns={"%": "Center"}, inplace=True)
cbhg_comp["Box"] = bhg_per["%"]
cbhg_comp.rename(columns={"%": "Box"}, inplace=True)
display(cbhg_comp.transpose())
###Output
_____no_output_____ |
Day_6_Assignment.ipynb | ###Markdown
###Code
#QUESTION-1
#class for BANK ACCOUNT
class BankAccount:
def __init__(self):
self.ownerName="Ranjith"
self.Balance=0
def deposit(self):
Amount=float(input("Enter amount to be Deposited : "))
self.Balance += Amount
print("Amount Deposited is :",Amount)
def withdraw(self):
Amount = float(input("Enter amount to be Withdrawn : "))
if self.Balance >= Amount:
self.Balance -= Amount
print("You have Withdrew :", Amount)
else:
print("Insufficient balance in the account...")
print("---WELCOME TO BACK ACCOUNT PROGRAM---")
BA = BankAccount()
print("Account Holder Name is :",BA.ownerName)
print("Initial Account Balance is :",BA.Balance)
BA.deposit();
BA.withdraw();
print("Net Avaliable balance is : ",BA.Balance)
#QUESTION-2
#PROGRAM FOR CONE'S VOLUME AND SURFACE AREA
import math
pi = math.pi
class cone:
def __init__(self,r,h):
self.r=r
self.h=h
def volume(self):
result = (1 / 3) * pi * self.r * self.r * self.h
print("\nVolume Of Cone is :",result)
def surfacearea(self):
result = pi * self.r * self.h + pi * self.r * self.r
print("\nSurface Area Of Cone is :",result)
ra = float(input("\nEnter the radius of cone : "))
he = float(input("\nEnter the height of cone : "))
c = cone( ra, he)
c.volume()
c.surfacearea()
###Output
Enter the radius of cone : 789
Enter the height of cone : 680
Volume Of Cone is : 443293677.4025508
Surface Area Of Cone is : 3641234.6908093677
###Markdown
Assignment Day 6 Answers (Python Essentials | Batch 7) Name: Shrinidhi A Email Id: [email protected] Question 1 For this challenge, create a bank account class that has two attributes: *ownerName* and *Balance*, and two methods: *deposit* and *withdraw*. As an added requirement, withdrawals may not exceed the available balance. Instantiate your class, make several deposits and withdrawals, and test to make sure the account can't be overdrawn.
###Code
class Bank_Account:
def __init__(self):
self.balance=0
print("Hello!!! Welcome to the Deposit & Withdrawal Machine ")
def deposit(self):
amount=float(input("Enter amount to be Deposited: "))
self.balance += amount
print("Amount Deposited:",amount)
def withdraw(self):
amount = float(input("Enter amount to be Withdrawn: "))
if self.balance>=amount:
self.balance-=amount
print("\n You Withdrew:", amount)
else:
print("\n Insufficient balance ")
def display(self):
print("\n Net Available Balance=",self.balance)
s = Bank_Account()
s.deposit()
s.withdraw()
s.display()
###Output
Hello!!! Welcome to the Deposit & Withdrawal Machine
Enter amount to be Deposited: 1500
Amount Deposited: 1500.0
Enter amount to be Withdrawn: 1500
You Withdrew: 1500.0
Net Available Balance= 0.0
###Markdown
Question 2 For this challenge, create a cone class that has two attributes: r = radius, h = height; and two methods: Volume = (1/3) * π * r² * h; Surface area = base + side, where base = π * r² and side = π * r * √(r² + h²). Make only one class with functions, and import math where required.
###Code
import math
pi = math.pi
def volume(r, h):
return (1 / 3) * pi * r * r * h
def surfacearea(r, s):
return pi * r * s + pi * r * r
radius = float(input("Enter the Radius: "))
height = float(input("Enter the Height: "))
slant_height = float(input("Enter Slant Height: "))
print( "Volume Of Cone : ", volume(radius, height) )
print( "Surface Area Of Cone : ", surfacearea(radius, slant_height) )
###Output
_____no_output_____
###Markdown
Bank account problem
###Code
class banckacc():
def __init__(self,accholder,balance):
self.accholder = accholder
self.balance = balance
print("your account is created")
def deposit(self):
print("Account holder name: ",self.accholder)
print("bakance: ",self.balance)
amt = int(input("Enter amount to deposit : "))
self.balance += amt
print("your balance now is : ",self.balance)
def withdrawal(self):
print("Account holder name: ",self.accholder)
print("bakance: ",self.balance)
amt = int(input("Enter amount to withdraw : "))
if amt>self.balance:
print("Insufficient balance")
else:
self.balance -= amt
print("your remanining balance is: ",self.balance)
acc = banckacc("Dijith nair",500000)
acc.deposit()
acc.withdrawal()
###Output
your account is created
Account holder name: Dijith nair
bakance: 500000
Enter amount to deposit : 1000000
your balance now is : 1500000
Account holder name: Dijith nair
bakance: 1500000
Enter amount to withdraw : 0
your remanining balance is: 1500000
###Markdown
Area problem
###Code
import math
class conearea():
def __init__(self,r,h):
self.r = r
self.h = h
def volume (self):
volume = (1/3)* 3.14 * self.r * self.r *self.h
print("volume of cone :",volume)
def surfarea(self):
base = 3.14 * self.r * self.r
print("base:",base)
side = 3.14 * self.r * math.sqrt(self.r * self.r + self.h*self.h)
print("sides:",side)
surface = base+side
print("area is :",surface)
area= conearea(20,10)
area.volume()
area.surfarea()
###Output
_____no_output_____ |
SC_integrador_scipy.ipynb | ###Markdown
In this tutorial we use the scipy.integrate.odeint routine to integrate the equations of motion of a charged particle (here, an electron) in crossed $\vec{E}\times \vec{B}$ fields
###Code
import scipy as sci
import scipy.integrate
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
#Initial velocity [m/s]
vy0 = 1.0e6
y0 = np.array( [ 0, 0.0, 0.0, 0.0, vy0, 0.0 ] )
# Define the derivative function dY/dt = f(Y, t) for odeint
def funLarmor(Y,t):
x = Y[0]
y = Y[1]
z = Y[2]
vx = Y[3]
vy = Y[4]
vz = Y[5]
    # Physical parameters:
qe = 1.60217662e-19
me = 9.10938356e-31
B0 = 0.1
    # Charge-to-mass ratio (q/m)
qm = -qe/me
    # Electric field E [V/m]
Ex = 0.0
Ey = 100
Ez = 0.0
    # Magnetic field B [T]
Bx = 0.0
By = 0.0
Bz = 1.0e-4
    # Newton-Lorentz equations (in Cartesian coordinates)
ax = qm * Ex + qm*( Bz*vy - By*vz )
ay = qm * Ey + qm*( Bx*vz - Bz*vx )
az = qm * Ez + qm*( By*vx - Bx*vy )
ydot = np.array(( vx, vy, vz, ax, ay, az ))
return ydot
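# Added illustrative check: for the crossed fields defined above, the analytic E x B drift
# speed is |E|/|B| = Ey/Bz = 100 / 1.0e-4 = 1.0e6 m/s (independent of charge and mass);
# the time-averaged x-velocity of the integrated orbit should approach this value.
expected_drift = 100.0 / 1.0e-4
print(f"Expected E x B drift speed: {expected_drift:.2e} m/s")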
# Integration parameters
tmax = 1.1e-6
N = 100
time_span=np.linspace(0,tmax,N)
sol =sci.integrate.odeint(funLarmor,y0,time_span)
sci.integrate.odeint?
plt.figure(figsize = (12, 8))
plt.plot( sol[:,0], sol[:,1], 'ro-', label='SciPy integrator' )
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.legend(loc='lower left')
plt.show()
###Output
_____no_output_____ |
examples/synthetic_network_example.ipynb | ###Markdown
-------------------------------------------------------------**If any part of this notebook is used in your research, please cite with the reference found in [README.md](https://github.com/jGaboardi/tigernetcitations)**.------------------------------------------------------------- Example usage: synthetic lattice network Author: James D. Gaboardi jgaboardi@gmail.com-------------------------------------------------------------
###Code
%config InlineBackend.figure_format = "retina"
%load_ext watermark
%watermark
import tigernet
%load_ext autoreload
%autoreload 2
%matplotlib inline
%watermark -w
%watermark -iv
###Output
Watermark: 2.2.0
json : 2.0.9
tigernet: 0.2.4
###Markdown
-------------------------------------------------------------
###Code
print(dir(tigernet))
###Output
['Network', 'Observations', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'generate_data', 'generate_lattice', 'generate_obs', 'generate_sine_lines', 'get_discard_mtfcc_by_desc', 'get_discard_segms', 'get_mtfcc_types', 'info', 'obs2obs_cost_matrix', 'stats', 'testing_data', 'tigernet', 'utils']
###Markdown
Generate a synthetic lattice
###Code
lattice = tigernet.generate_lattice(n_hori_lines=1, n_vert_lines=1)
lattice.plot()
lattice
print(help(tigernet.Network))
###Output
Help on class Network in module tigernet.tigernet:
class Network(builtins.object)
| Network(s_data, from_raw=False, sid_name='SegID', nid_name='NodeID', geo_col='geometry', len_col='length', xyid='xyid', tnid='TNID', tnidf='TNIDF', tnidt='TNIDT', attr1=None, attr2=None, mtfcc_types=None, mtfcc_discard=None, discard_segs=None, mtfcc_split=None, mtfcc_intrst=None, mtfcc_ramp=None, mtfcc_serv=None, mtfcc_split_by=None, mtfcc_split_grp=None, skip_restr=False, calc_len=False, record_components=False, record_geom=False, largest_component=False, calc_stats=False, def_graph_elems=False)
|
| Methods defined here:
|
| __init__(self, s_data, from_raw=False, sid_name='SegID', nid_name='NodeID', geo_col='geometry', len_col='length', xyid='xyid', tnid='TNID', tnidf='TNIDF', tnidt='TNIDT', attr1=None, attr2=None, mtfcc_types=None, mtfcc_discard=None, discard_segs=None, mtfcc_split=None, mtfcc_intrst=None, mtfcc_ramp=None, mtfcc_serv=None, mtfcc_split_by=None, mtfcc_split_grp=None, skip_restr=False, calc_len=False, record_components=False, record_geom=False, largest_component=False, calc_stats=False, def_graph_elems=False)
| Parameters
| ----------
| s_data : geopandas.GeoDataFrame
| Segments dataframe.
| from_raw : bool
| Input ``s_data`` is raw TIGER/Line Edge data. Default is ``False``.
| sid_name : str
| Segment column name. Default is ``'SegID'``.
| nid_name : str
| Node column name. Default is ``'NodeID'``.
| geo_col : str
| Geometry column name. Default is ``'geometry'``.
| len_col : str
| Length column name. Default is ``'length'``.
| xyid : str
| Combined x-coord + y-coords string ID. Default is ``'xyid'``.
| tnid : str
| TIGER/Line node ID variable used for working with
| TIGER/Line edges. Default is ``'TNID'``.
| tnidf : str
| TIGER/Line 'From Node' variable used for building topology
| in TIGER/Line edges. Default is ``'TNIDF'``.
| tnidt : str
| TIGER/Line 'To Node' variable used for building topology in
| TIGER/Line edges. Default is ``'TNIDT'``.
| attr1 : str
| Auxillary variable being used. Default is ``None``.
| attr2 : str
| Auxillary variable being used. Either ``'TLID'`` for tiger edges
| or ``'LINEARID'`` for tiger roads. Default is ``None``.
| mtfcc_types : dict
| MTFCC road type descriptions. Default is ``None``.
| from [utils.get_mtfcc_types()]
| mtfcc_discard : list
| MTFCC types (by code) to discard. Default is ``None``.
| from [utils.get_discard_mtfcc_by_desc()]
| discard_segs : list
| specifc segment ids to discard. Default is ``None``.
| from [utils.discard_troublemakers()]
| mtfcc_split : str
| MTFCC codes for segments to weld and then split during the
| line splitting process. Default is ``None``.
| mtfcc_intrst : str
| MTFCC codes for interstates. Default is ``None``.
| mtfcc_ramp : str
| MTFCC codes for on ramps. Default is ``None``.
| mtfcc_serv : str
| MTFCC codes for service drives. Default is ``None``.
| mtfcc_split_by : list
| MTFCC codes to eventually split the segments of
| `mtfcc_no_split` with. Default is ``None``.
| mtfcc_split_grp : str
| After subseting this road type, group by this attribute
| before welding. Default is ``None``.
| skip_restr : bool
| Skip re-welding restricted segments. Default is ``False``.
| calc_len : bool
| Calculate length and add column. Default is ``False``.
| record_components : bool
| Record connected components in graph. This is used for teasing out the
| largest connected component. Default is ``False``.
| record_geom : bool
| Create associated between IDs and shapely geometries.
| Default is ``False``.
| largest_component : bool
| Keep only the largest connected component in the graph. Default is ``False``.
| calc_stats : bool
| Calculate network stats. Default is ``False``.
| def_graph_elems : bool
| Define graph elements. Default is ``False``.
|
| Attributes
| ----------
| segm2xyid : dict
| Segment to xyID lookup.
| node2xyid : dict
| Node to xyID lookup.
| segm2node : dict
| Segment to node lookup.
| node2segm : dict
| Node to segment lookup.
| segm2segm : dict
| Segment to segment lookup.
| node2node : dict
| Node to node lookup.
| segm_cc : dict
| Root segment ID to connected component segment IDs lookup.
| cc_lens : dict
| Root segment ID to connected component length lookup.
| node_cc : dict
| Root node ID to connected component node IDs lookup.
| largest_segm_cc : dict
| Root segment ID to largest connected component segment IDs lookup.
| largest_node_cc : dict
| Root node ID to largest connected component node IDs lookup.
| n_ccs : int
| The number of connected components.
| s_ids : list
| Segment IDs.
| n_ids : list
| Node IDs.
| n_segm : int
| Network segment count.
| n_node : int
| Network node count.
| segm2len : dict
| Segment to segment length lookup.
| network_length : float
| Full network length.
| node2degree : dict
| Node to node degree lookup.
| segm2tlid : dict
| Segment to TIGER/Line ID lookup.
| segm2elem : dict
| Segment to network element lookup.
| node2elem : dict
| Node to network element lookup.
| diameter : float
| The longest shortest path between two nodes in the network.
| radius : float
| The shortest path between two nodes in the network.
| d_net : float
| Cumulative network diameter.
| d_euc : float
| Cumulative euclidean diameter.
| circuity : float
| Network circuity. See ``stats.circuity()``.
| n2n_matrix : numpy.array
| All node-to-node shortest path lengths in the network.
| n2n_paths : dict
| All node-to-node shortest paths in the network.
| max_sinuosity : float
| Maximum segment sinuosity.
| min_sinuosity : float
| Minimum segment sinuosity.
| net_mean_sinuosity : float
| Network segment sinuosity mean.
| net_std_sinuosity : float
| Network segment sinuosity standard deviation.
| max_node_degree : int
| Maximum node degree.
| min_node_degree : int
| Minimum node degree.
| mean_node_degree : float
| Network node degree mean.
| std_node_degree : float
| Network node degree standard deviation.
| alpha : float
| Network alpha measure. See ``stats.connectivity()``.
| beta : float
| Network beta measure. See ``stats.connectivity()``.
| gamma : float
| Network gamma measure. See ``stats.connectivity()``.
| eta : float
| Network eta measure. See ``stats.connectivity()``.
| entropies_{} : dict
| Segment/Node ID to {variable/attribute} entropies.
| network_entropy_{} : float
| Network {variable/attribute} entropy.
| corrected_rings : int
| Number of corrected rings in the network.
| lines_split : int
| Number of split lines in the network.
| welded_mls : int
| Number of welded multilinestrings in the network.
| segm2geom : dict
| Segment to geometry lookup.
| node2geom : dict
| Node to geometry lookup.
| segm2coords : dict
| Segment to endpoint coordinates lookup.
| node2coords : dict
| Node to coordinates lookup.
|
| Examples
| --------
|
| >>> import tigernet
| >>> lat = tigernet.generate_lattice(n_hori_lines=1, n_vert_lines=1)
| >>> net = tigernet.Network(s_data=lat)
| >>> net.s_data[["SegID", "MTFCC", "length", "xyid", "s_neigh", "n_neigh"]]
| SegID MTFCC length xyid s_neigh n_neigh
| 0 0 S1400 4.5 ['x4.5y0.0', 'x4.5y4.5'] [1, 2, 3] [0, 1]
| 1 1 S1400 4.5 ['x4.5y4.5', 'x4.5y9.0'] [0, 2, 3] [1, 2]
| 2 2 S1400 4.5 ['x0.0y4.5', 'x4.5y4.5'] [0, 1, 3] [1, 3]
| 3 3 S1400 4.5 ['x4.5y4.5', 'x9.0y4.5'] [0, 1, 2] [1, 4]
|
| >>> net.n_data[["NodeID", "xyid", "s_neigh", "n_neigh", "degree"]]
| NodeID xyid s_neigh n_neigh degree
| 0 0 ['x4.5y0.0'] [0] [1] 1
| 1 1 ['x4.5y4.5'] [0, 1, 2, 3] [0, 2, 3, 4] 4
| 2 2 ['x4.5y9.0'] [1] [1] 1
| 3 3 ['x0.0y4.5'] [2] [1] 1
| 4 4 ['x9.0y4.5'] [3] [1] 1
|
| >>> net.segm2xyid[0]
| ['x4.5y0.0', 'x4.5y4.5']
|
| >>> net.node2xyid[0]
| ['x4.5y0.0']
|
| >>> net.segm2node[3]
| [1, 4]
|
| >>> net.node2segm[4]
| [3]
|
| >>> net.segm2segm[3]
| [0, 1, 2]
|
| >>> net.node2node[4]
| [1]
|
| build_associations(self, record_geom=False)
| Associate graph elements with geometries, coordinates,
| segment lengths, node degrees, and other information.
|
| Parameters
| ----------
| record_geom : bool
| Create an ID-to-geometry lookup (``True``). Default is ``False``.
|
| build_base(self, s_data)
| Extract nodes from segment endpoints and relate
| segments and nodes to a location ID (``xyid``).
|
| Parameters
| ----------
| s_data : geopandas.GeoDataFrame
| Segments data.
|
| build_components(self, largest_cc=False)
| Find the rooted connected components of the graph (either largest or longest).
| *** Must choose either largest or longest. If both ``largest_cc`` and
| ``longest_cc`` are ``True``, ``largest_cc`` will be selected by default. ***
|
| Parameters
| ----------
| largest_cc : bool
| Keep only the largest connected component (the most
| edges/nodes) in the graph. Default is ``False``.
|
| build_network(self, s_data, record_components=False, record_geom=False, largest_component=False, def_graph_elems=False)
| Top-level method for full network object creation from a
| geopandas.GeoDataFrame of lines.
|
| Parameters
| ----------
| s_data : geopandas.GeoDataFrame
| Segments data.
| record_components : bool
| Find rooted connected components in the network (``True``),
| or ignore (``False``). Default is ``False``.
| largest_component : bool
| Keep only the largest connected compnent of the network
| (``True``), or keep all components (``False``). Default is ``False``.
| record_geom : bool
| Create an id to geometry lookup (``True``), or ignore (``False``).
| Default is ``False``.
| def_graph_elems : bool
| Define each element of the graph as either a branch
| [connected to two or more other elements], or a leaf
| [connected to only one other element] (``True``), or ignore
| (``False``). Default is ``False``.
|
| build_topology(self)
| Relate all graph elements.
|
| calc_entropy(self, ent_col, frame_name)
| Network entropy statistics. For descriptions see ``stats.entropies()``.
|
| Parameters
| ----------
| ent_col : str
| The column name in ``frame_name`` to calculate entropy on.
| frame_name : str
| The name of the network element dataframe.
|
| calc_net_stats(self, conn_stat=None)
| Calculate network analyis descriptive statistics.
|
| Parameters
| ----------
| conn_stat : {None, str}
| Either ``'alpha'``, ``'beta'``, ``'gamma'``, ``'eta'``.
| Set to ``'all'`` toc calculate all available statistics.
| For descriptions see ``stats.connectivity()``.
|
| cost_matrix(self, wpaths=False, asattr=True)
| Network node-to-node cost matrix calculation with options for generating
| shortest paths along tree. For best results the network should be simplified
| prior to running this method.
|
| Parameters
| ----------
| wpaths : bool
| Generate shortest paths tree. Default is ``False``.
| asattr : bool
| Set ``n2n_matrix`` and ``paths`` as attributes of ``Network`` if ``True``,
| otherwise return them. Default is ``True``.
|
| Returns
| -------
| n2n_matrix : numpy.ndarray
| Shortest path costs between all nodes.
| paths : dict
| Graph traveral paths.
|
| define_graph_elements(self)
| Define all segments and nodes as either a leaf (incident with one other
| element) or a branch (incident with more than one other graph element).
|
| nodes_kdtree(self, only_coords=False)
| Build a kdtree from the network node coords for observations lookup.
|
| Parameters
| ----------
| only_coords : bool
| Flag for only coordinated being passed in.
|
| Returns
| -------
| kdtree : scipy.spatial.kdtree.KDTree
| All network nodes lookup.
|
| simplify_network(self, record_components=False, largest_component=False, record_geom=False, def_graph_elems=False, inplace=False)
| Remove all non-articulation points in the network.
|
| Parameters
| ----------
| record_components : bool
| Record connected components in graph. This is used for teasing out the
| largest connected component. Default is ``False``.
| largest_component : bool
| Keep only the largest connected component in the graph. Default is ``False``.
| record_geom : bool
| Create associated between IDs and shapely geometries. Default is ``False``.
| def_graph_elems : bool
| Define graph elements. Default is ``False``.
| inplace : bool
| Overwrite the original network with the simplified. Default is ``False``.
|
| Returns
| -------
| simp_net : geopandas.GeoDataFrame
| The simplified network (if ``inplace`` is set to ``False``).
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
None
###Markdown
------------------------------------------------------------- Create a network instance & examine its attributes
###Code
network = tigernet.Network(
s_data=lattice,
record_components=True,
record_geom=True,
largest_component=False,
def_graph_elems=True
)
print(dir(network))
network.s_data
network.n_data
skws = {"color": "k", "alpha": .5, "zorder": 0}
nkws = {"color": "r", "markersize": 90, "alpha": .5, "ec":"k", "zorder": 1}
ax = network.s_data.plot(figsize=(6,6), **skws)
network.n_data.plot(ax=ax, **nkws);
###Output
_____no_output_____
###Markdown
Network element IDs
###Code
network.s_ids, network.n_ids
###Output
_____no_output_____
###Markdown
Network segment count and length
###Code
network.n_segm, network.network_length
###Output
_____no_output_____
###Markdown
Network node count and degrees
###Code
network.n_node, network.node2degree
###Output
_____no_output_____
###Markdown
Network connected components
###Code
network.n_ccs, network.segm_cc, network.node_cc
###Output
_____no_output_____
###Markdown
Network element topology x->x
###Code
network.segm2segm, network.node2node
###Output
_____no_output_____
###Markdown
x->y
###Code
network.segm2node, network.node2segm
###Output
_____no_output_____
###Markdown
Network element type
###Code
network.segm2elem, network.node2elem
###Output
_____no_output_____
###Markdown
Simplify the network (no effect in this case)
###Code
network.simplify_network(inplace=True)
###Output
_____no_output_____
###Markdown
------------------------------------------------------------- Generate a network cost matrix with shortest path trees
###Code
network.cost_matrix(asattr=True, wpaths=True)
network.n2n_matrix
network.n2n_paths
###Output
_____no_output_____
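###Markdown
As a quick added check (not part of the original notebook), the matrix can be indexed directly by node IDs. On the lattice above, leaf nodes 0 and 2 sit two 4.5-length segments apart, so their network distance should be 9.0; the layout of the paths dictionary can be inspected the same way.
###Code
# Added sketch: look up one node-to-node cost and peek at the paths structure.
print(network.n2n_matrix[0, 2])  # expected: 9.0 (two 4.5-length segments via node 1)
print(list(network.n2n_paths.items())[:1])  # inspect how the shortest-path tree is keyed
###Output
_____no_output_____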
###Markdown
------------------------------------------------------------- Generate synthetic observations
###Code
observations = tigernet.generate_obs(5, network.s_data, seed=404)
observations["obs_id"] = ["a", "b", "c", "d", "e"]
observations
okws = {"color": "b", "markersize": 60, "alpha": .75, "ec":"k", "zorder": 2}
ax = network.s_data.plot(figsize=(6,6), **skws)
network.n_data.plot(ax=ax, **nkws)
observations.plot(ax=ax, **okws);
###Output
_____no_output_____
###Markdown
------------------------------------------------------------- Associate the observations with the network
###Code
network_observations = tigernet.Observations(
network,
observations.copy(),
df_name="obs1",
df_key="obs_id",
)
print(dir(network_observations))
network_observations.snapped_points
network_observations.obs2segm
bkws = {"color": "y", "markersize": 30, "alpha": .85, "ec":"k", "zorder": 3}
ax = network.s_data.plot(figsize=(6,6), **skws)
network.n_data.plot(ax=ax, **nkws)
network_observations.df.plot(ax=ax, **okws)
network_observations.snapped_points.plot(ax=ax, **bkws);
###Output
_____no_output_____
###Markdown
------------------------------------------------------------- Generate a cost matrix between all observations
###Code
print(help(tigernet.obs2obs_cost_matrix))
tigernet.obs2obs_cost_matrix(network_observations, network)
###Output
_____no_output_____ |
module3-understanding-linear-regression/Unit_2_Sprint_1_Module_3_Assignment.ipynb | ###Markdown
###Code
# Import New York Rent Data
LOCAL = '../data/nyc/nyc-rent-2016.csv'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv'
import pandas as pd
import numpy as np
df = pd.read_csv(WEB)
assert df.shape == (48300, 34)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score, mean_squared_error
import matplotlib.pyplot as plt
# Create a column for the total perks
perk_cols = ['elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',
'doorman', 'dishwasher', 'no_fee', 'laundry_in_building',
'fitness_center', 'pre-war', 'laundry_in_unit', 'roof_deck',
'outdoor_space', 'dining_room', 'high_speed_internet', 'balcony',
'swimming_pool', 'new_construction', 'exclusive', 'terrace',
'loft', 'garden_patio', 'common_outdoor_space',
'wheelchair_access']
df['perk_count'] = df[perk_cols].sum(axis=1)
# Encode the interest_level column with 1, 2, and 3
int_level = {'low': 1, 'medium': 2, 'high': 3}
df['interest_level'] = df['interest_level'].replace(int_level)
# Change Created column to a datetime
df['created'] = pd.to_datetime(df['created'])
# Test Train Split
train = df[df['created'].dt.month < 6]
test = df[df['created'].dt.month == 6]
features = ['perk_count', 'interest_level', 'bedrooms',
'bathrooms', 'latitude', 'longitude']
target = 'price'
# Create model and fit training data to it
model = LinearRegression()
model.fit(train[features], train[target])
# Get predictions from test data with model
y_pred = model.predict(test[features])
coef = model.coef_
intercept = model.intercept_
print(coef, intercept)
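# Added sketch: pair each coefficient with its feature name so the fitted linear model
# is easier to interpret (uses the `features` list defined above).
coef_table = pd.Series(coef, index=features)
print(coef_table.sort_values())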
# MAE
mae = mean_absolute_error(test['price'], y_pred)
r2 = r2_score(test['price'], y_pred)
mse = mean_squared_error(test['price'], y_pred)
smse = np.sqrt(mse)
print('Mean Absolute Error: ', mae)
print('R^2 Score: ', r2)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ', smse)
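# Added sketch: compare against a naive baseline that always predicts the training-set
# mean rent, to put the model's MAE in context.
baseline_pred = np.full(len(test), train[target].mean())
print('Baseline Mean Absolute Error: ', mean_absolute_error(test[target], baseline_pred))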
# https://stackoverflow.com/a/47230966
# Plotly notebook mode with google colaboratory
# You need to define this function
# And call it in each offline plotting cell
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-latest.min.js?noext',
},
});
</script>
'''))
import itertools
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
def viz3D(fitted_model, X, features, target='', num=100):
"""
Visualize model predictions in 3D, for regression model fit on 2 features
Parameters
----------
fitted_model : scikit-learn model, already fitted
X : pandas dataframe, which was used to fit model
features : list of strings, column names of the 2 features
target : string, name of target
num : int, number of grid points for each feature
References
----------
https://plot.ly/python/3d-charts/
"""
feature1, feature2 = features
min1, max1 = X[feature1].min(), X[feature1].max()
min2, max2 = X[feature2].min(), X[feature2].max()
x1 = np.linspace(min1, max1, num)
x2 = np.linspace(min2, max2, num)
combos = list(itertools.product(x1, x2))
Z = fitted_model.predict(combos).reshape(num, num)
configure_plotly_browser_state()
data = [go.Surface(x=x1, y=x2, z=Z)]
layout = go.Layout(
scene={'xaxis': {'title': feature1, 'range': [min1,max1], 'showticklabels': True},
'yaxis': {'title': feature2, 'range': [min2,max2], 'showticklabels': True},
'zaxis': {'title': target, 'showticklabels': True}},
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
# Fit a separate model on just these two features so the 3D prediction surface can be plotted
features = ['perk_count', 'interest_level']
X = df[features]
model_2f = LinearRegression()
model_2f.fit(train[features], train[target])
viz3D(model_2f, X, features, target)
###Output
_____no_output_____ |
lab1/Lab_1_04_solution.ipynb | ###Markdown
Sklearn library.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets #various toy datasets
from sklearn import metrics #check the accuracy of a model
from sklearn.linear_model import LogisticRegression #a supervised classification algorithm
diabetes = datasets.load_diabetes()
print(diabetes)
X = diabetes.data
y= diabetes.target
print("Feature names:", diabetes.feature_names)
print("\nFirst 10 rows of X:\n", X[:10])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state=1)
print(X_train,X_test, y_train, y_test)
###Output
[[ 0.01991321 -0.04464164 0.00457217 ... -0.03949338 -0.02364456
-0.04664087]
[ 0.05987114 0.05068012 0.0164281 ... -0.00259226 -0.00239668
-0.02178823]
[-0.0854304 -0.04464164 0.02073935 ... -0.00259226 -0.02364456
0.00306441]
...
[-0.0854304 0.05068012 -0.03099563 ... -0.03949338 -0.09643322
-0.03421455]
[ 0.06713621 0.05068012 -0.01482845 ... 0.01290621 -0.00514531
0.04862759]
[-0.00914709 -0.04464164 0.01103904 ... -0.03949338 0.01703713
-0.0052198 ]] [[ 0.04170844 -0.04464164 -0.03207344 ... -0.00997249 0.04506617
-0.05906719]
[-0.07816532 -0.04464164 -0.04069594 ... -0.0763945 -0.02028875
-0.05078298]
[-0.07090025 -0.04464164 0.09295276 ... 0.00035983 -0.05454415
-0.0010777 ]
...
[-0.03457486 0.05068012 -0.05578531 ... -0.03949338 -0.05295879
0.02791705]
[ 0.00538306 0.05068012 0.0347509 ... 0.18523444 0.01556684
0.07348023]
[ 0.03444337 0.05068012 0.11127556 ... -0.00259226 0.02801651
0.07348023]] [ 59. 225. 246. 310. 219. 166. 53. 151. 124. 53. 67. 120. 103. 71.
48. 187. 252. 288. 178. 131. 220. 51. 150. 261. 174. 259. 257. 236.
93. 61. 53. 272. 60. 44. 125. 49. 208. 182. 144. 84. 248. 123.
128. 303. 128. 65. 141. 104. 127. 104. 108. 272. 136. 197. 168. 235.
182. 96. 275. 259. 129. 264. 297. 89. 70. 232. 245. 152. 113. 257.
110. 65. 110. 341. 83. 132. 281. 243. 252. 131. 242. 128. 42. 263.
75. 321. 142. 87. 102. 128. 202. 246. 173. 53. 225. 185. 113. 49.
258. 90. 280. 184. 265. 99. 103. 275. 73. 198. 121. 248. 154. 296.
52. 139. 200. 140. 64. 279. 258. 102. 179. 135. 47. 40. 182. 237.
86. 142. 138. 90. 198. 78. 274. 72. 61. 90. 116. 118. 96. 196.
292. 91. 101. 87. 150. 191. 275. 37. 65. 72. 265. 273. 68. 118.
42. 158. 200. 116. 64. 126. 72. 141. 59. 308. 150. 50. 164. 230.
185. 88. 55. 198. 292. 85. 148. 47. 67. 134. 262. 60. 233. 45.
80. 85. 70. 210. 310. 186. 54. 200. 99. 122. 74. 281. 95. 71.
230. 77. 242. 202. 72. 71. 171. 31. 197. 81. 306. 114. 137. 170.
92. 265. 206. 293. 142. 201. 183. 129. 173. 229. 209. 248. 145. 77.
220. 69. 75. 163. 68. 190. 191. 63. 48. 317. 55. 77. 177. 263.
160. 155. 242. 113. 25. 91. 258. 168. 221. 310. 283. 81. 94. 277.
72. 270. 268. 174. 96. 83. 222. 69. 153. 202. 43. 124. 276.] [ 78. 152. 200. 59. 311. 178. 332. 132. 156. 135. 220. 233. 91. 51.
195. 109. 217. 94. 89. 111. 129. 181. 168. 97. 115. 202. 84. 147.
253. 144. 262. 115. 68. 65. 252. 212. 142. 215. 180. 163. 151. 283.
66. 83. 214. 189. 302. 93. 178. 241. 52. 144. 102. 200. 232. 97.
109. 55. 63. 98. 88. 233. 235. 97. 243. 59. 138. 220. 137. 72.
109. 71. 74. 219. 196. 170. 199. 71. 155. 52. 63. 88. 97. 100.
64. 107. 49. 60. 346. 104. 259. 143. 190. 104. 77. 141. 214. 51.
175. 167. 90. 39. 160. 101. 180. 69. 281. 281. 214. 96. 146. 268.
249. 55. 107. 172. 162. 134. 48. 150. 63. 245. 237. 185. 131. 144.
79. 127. 216. 90. 178. 122. 92. 270. 172. 78. 94. 131. 109. 52.
275. 101. 85. 111. 95. 111. 178. 151. 164. 58. 217. 244. 161. 170.
200. 179. 91. 277. 57. 143. 84. 85. 140. 88. 206. 295. 181. 192.
42. 158. 121. 66. 118. 139. 39. 84. 336.]
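###Markdown
A hedged sketch (not part of the original lab): the split above can be followed by fitting and scoring a model. Since the diabetes target is continuous, a LinearRegression is used here instead of the LogisticRegression imported earlier; the variable names below are illustrative.
###Code
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, y_train)           # fit on the training split
y_pred = reg.predict(X_test)        # predict on the held-out split
print("R^2 on test split:", metrics.r2_score(y_test, y_pred))
print("MAE on test split:", metrics.mean_absolute_error(y_test, y_pred))
###Output
_____no_output_____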
|
S1_B_Py_controlOfExecution.ipynb | ###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You can not be an effective programmer, if you can not master the concept of control of execution when writing a code. I will introduce three main schemes:1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer what part of a code to execute depending if an event is true or false.
###Code
from math import sqrt #getting function sqrt from the a free library math
value=100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print ('Result', rootValue) # must be at the same indentation level; the print stays inside the branch whose result you want
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
Result 10.0
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omitted the whole **else** section, the program will still run, but it will neither send any message nor value when the input is invalid.When condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8 #using parentheses with &
#Always use () with & in conditions!!!
if (value <= 10) & (value%2==0) : #% is the modulo operator - it gives the remainder of value divided by 2
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8 #using the 'and' keyword instead of parentheses and &
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parenthesis with the '&' (or that family)
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for element in values: # for each value in values...
print(sqrt(element)) # do this
#you can use different names: element, value, x, etc.
#produces the separate values, not a list
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base. The package **math** took care of that.You do not need to show each result, you could save the results.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code does NOT control the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python gives an error ('ValueError'), it is because _sqrt_ is not defined for negative values; then the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100, 16, 64, -2]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None) #None is a good choice since it marks a missing value; using text like "N/A" instead would cause trouble later because it is a string.
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
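###Markdown
A hedged aside (not in the original notebook): the same count can be written as a one-line generator expression, which previews the comprehensions introduced further below.
###Code
values=[9,25,-100,144,-72]
sum(1 for value in values if value < 0)   # counts the negative values
###Output
_____no_output_____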
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # this is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
currentPosition+=1 # be careful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
-100
-72
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has particular functions to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python is giving different types of **errors** (*Type* and *Value*), let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
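###Markdown
A small hedged variant (not in the original notebook): if both error types should be handled the same way, they can be caught in a single **except** clause.
###Code
values=[10,-10,'10']
for value in values:
    try:
        print(sqrt(value))
    except (ValueError, TypeError):   # one handler for both error types
        print(value, 'could not be processed')
###Output
_____no_output_____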
###Markdown
[Go to page beginning](beginning)____ ComprehensionsPython has implemented ways to create data structures using a technique called comprehensions (R can not do that).Comprehensions are like loops. As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension - creating the loop with 'for' and saving the result at the same time
#It is a list because we use []. We can also create tuples, dicts, etc., depending on how we want to save our results.
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
#you cannot build a tuple by appending in a loop, because you cannot change the elements of a tuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
newDict.items() #try this at home and check what elements are in the result and play with the rest
[[key,value] for key,value in newDict.items()]
#key and value are not special names; we could use x and y, or any other names
#Also:
[[item[0], item[1]] for item in newDict.items()] #indexing into each (key, value) tuple
[[x,y] for (x,y) in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers)) #list of tuples
#zip function produces tuples
###Output
_____no_output_____
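###Markdown
A hedged aside (not in the original notebook): the same parallel association can feed a dict directly.
###Code
dict(zip(letters,numbers))   # pairs each letter with its number: {'a': 10, 'b': 20, 'c': 30}
###Output
_____no_output_____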
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
#import numpy as np in case it doesn't work
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
#numbers**2 will not work because numbers is a plain list (a container); that's why we first turn it into a vector with np.array(numbers)**2
#np.array gives a mathematical structure, so we can apply the exponent elementwise
#other option (creating doubles as a vector with np.array)
doubles=np.array(numbers)**2
[(number,double) for number,double in zip(numbers,doubles)]
#other option - no need to use zip now
[(number,number**2) for number in numbers]
###Output
_____no_output_____
###Markdown
Class exercises:Make a function that:1. Create a data frame with this: 2. Create a list of tuples, where each tuple is a pair (name,country), using comprehensions
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You can not be an effective programmer, if you can not master the concept of control of execution when writing a code. I will introduce three main schemes:1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer what part of a code to execute depending if an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
_____no_output_____
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omitted the whole **else** section, the program will still run, but it will neither send any message nor value when the input is invalid.When condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
Notice what happens if you do not use parenthesis with the '&' (or that family)
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
_____no_output_____
###Markdown
Notice that Python does not have a *sqrt* function in its base. The package **math** took care of that.You do not need to show each result, you could save the results.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code does NOT control the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python gives an error ('ValueError'), it is because _sqrt_ is not defined for negative values; then the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
_____no_output_____
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # ithis is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
currentPosition+=1 # becareful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
_____no_output_____
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has particular functions to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python is giving different types of **errors** (*Type* and *Value*), let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ ComprehensionsPython has implemented ways to create data structures using a technique called comprehensions (R can not do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dic comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
###Output
_____no_output_____
###Markdown
Class exercises:Make a function that:1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You can not be an effective programmer, if you can not master the concept of control of execution when writing a code. I will introduce three main schemes:1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer what part of a code to execute depending if an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
_____no_output_____
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omitted the whole **else** section, the program will still run, but it will neither send any message nor value when the input is invalid.When condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parenthesis with the '&' (or that family)
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base. The package **math** took care of that.You do not need to show each result, you could save the results.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code does NOT control the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python gives an error ('ValueError'), it is because _sqrt_ is not defined for negative values; then the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # ithis is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
currentPosition+=1 # becareful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
-100
-72
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has particular functions to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python is giving different types of **errors** (*Type* and *Value*), let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
###Markdown
[Go to page beginning](beginning)____ ComprehensionsPython has implemented ways to create data structures using a technique called comprehensions (R can not do that). As **lists** are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension, loop and save value at the same time
rootsInList
###Output
_____no_output_____
###Markdown
As **tuples** are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension, if you use loop and tuple you can't change a tuple.
rootsInTuple
###Output
_____no_output_____
###Markdown
**Dicts** can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dic comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()] # input is a dict, output is a list, every item in the new list is a list
[[item[0],item[1]] for item in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create **tuples** using parallel association; it pairs up lists that have the same length:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers)) # each item in the new list is a tuple
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,lalala) for number,lalala in zip(numbers,np.array(numbers)**2)]
# what's the meaning of that?
zip(numbers,np.array(numbers)**2)
numbers
np.array(numbers)**2
###Output
_____no_output_____
###Markdown
Class exercises:Make a function that:1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
#
excise_data={'names':names, 'woman':woman,'ages':ages, 'country':country, 'education':education}
excise_data
#
from pandas import DataFrame as df # importing the DataFrame class from the library and renaming it
friends=df.from_dict(excise_data)
friends
###Output
_____no_output_____
###Markdown
2. Create a list of tuples, where each tuple is a pair (name,country), using comprehensions
###Code
list(zip(names,country))
###Output
_____no_output_____
###Markdown
3. Implement a _for_ loop to count how many peruvian there are in the data frame. Try using **not** in one solution and **~** in another one.
###Code
CurrentCount=0
for lalala in friends.country:
if lalala =='Peru':
CurrentCount+=1
CurrentCount
###Output
_____no_output_____
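###Markdown
A hedged sketch (not part of the original solution) of the **not** and **~** variants the exercise asks for: `not` works on plain Python booleans inside the loop, while `~` should be applied to the pandas boolean Series built from the `friends` data frame above.
###Code
# with `not` in a plain loop
count_peru = 0
for c in friends.country:
    if not (c != 'Peru'):
        count_peru += 1
count_peru
# with `~` on a pandas boolean Series
int((~(friends.country != 'Peru')).sum())
###Output
_____no_output_____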
###Markdown
4. Implement a _for_ loop to get the count of men. Try using **not** in one solution and **~** in another one.
###Code
CurrentCount=0
for biu in friends.woman:
if biu ==False:
CurrentCount+=1
CurrentCount
###Output
_____no_output_____
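###Markdown
A hedged sketch (not part of the original solution) of the **not** and **~** variants for counting men, reusing the `friends` data frame built above.
###Code
# with `not` on plain Python booleans
count_men = 0
for w in friends.woman:
    if not w:
        count_men += 1
count_men
# with `~` on the pandas boolean Series (woman is a bool column)
int((~friends.woman).sum())
###Output
_____no_output_____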
###Markdown
Solve this in a new Jupyter notebook, and then upload it to GitHub. Name the notebook as 'ex_controlOfEx'. Homework 1. Implement a _for_ loop to get the count of men that have a Bach degree in the data frame. I recommend the use of **zip** (somwehere)
###Code
list(zip(woman,education))
CurrentCount=0
for w,edu in zip(woman,education):   # new loop variable names so the original lists are not overwritten
    if (not w) and edu == 'Bach':
        CurrentCount+=1
CurrentCount
###Output
_____no_output_____
###Markdown
2. Implement a _for_ loop to get the count of people whose current age is an even number.
###Code
EvenCount=0
for old in ages:
if old %2==0 :
EvenCount+=1
EvenCount
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You can not be an effective programmer, if you can not master the concept of control of execution when writing a code. I will introduce three main schemes:1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer what part of a code to execute depending if an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
Sorry, I do not compute square roots of negative numbers
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omitted the whole **else** section, the program will still run, but it will neither send any message nor value when the input is invalid.When condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parenthesis with the '&' (or that family)
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base. The package **math** took care of that.You do not need to show each result, you could save the results.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code does NOT control the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python gives an error ('ValueError'), it is because _sqrt_ is not defined for negative values; then the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # ithis is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
currentPosition+=1 # becareful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
-100
-72
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has particular functions to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python is giving different types of **errors** (*Type* and *Value*), let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
###Markdown
[Go to page beginning](beginning)____ ComprehensionsPython has implemented ways to create data structures using a technique called comprehensions (R can not do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(x) for x in values] #List comprehension
rootsInList
#comprehension is in brackets.
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
#Notes
#creating everything first saving it later as a tuple.
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dic comprehension
rootsInDict
#Notes
#key is value
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
#[[x,y] for x,y in newDict.items()]
#If we use only newDict.items(), each element is a (key, value) tuple;
#here we turn each tuple into a list of its two elements.
[ [item[0],item[1]] for item in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
#Notes
#each combination is producing a tuple
#this produces a list of tuples
#A list is a container; we need a mathematical structure (a NumPy array/vector) in order to turn the list into something we can do math on.
#np.array(numbers)**2
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
import numpy as np
#[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
doubles = np.array(numbers)**2
[(number, double) for number, double in zip(numbers,doubles)]
#With doubles precomputed, the zip call is easier to read.
#np.array is transforming the list into a mathematical vector.
#for each number in numbers, we create a tuple with two different numbers.
###Output
_____no_output_____
###Markdown
Class exercises:Make a function that:1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You can not be an effective programmer, if you can not master the concept of control of execution when writing a code. I will introduce three main schemes:1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer what part of a code to execute depending if an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
_____no_output_____
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omitted the whole **else** section, the program will still run, but it will neither send any message nor value when the input is invalid.When condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
Notice what happens if you do not use parenthesis with the '&' (or that family)
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
_____no_output_____
###Markdown
Notice that Python does not have a *sqrt* function in its base. The package **math** took care of that.You do not need to show each result, you could save the results.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code does NOT control the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python gives an error ('ValueError'), it is because _sqrt_ is not defined for negative values; then the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
_____no_output_____
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # ithis is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
currentPosition+=1 # becareful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
_____no_output_____
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
_____no_output_____
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has dedicated error-handling statements (try/except) to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python gives different types of **errors** (*TypeError* and *ValueError*); let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Comprehensions Python has implemented ways to create data structures using a technique called comprehensions (R cannot do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
###Output
_____no_output_____
###Markdown
Class exercises: Make a function that: 1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You cannot be an effective programmer if you cannot master the concept of control of execution when writing code. I will introduce three main schemes: 1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer which part of the code to execute depending on whether an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
Sorry, I do not compute square roots of negative numbers
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omit the whole **else** section, the program will still run, but it will neither send any message nor return a value when the input is invalid. When the condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parentheses with the '&' (or that family):
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base; the package **math** took care of that. You do not need to show each result; you could save the results instead.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code is NOT controlling the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python raised an error ('ValueError') because _sqrt_ is not defined for negative values, so the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You can use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
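###Markdown
As a quick aside (a minimal sketch added for illustration, not part of the original lesson), this is how you would usually *test* for the two kinds of missing values: `is None` for None, and `np.isnan()` (or `pandas.isna()`, which accepts both) for nan:
###Code
import numpy as np
import pandas as pd
x = None
y = np.nan
# 'is None' is the idiomatic test for None
print(x is None, y is None)
# np.isnan() works on numeric values such as nan
print(np.isnan(y))
# pandas.isna() accepts both None and nan
print(pd.isna(x), pd.isna(y))
###Output
_____no_output_____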
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # this is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
    currentPosition+=1 # be careful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
-100
-72
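###Markdown
As a side note (an addition for illustration, not in the original lesson), Python's built-in `enumerate()` can replace the manual position counter; this minimal sketch produces the same list of positions:
###Code
values=[9,25,-100,144,-72]
# enumerate() yields (position, value) pairs, so no manual accumulator is needed
positionInvalids=[pos for pos,value in enumerate(values) if value<0]
positionInvalids
###Output
_____no_output_____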
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has dedicated error-handling statements (try/except) to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python gives different types of **errors** (*TypeError* and *ValueError*); let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
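###Markdown
An optional sketch (added here for illustration) showing how the same error handling can be wrapped in a reusable function that returns None for any invalid input:
###Code
from math import sqrt
def safe_sqrt(value):
    # return the square root, or None when the input is negative or not a number
    try:
        return sqrt(value)
    except (ValueError, TypeError):
        return None
[safe_sqrt(v) for v in [10,-10,'10']]
###Output
_____no_output_____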
###Markdown
[Go to page beginning](beginning)____ Comprehensions Python has implemented ways to create data structures using a technique called comprehensions (R cannot do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
###Output
_____no_output_____
###Markdown
Class exercises: Make a function that: 1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You cannot be an effective programmer if you cannot master the concept of control of execution when writing code. I will introduce three main schemes: 1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer which part of the code to execute depending on whether an event is true or false.
###Code
from math import sqrt
#math is the library, sqrt is the function
value=100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
10.0
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omit the whole **else** section, the program will still run, but it will neither send any message nor return a value when the input is invalid. When the condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
#'and' does not strictly require parentheses
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parentheses with the '&' (or that family):
###Code
#the '&' strictly requires parentheses!!!!
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
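###Markdown
To see *why* the un-parenthesized version misbehaves (a short illustrative aside): in Python, `&` binds more tightly than the comparison operators, so `value <= 10 & value%2==0` is read as a chained comparison around `10 & (value%2)` rather than as two separate tests:
###Code
value=8
print(10 & (value%2))                # '&' is evaluated first: 10 & 0 -> 0
print(value <= (10 & value%2) == 0)  # chained comparison: 8 <= 0 and 0 == 0 -> False
###Output
_____no_output_____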
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
perfsqs=[9,25,100]
for cat in perfsqs: # for each value in values...
print(sqrt(cat)) # do this
#'cat' was originally value, and perfsqs was values - the names are unimportant
#for needs an iterable - such as a list or tuple - a bare number wouldn't work, but a list with a single value does
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base; the package **math** took care of that. You do not need to show each result; you could save the results instead.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code is NOT controlling the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python raised an error ('ValueError') because _sqrt_ is not defined for negative values, so the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100, 16, 24, -2]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None) #'None' will still allow us to do some mathematics
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
#nan means missing
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You can use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter #increase by one
# to see the results:
counterOfInvalids
###Output
_____no_output_____
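###Markdown
An optional one-line alternative (added as an aside): counting can also be written with `sum()` over a generator expression, which gives the same result as the loop above:
###Code
values=[9,25,-100,144,-72]
# True counts as 1 and False as 0, so this adds one per negative value
sum(value<0 for value in values)
###Output
_____no_output_____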
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # this is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
    currentPosition+=1 # be careful where you put the 'accumulator'
#So, if the value is neg, then you append the "position" to the list;
#it checks each value if there should be a position appended, and then adds 1 to the counter before checking the next number
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
#'pos' is just a short variable name for 'position'
###Output
_____no_output_____
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
_____no_output_____
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
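###Markdown
A short clarifying sketch (added here): `~` is *bitwise* negation on integers (`~True` is -2, which is truthy), and `!` is not a Python operator at all; the boolean negation you want is `not`:
###Code
bvalues=[True,False,True,True]
print(~True, ~False)   # -2 -1 : bitwise NOT, both truthy, so the test above misfires
for element in bvalues:
    if not element:    # 'not' is the boolean negation
        print('this guy is False')
###Output
_____no_output_____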
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has dedicated error-handling statements (try/except) to take care of that:
###Code
# what kind of error you get:
from math import sqrt
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python gives different types of **errors** (*TypeError* and *ValueError*); let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
###Markdown
[Go to page beginning](beginning)____ Comprehensions Python has implemented ways to create data structures using a technique called comprehensions (R cannot do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
#saving the result as a list - because of [] it is a 'list' comprehension
#creating a loop and saving the result in one line - faster than making a 'for' loop
#not all coding languages have this so it might be tough to get used to
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
#not adding values to the tuple one at a time - doing all the math, then saving the values as a tuple
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121] #these are going to be the keys, all of them will be in the dict
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
#showing the flexibility of the comprehension approach
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[x, y] for x,y in newDict.items()]
#elements of the list will be lists
#newDict.items() <- gives us a list of tuples
#using items() allows us to using a dictionary as an input in a comprehension
#another way to get the same output as above:
[[item[0],item[1]] for item in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
names=['jim','jan','bob']
list(zip(letters,numbers,names))
#creating a list of tuples
#actually a fairly complex operation, programming-wise
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
np.array(numbers) #transforms the list 'numbers' into a vector - which is a mathematical structure
import numpy as np
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
#same as above, without numpy:
[(number,number**2) for number in numbers]
#another way to do it:
doubles=np.array(numbers)**2
[(number,double) for number,double in zip(numbers,doubles)]
#For comparison, "multiplying a list" gives you this
numbers*2 #the whole list repeated twice (concatenation), not element-wise multiplication
#multiplying a vector gives you this:
np.array(numbers)*2
###Output
_____no_output_____
###Markdown
Class exercises: Make a function that: 1. Create a data frame with this:
###Code
import pandas
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
data={'names':names, 'woman':woman, 'ages':ages, 'country':country, 'education':education}
data
friends=pandas.DataFrame.from_dict(data)
friends
###Output
_____no_output_____
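###Markdown
The exercise asks for a *function*; here is a minimal sketch of one possible wrapper around the code above (the function name is chosen here just for illustration):
###Code
import pandas as pd
def make_friends_df(names, woman, ages, country, education):
    # bundle the parallel lists into a dict of columns, then build the data frame
    data={'names':names, 'woman':woman, 'ages':ages, 'country':country, 'education':education}
    return pd.DataFrame.from_dict(data)
make_friends_df(names, woman, ages, country, education)
###Output
_____no_output_____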
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You can use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # this is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
    currentPosition+=1 # be careful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
_____no_output_____
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
_____no_output_____
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has dedicated error-handling statements (try/except) to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python gives different types of **errors** (*TypeError* and *ValueError*); let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)____ Comprehensions Python has implemented ways to create data structures using a technique called comprehensions (R cannot do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
###Output
_____no_output_____
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
###Output
_____no_output_____
###Markdown
Class exercises: Make a function that: 1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You cannot be an effective programmer if you cannot master the concept of control of execution when writing code. I will introduce three main schemes: 1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer which part of the code to execute depending on whether an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
_____no_output_____
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omit the whole **else** section, the program will still run, but it will neither send any message nor return a value when the input is invalid. When the condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
Notice what happens if you do not use parentheses with the '&' (or that family):
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
_____no_output_____
###Markdown
Notice that Python does not have a *sqrt* function in its base; the package **math** took care of that. You do not need to show each result; you could save the results instead.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code is NOT controlling the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python raised an error ('ValueError') because _sqrt_ is not defined for negative values, so the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
_____no_output_____
###Markdown
Course: Computational Thinking for Governance Analytics Prof. José Manuel Magallanes, PhD * Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú. _____ Session 1: Programming Fundamentals Part B: Control of Execution in Python You cannot be an effective programmer if you cannot master the concept of control of execution when writing code. I will introduce three main schemes: 1. [Conditional Execution.](part1) 2. [Loops.](part2) 3. [Error Handling.](part3) I will also introduce the concept of **[comprehensions](comprehension)** that Python supports (but not R).---- Conditional Execution This is how you tell the computer which part of the code to execute depending on whether an event is true or false.
###Code
from math import sqrt
value=-100
#condition
if value >= 0:
# what to do if condition is true:
rootValue=sqrt(value)
print (rootValue)
else:
# what to do if condition is false:
print('Sorry, I do not compute square roots of negative numbers')
###Output
Sorry, I do not compute square roots of negative numbers
###Markdown
Notice the condition follows *if* immediately. Notice also the use of **indentation** to indicate a group of instructions under the effect of the condition. This is very different from *R*. If you omit the whole **else** section, the program will still run, but it will neither send any message nor return a value when the input is invalid. When the condition is complex, besides using **&**/**|**/**~** as in pandas, you can use **and** / **or** / **not**:
###Code
value=8
if (value <= 10) & (value%2==0) :
print('This is an even number less than 11')
elif (value <= 10) & (value%2>0) :
print('This is an odd number less than 11')
elif (value > 10) & (value%2>0) :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
value=8
if value <= 10 and value%2==0 :
print('This is an even number less than 11')
elif value <= 10 and value%2>0 :
print('This is an odd number less than 11')
elif value > 10 and value%2>0 :
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number less than 11
###Markdown
Notice what happens if you do not use parentheses with the '&' (or that family):
###Code
value=8
if value <= 10 & value%2==0 :
print('This is an even number less than 11')
elif value <= 10 & value%2>0:
print('This is an odd number less than 11')
elif value > 10 & value%2>0:
print('This is an odd number greater than 10')
else:
print('This is an even number greater than 10')
###Output
This is an even number greater than 10
###Markdown
[Go to page beginning](beginning)---- Loops This is how you tell the computer to do something many times (and to stop when it has to):
###Code
from math import sqrt # no need for this in R
values=[9,25,100]
for value in values: # for each value in values...
print(sqrt(value)) # do this
###Output
3.0
5.0
10.0
###Markdown
Notice that Python does not have a *sqrt* function in its base; the package **math** took care of that. You do not need to show each result; you could save the results instead.
###Code
values=[9,25,100]
rootValues=[] # empty list, we will populate it later!
for value in values:
rootValues.append(sqrt(value)) # appending an element to the list (populating the list)
# This list started empty, now see what its elements are:
rootValues
###Output
_____no_output_____
###Markdown
It is evident that by combining *loops* and *conditionals* we can make better programs. This code is NOT controlling the process well:
###Code
values=[9,25,-100]
rootValues=[]
for value in values:
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
_____no_output_____
###Markdown
Above, you saw that Python raised an error ('ValueError') because _sqrt_ is not defined for negative values, so the process ended abruptly. The code below controls the execution better:
###Code
values=[9,25,-100,16,64,-2]
rootValues=[]
for value in values:
if value >=0:
rootValues.append(sqrt(value))
else:
print('We added a missing value (None) when we received a negative input')
rootValues.append(None)
# to see the results:
rootValues
###Output
We added a missing value (None) when we received a negative input
We added a missing value (None) when we received a negative input
###Markdown
We are producing an output with the same size as input. If we omit the **else** structure, we will produce an output with smaller size than the input. You can also use **break** when you consider the execution should stop:
###Code
values=[9,25,-100,144,-72]
rootValues=[]
for value in values:
# checking the value:
if value <0:
print('We need to stop, invalid value detected')
break
# you will get here if the value is not negative
rootValues.append(sqrt(value))
# to see the results:
rootValues
###Output
We need to stop, invalid value detected
###Markdown
The code above halted the program. You can use **continue** when you consider the execution should not halt:
###Code
import numpy as np
values=[9,None,np.nan, '1000',-100, 144,-72]
for value in values: # notice the order of 'IFs'
if value==None: # condition1
print ('missing values as input')
continue
if isinstance(value, str): #condition2
print ('string as input')
continue
if value < 0: # condition3
print ('negative value as input')
continue
print (sqrt(value), 'is the root of ',value)
###Output
3.0 is the root of 9
missing values as input
nan is the root of nan
string as input
negative value as input
12.0 is the root of 144
negative value as input
###Markdown
The _None_ and _NAN_ have a different nature:
###Code
type(None),type(np.nan)
###Output
_____no_output_____
###Markdown
You can use both values to denote a missing value, but NAN is common in structures containing only numbers, while None can appear in any structure. Be careful when doing math:
###Code
10 + None
###Output
_____no_output_____
###Markdown
In the previous case, Python complains because '+' can not be used to add those two different data types. It is like trying this:
###Code
10 + '10'
###Output
_____no_output_____
###Markdown
As previously mentioned, nan is used with numerical data to denote missing values, so this operation is allowed:
###Code
10 + np.nan
###Output
_____no_output_____
###Markdown
_Loops_ are also needed when you want to count the presence of a particular value:
###Code
values=[9,25,-100,144,-72]
counterOfInvalids=0 # counter
for value in values:
if value <0:
counterOfInvalids +=1 #updating counter
# to see the results:
counterOfInvalids
###Output
_____no_output_____
###Markdown
You may want to save particular positions (here is another difference with R):
###Code
values=[9,25,-100,144,-72]
positionInvalids=[]
currentPosition=0 # this is the 'accumulator' initial position
for value in values:
if value <0:
positionInvalids.append(currentPosition)
    currentPosition+=1 # be careful where you put the 'accumulator'
# to see the results:
positionInvalids
# testing:
for pos in positionInvalids:
print (values[pos])
###Output
-100
-72
###Markdown
If you have boolean values, you can profit by using boolean operators:
###Code
bvalues=[True,False,True,True]
for element in bvalues:
if element:
print('this guy is True')
bvalues=[True,False,True,True]
for element in bvalues:
print (element)
if element:
print('this guy is True',type(element))
###Output
True
this guy is True <class 'bool'>
False
True
this guy is True <class 'bool'>
True
this guy is True <class 'bool'>
###Markdown
Notice these are not boolean:
###Code
# this is wrong
for element in bvalues:
if ~element:
print('this guy is True')
for element in bvalues:
print (element)
if ~element:
print('this guy is True',~element,type(~element))
# this is wrong
for element in bvalues:
if !element:
print('this guy is True')
###Output
_____no_output_____
###Markdown
[Go to page beginning](beginning)---- Error Handling We have controlled errors before, using *if-else*; however, Python has dedicated error-handling statements (try/except) to take care of that:
###Code
# what kind of error you get:
print (sqrt(-10))
# what kind of error you get:
print (sqrt('10'))
###Output
_____no_output_____
###Markdown
Python gives different types of **errors** (*TypeError* and *ValueError*); let's use that:
###Code
values=[10,-10,'10']
for value in values:
try:
print (sqrt(value))
except ValueError:
print (value,'is a Wrong number!')
except TypeError:
print (value,'is Not even a number!!')
###Output
3.1622776601683795
-10 is a Wrong number!
10 is Not even a number!!
###Markdown
[Go to page beginning](beginning)____ Comprehensions Python has implemented ways to create data structures using a technique called comprehensions (R cannot do that). As lists are mutable, this operation is creating a list on the run.
###Code
from math import sqrt
values=[9,25,49,121]
rootsInList=[sqrt(value) for value in values] #List comprehension
rootsInList
###Output
_____no_output_____
###Markdown
As tuples are immutable, this operation is not creating a tuple on the run. We are in fact generating values that will later become a tuple.
###Code
values=[9,25,49,-121]
rootsInTuple=tuple(sqrt(value) for value in values if value > 0) #tuple comprehension
rootsInTuple
###Output
_____no_output_____
###Markdown
Dicts can also be created that way:
###Code
values=[9,25,49,-121]
rootsInDict={value:(sqrt(value) if value > 0 else None) for value in values} #Dict comprehension
rootsInDict
###Output
_____no_output_____
###Markdown
When you have a dict as input in comprehensions you can visit its values using _items()_ like this:
###Code
newDict={'name':'John', 'age':40, 'State':'WA'}
[[key,value] for key,value in newDict.items()]
###Output
_____no_output_____
###Markdown
The function **zip** allows you to create tuples using parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
list(zip(letters,numbers))
###Output
_____no_output_____
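###Markdown
A related one-liner (added as an aside): passing the zipped pairs to `dict()` builds a dictionary from the same parallel association:
###Code
letters=['a','b','c']
numbers=[10,20,30]
# each letter becomes a key and the parallel number its value
dict(zip(letters,numbers))
###Output
_____no_output_____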
###Markdown
_Zipped_ lists are common in comprehensions:
###Code
[(number,double) for number,double in zip(numbers,np.array(numbers)**2)]
###Output
_____no_output_____
###Markdown
Class exercises: Make a function that: 1. Create a data frame with this:
###Code
names=["Tomás", "Pauline", "Pablo", "Bjork","Alan","Juana"]
woman=[False,True,False,False,False,True]
ages=[32,33,28,30,32,27]
country=["Chile", "Senegal", "Spain", "Norway","Peru","Peru"]
education=["Bach", "Bach", "Master", "PhD","Bach","Master"]
###Output
_____no_output_____ |
Webscraping Data Collection.ipynb | ###Markdown
**Space X Falcon 9 First Stage Landing Prediction** Web scraping Falcon 9 and Falcon Heavy Launch Records from Wikipedia Estimated time needed: **40** minutes In this lab, you will be performing web scraping to collect Falcon 9 historical launch records from a Wikipedia page titled `List of Falcon 9 and Falcon Heavy launches`https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module\_1\_L2/images/Falcon9\_rocket_family.svg) Falcon 9 first stage will land successfully ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/landing\_1.gif) Several examples of an unsuccessful landing are shown here: ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/crash.gif) More specifically, the launch records are stored in an HTML table shown below: ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module\_1\_L2/images/falcon9-launches-wiki.png) Objectives Web scrape Falcon 9 launch records with `BeautifulSoup`:* Extract a Falcon 9 launch records HTML table from Wikipedia* Parse the table and convert it into a Pandas data frame First, let's import the required packages for this lab
###Code
!pip3 install beautifulsoup4
!pip3 install requests
import sys
import requests
from bs4 import BeautifulSoup
import re
import unicodedata
import pandas as pd
###Output
_____no_output_____
###Markdown
and we will provide some helper functions for you to process the web scraped HTML table
###Code
def date_time(table_cells):
    """
    This function returns the date and time from the HTML table cell
    Input: the element of a table data cell
    """
    return [data_time.strip() for data_time in list(table_cells.strings)][0:2]
def booster_version(table_cells):
    """
    This function returns the booster version from the HTML table cell
    Input: the element of a table data cell
    """
    out=''.join([booster_version for i,booster_version in enumerate( table_cells.strings) if i%2==0][0:-1])
    return out
def landing_status(table_cells):
    """
    This function returns the landing status from the HTML table cell
    Input: the element of a table data cell
    """
    out=[i for i in table_cells.strings][0]
    return out
def get_mass(table_cells):
    """
    This function returns the payload mass (a string ending in 'kg') from the HTML table cell
    Input: the element of a table data cell
    """
    mass=unicodedata.normalize("NFKD", table_cells.text).strip()
    if mass:
        new_mass=mass[0:mass.find("kg")+2]
    else:
        new_mass=0
    return new_mass
def extract_column_from_header(row):
    """
    This function returns the column name from the HTML table header cell
    Input: the element of a table header cell
    """
    if (row.br):
        row.br.extract()
    if row.a:
        row.a.extract()
    if row.sup:
        row.sup.extract()
    column_name = ' '.join(row.contents)
    # Filter out purely numeric and empty names
    if not(column_name.strip().isdigit()):
        column_name = column_name.strip()
        return column_name
###Output
_____no_output_____
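###Markdown
As a quick illustration (an aside, not one of the original lab tasks), here is a minimal sketch of how `extract_column_from_header` behaves on a small hand-written header cell; the real header cells will come from the scraped table below:
###Code
# parse a tiny stand-alone snippet just to exercise the helper defined above
sample = BeautifulSoup('<table><tr><th scope="col">Launch<br/>outcome</th></tr></table>', 'html5lib')
print(extract_column_from_header(sample.th))  # the <br/> is stripped, leaving 'Launch outcome'
###Output
_____no_output_____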
###Markdown
To keep the lab tasks consistent, you will be asked to scrape the data from a snapshot of the `List of Falcon 9 and Falcon Heavy launches` Wikipedia page updated on `9th June 2021`.
###Code
static_url = "https://en.wikipedia.org/w/index.php?title=List_of_Falcon_9_and_Falcon_Heavy_launches&oldid=1027686922"
###Output
_____no_output_____
###Markdown
Next, request the HTML page from the above URL and get a `response` object TASK 1: Request the Falcon9 Launch Wiki page from its URL First, let's perform an HTTP GET method to request the Falcon9 Launch HTML page, as an HTTP response.
###Code
# use requests.get() method with the provided static_url
# assign the response text to an object
html = requests.get(static_url).text
###Output
_____no_output_____
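###Markdown
Optionally (a small defensive check, not required by the lab), you can keep the full response object and confirm the request succeeded before parsing:
###Code
# a status code of 200 means the page was retrieved successfully
response = requests.get(static_url)
print(response.status_code)
html = response.text
###Output
_____no_output_____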
###Markdown
Create a `BeautifulSoup` object from the HTML `response`
###Code
# Use BeautifulSoup() to create a BeautifulSoup object from a response text content
soup = BeautifulSoup(html, 'html5lib')
###Output
_____no_output_____
###Markdown
Print the page title to verify if the `BeautifulSoup` object was created properly
###Code
# Use soup.title attribute
soup.title
###Output
_____no_output_____
###Markdown
TASK 2: Extract all column/variable names from the HTML table header Next, we want to collect all relevant column names from the HTML table header Let's try to find all tables on the wiki page first. If you need to refresh your memory about `BeautifulSoup`, please check the external reference link towards the end of this lab
###Code
# Use the find_all function in the BeautifulSoup object, with element type `table`
# Assign the result to a list called `html_tables`
html_tables = soup.find_all('table')
###Output
_____no_output_____
###Markdown
The third table is our target; it contains the actual launch records.
###Code
# Let's print the third table and check its content
first_launch_table = html_tables[2]
print(first_launch_table)
###Output
<table class="wikitable plainrowheaders collapsible" style="width: 100%;">
<tbody><tr>
<th scope="col">Flight No.
</th>
<th scope="col">Date and<br/>time (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)
</th>
<th scope="col"><a href="/wiki/List_of_Falcon_9_first-stage_boosters" title="List of Falcon 9 first-stage boosters">Version,<br/>Booster</a> <sup class="reference" id="cite_ref-booster_11-0"><a href="#cite_note-booster-11">[b]</a></sup>
</th>
<th scope="col">Launch site
</th>
<th scope="col">Payload<sup class="reference" id="cite_ref-Dragon_12-0"><a href="#cite_note-Dragon-12">[c]</a></sup>
</th>
<th scope="col">Payload mass
</th>
<th scope="col">Orbit
</th>
<th scope="col">Customer
</th>
<th scope="col">Launch<br/>outcome
</th>
<th scope="col"><a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">Booster<br/>landing</a>
</th></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">1
</th>
<td>4 June 2010,<br/>18:45
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-0"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0003.1<sup class="reference" id="cite_ref-block_numbers_14-0"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/Dragon_Spacecraft_Qualification_Unit" title="Dragon Spacecraft Qualification Unit">Dragon Spacecraft Qualification Unit</a>
</td>
<td>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/SpaceX" title="SpaceX">SpaceX</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td class="table-failure" style="background: #ffbbbb; color: black; vertical-align: middle; text-align: center;">Failure<sup class="reference" id="cite_ref-ns20110930_15-0"><a href="#cite_note-ns20110930-15">[9]</a></sup><sup class="reference" id="cite_ref-16"><a href="#cite_note-16">[10]</a></sup><br/><small>(parachute)</small>
</td></tr>
<tr>
<td colspan="9">First flight of Falcon 9 v1.0.<sup class="reference" id="cite_ref-sfn20100604_17-0"><a href="#cite_note-sfn20100604-17">[11]</a></sup> Used a boilerplate version of Dragon capsule which was not designed to separate from the second stage.<small>(<a href="#First_flight_of_Falcon_9">more details below</a>)</small> Attempted to recover the first stage by parachuting it into the ocean, but it burned up on reentry, before the parachutes even deployed.<sup class="reference" id="cite_ref-parachute_18-0"><a href="#cite_note-parachute-18">[12]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">2
</th>
<td>8 December 2010,<br/>15:43<sup class="reference" id="cite_ref-spaceflightnow_Clark_Launch_Report_19-0"><a href="#cite_note-spaceflightnow_Clark_Launch_Report-19">[13]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-1"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0004.1<sup class="reference" id="cite_ref-block_numbers_14-1"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_Dragon" title="SpaceX Dragon">Dragon</a> <a class="mw-redirect" href="/wiki/COTS_Demo_Flight_1" title="COTS Demo Flight 1">demo flight C1</a><br/>(Dragon C101)
</td>
<td>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><div class="plainlist">
<ul><li><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Orbital_Transportation_Services" title="Commercial Orbital Transportation Services">COTS</a>)</li>
<li><a href="/wiki/National_Reconnaissance_Office" title="National Reconnaissance Office">NRO</a></li></ul>
</div>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-ns20110930_15-1"><a href="#cite_note-ns20110930-15">[9]</a></sup>
</td>
<td class="table-failure" style="background: #ffbbbb; color: black; vertical-align: middle; text-align: center;">Failure<sup class="reference" id="cite_ref-ns20110930_15-2"><a href="#cite_note-ns20110930-15">[9]</a></sup><sup class="reference" id="cite_ref-20"><a href="#cite_note-20">[14]</a></sup><br/><small>(parachute)</small>
</td></tr>
<tr>
<td colspan="9">Maiden flight of <a class="mw-redirect" href="/wiki/Dragon_capsule" title="Dragon capsule">Dragon capsule</a>, consisting of over 3 hours of testing thruster maneuvering and reentry.<sup class="reference" id="cite_ref-spaceflightnow_Clark_unleashing_Dragon_21-0"><a href="#cite_note-spaceflightnow_Clark_unleashing_Dragon-21">[15]</a></sup> Attempted to recover the first stage by parachuting it into the ocean, but it disintegrated upon reentry, before the parachutes were deployed.<sup class="reference" id="cite_ref-parachute_18-1"><a href="#cite_note-parachute-18">[12]</a></sup> <small>(<a href="#COTS_demo_missions">more details below</a>)</small> It also included two <a href="/wiki/CubeSat" title="CubeSat">CubeSats</a>,<sup class="reference" id="cite_ref-NRO_Taps_Boeing_for_Next_Batch_of_CubeSats_22-0"><a href="#cite_note-NRO_Taps_Boeing_for_Next_Batch_of_CubeSats-22">[16]</a></sup> and a wheel of <a href="/wiki/Brou%C3%A8re" title="Brouère">Brouère</a> cheese.
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">3
</th>
<td>22 May 2012,<br/>07:44<sup class="reference" id="cite_ref-BBC_new_era_23-0"><a href="#cite_note-BBC_new_era-23">[17]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-2"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0005.1<sup class="reference" id="cite_ref-block_numbers_14-2"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_Dragon" title="SpaceX Dragon">Dragon</a> <a class="mw-redirect" href="/wiki/Dragon_C2%2B" title="Dragon C2+">demo flight C2+</a><sup class="reference" id="cite_ref-C2_24-0"><a href="#cite_note-C2-24">[18]</a></sup><br/>(Dragon C102)
</td>
<td>525 kg (1,157 lb)<sup class="reference" id="cite_ref-25"><a href="#cite_note-25">[19]</a></sup>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Orbital_Transportation_Services" title="Commercial Orbital Transportation Services">COTS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-26"><a href="#cite_note-26">[20]</a></sup>
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt
</td></tr>
<tr>
<td colspan="9">Dragon spacecraft demonstrated a series of tests before it was allowed to approach the <a href="/wiki/International_Space_Station" title="International Space Station">International Space Station</a>. Two days later, it became the first commercial spacecraft to board the ISS.<sup class="reference" id="cite_ref-BBC_new_era_23-1"><a href="#cite_note-BBC_new_era-23">[17]</a></sup> <small>(<a href="#COTS_demo_missions">more details below</a>)</small>
</td></tr>
<tr>
<th rowspan="3" scope="row" style="text-align:center;">4
</th>
<td rowspan="2">8 October 2012,<br/>00:35<sup class="reference" id="cite_ref-SFN_LLog_27-0"><a href="#cite_note-SFN_LLog-27">[21]</a></sup>
</td>
<td rowspan="2"><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-3"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0006.1<sup class="reference" id="cite_ref-block_numbers_14-3"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td rowspan="2"><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_CRS-1" title="SpaceX CRS-1">SpaceX CRS-1</a><sup class="reference" id="cite_ref-sxManifest20120925_28-0"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><br/>(Dragon C103)
</td>
<td>4,700 kg (10,400 lb)
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Resupply_Services" title="Commercial Resupply Services">CRS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td rowspan="2" style="background:#ececec; text-align:center;"><span class="nowrap">No attempt</span>
</td></tr>
<tr>
<td><a href="/wiki/Orbcomm_(satellite)" title="Orbcomm (satellite)">Orbcomm-OG2</a><sup class="reference" id="cite_ref-Orbcomm_29-0"><a href="#cite_note-Orbcomm-29">[23]</a></sup>
</td>
<td>172 kg (379 lb)<sup class="reference" id="cite_ref-gunter-og2_30-0"><a href="#cite_note-gunter-og2-30">[24]</a></sup>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/Orbcomm" title="Orbcomm">Orbcomm</a>
</td>
<td class="table-partial" style="background: wheat; color: black; vertical-align: middle; text-align: center;">Partial failure<sup class="reference" id="cite_ref-nyt-20121030_31-0"><a href="#cite_note-nyt-20121030-31">[25]</a></sup>
</td></tr>
<tr>
<td colspan="9">CRS-1 was successful, but the <a href="/wiki/Secondary_payload" title="Secondary payload">secondary payload</a> was inserted into an abnormally low orbit and subsequently lost. This was due to one of the nine <a href="/wiki/SpaceX_Merlin" title="SpaceX Merlin">Merlin engines</a> shutting down during the launch, and NASA declining a second reignition, as per <a href="/wiki/International_Space_Station" title="International Space Station">ISS</a> visiting vehicle safety rules, the primary payload owner is contractually allowed to decline a second reignition. NASA stated that this was because SpaceX could not guarantee a high enough likelihood of the second stage completing the second burn successfully which was required to avoid any risk of secondary payload's collision with the ISS.<sup class="reference" id="cite_ref-OrbcommTotalLoss_32-0"><a href="#cite_note-OrbcommTotalLoss-32">[26]</a></sup><sup class="reference" id="cite_ref-sn20121011_33-0"><a href="#cite_note-sn20121011-33">[27]</a></sup><sup class="reference" id="cite_ref-34"><a href="#cite_note-34">[28]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">5
</th>
<td>1 March 2013,<br/>15:10
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-4"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0007.1<sup class="reference" id="cite_ref-block_numbers_14-4"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_CRS-2" title="SpaceX CRS-2">SpaceX CRS-2</a><sup class="reference" id="cite_ref-sxManifest20120925_28-1"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><br/>(Dragon C104)
</td>
<td>4,877 kg (10,752 lb)
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a class="mw-redirect" href="/wiki/ISS" title="ISS">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Resupply_Services" title="Commercial Resupply Services">CRS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt
</td></tr>
<tr>
<td colspan="9">Last launch of the original Falcon 9 v1.0 <a href="/wiki/Launch_vehicle" title="Launch vehicle">launch vehicle</a>, first use of the unpressurized trunk section of Dragon.<sup class="reference" id="cite_ref-sxf9_20110321_35-0"><a href="#cite_note-sxf9_20110321-35">[29]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">6
</th>
<td>29 September 2013,<br/>16:00<sup class="reference" id="cite_ref-pa20130930_36-0"><a href="#cite_note-pa20130930-36">[30]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.1" title="Falcon 9 v1.1">F9 v1.1</a><sup class="reference" id="cite_ref-MuskMay2012_13-5"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B1003<sup class="reference" id="cite_ref-block_numbers_14-5"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a class="mw-redirect" href="/wiki/Vandenberg_Air_Force_Base" title="Vandenberg Air Force Base">VAFB</a>,<br/><a href="/wiki/Vandenberg_Space_Launch_Complex_4" title="Vandenberg Space Launch Complex 4">SLC-4E</a>
</td>
<td><a href="/wiki/CASSIOPE" title="CASSIOPE">CASSIOPE</a><sup class="reference" id="cite_ref-sxManifest20120925_28-2"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><sup class="reference" id="cite_ref-CASSIOPE_MDA_37-0"><a href="#cite_note-CASSIOPE_MDA-37">[31]</a></sup>
</td>
<td>500 kg (1,100 lb)
</td>
<td><a href="/wiki/Polar_orbit" title="Polar orbit">Polar orbit</a> <a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/Maxar_Technologies" title="Maxar Technologies">MDA</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-pa20130930_36-1"><a href="#cite_note-pa20130930-36">[30]</a></sup>
</td>
<td class="table-no2" style="background: #ffdddd; color: black; vertical-align: middle; text-align: center;">Uncontrolled<br/><small>(ocean)</small><sup class="reference" id="cite_ref-ocean_landing_38-0"><a href="#cite_note-ocean_landing-38">[d]</a></sup>
</td></tr>
<tr>
<td colspan="9">First commercial mission with a private customer, first launch from Vandenberg, and demonstration flight of Falcon 9 v1.1 with an improved 13-tonne to LEO capacity.<sup class="reference" id="cite_ref-sxf9_20110321_35-1"><a href="#cite_note-sxf9_20110321-35">[29]</a></sup> After separation from the second stage carrying Canadian commercial and scientific satellites, the first stage booster performed a controlled reentry,<sup class="reference" id="cite_ref-39"><a href="#cite_note-39">[32]</a></sup> and an <a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">ocean touchdown test</a> for the first time. This provided good test data, even though the booster started rolling as it neared the ocean, leading to the shutdown of the central engine as the roll depleted it of fuel, resulting in a hard impact with the ocean.<sup class="reference" id="cite_ref-pa20130930_36-2"><a href="#cite_note-pa20130930-36">[30]</a></sup> This was the first known attempt of a rocket engine being lit to perform a supersonic retro propulsion, and allowed SpaceX to enter a public-private partnership with <a href="/wiki/NASA" title="NASA">NASA</a> and its Mars entry, descent, and landing technologies research projects.<sup class="reference" id="cite_ref-40"><a href="#cite_note-40">[33]</a></sup> <small>(<a href="#Maiden_flight_of_v1.1">more details below</a>)</small>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">7
</th>
<td>3 December 2013,<br/>22:41<sup class="reference" id="cite_ref-sfn_wwls20130624_41-0"><a href="#cite_note-sfn_wwls20130624-41">[34]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.1" title="Falcon 9 v1.1">F9 v1.1</a><br/>B1004
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SES-8" title="SES-8">SES-8</a><sup class="reference" id="cite_ref-sxManifest20120925_28-3"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><sup class="reference" id="cite_ref-spx-pr_42-0"><a href="#cite_note-spx-pr-42">[35]</a></sup><sup class="reference" id="cite_ref-aw20110323_43-0"><a href="#cite_note-aw20110323-43">[36]</a></sup>
</td>
<td>3,170 kg (6,990 lb)
</td>
<td><a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">GTO</a>
</td>
<td><a href="/wiki/SES_S.A." title="SES S.A.">SES</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-SNMissionStatus7_44-0"><a href="#cite_note-SNMissionStatus7-44">[37]</a></sup>
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt<br/><sup class="reference" id="cite_ref-sf10120131203_45-0"><a href="#cite_note-sf10120131203-45">[38]</a></sup>
</td></tr>
<tr>
<td colspan="9">First <a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">Geostationary transfer orbit</a> (GTO) launch for Falcon 9,<sup class="reference" id="cite_ref-spx-pr_42-1"><a href="#cite_note-spx-pr-42">[35]</a></sup> and first successful reignition of the second stage.<sup class="reference" id="cite_ref-46"><a href="#cite_note-46">[39]</a></sup> SES-8 was inserted into a <a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">Super-Synchronous Transfer Orbit</a> of 79,341 km (49,300 mi) in apogee with an <a href="/wiki/Orbital_inclination" title="Orbital inclination">inclination</a> of 20.55° to the <a href="/wiki/Equator" title="Equator">equator</a>.
</td></tr></tbody></table>
###Markdown
You should be able to see the column names embedded in the table header elements `<th>` as follows: ```Flight No.Date andtime (UTC)Version,Booster [b]Launch sitePayload[c]Payload massOrbitCustomerLaunchoutcomeBoosterlanding``` Next, we just need to iterate through the `<th>` elements and apply the provided `extract_column_from_header()` function to extract the column names one by one
###Code
column_names = []
# Apply find_all() function with `th` element on first_launch_table
# Iterate each th element and apply the provided extract_column_from_header() to get a column name
# Append the Non-empty column name (`if name is not None and len(name) > 0`) into a list called column_names
th = first_launch_table.find_all('th')
for i,cell in enumerate(th):
name = extract_column_from_header(cell)
if (name is not None and len(name) > 0):
column_names.append(name)
###Output
_____no_output_____
###Markdown
Check the extracted column names
###Code
print(column_names)
###Output
['Flight No.', 'Date and time ( )', 'Launch site', 'Payload', 'Payload mass', 'Orbit', 'Customer', 'Launch outcome']
###Markdown
TASK 3: Create a data frame by parsing the launch HTML tables We will create an empty dictionary with keys from the extracted column names in the previous task. Later, this dictionary will be converted into a Pandas dataframe
###Code
launch_dict= dict.fromkeys(column_names)
# Remove an irrelevant column
del launch_dict['Date and time ( )']
# Let's initial the launch_dict with each value to be an empty list
launch_dict['Flight No.'] = []
launch_dict['Launch site'] = []
launch_dict['Payload'] = []
launch_dict['Payload mass'] = []
launch_dict['Orbit'] = []
launch_dict['Customer'] = []
launch_dict['Launch outcome'] = []
# Added some new columns
launch_dict['Version Booster']=[]
launch_dict['Booster landing']=[]
launch_dict['Date']=[]
launch_dict['Time']=[]
###Output
_____no_output_____
###Markdown
Next, we just need to fill up the `launch_dict` with launch records extracted from the table rows. HTML tables in Wiki pages are likely to contain unexpected annotations and other types of noise, such as reference links `B0004.1[8]`, missing values `N/A [e]`, inconsistent formatting, etc. To simplify the parsing process, we have provided an incomplete code snippet below to help you fill up the `launch_dict`. Please complete the TODOs in the following code snippet, or write your own logic to parse all launch tables:
###Code
extracted_row = 0
#Extract each table
for table_number,table in enumerate(soup.find_all('table',"wikitable plainrowheaders collapsible")):
# get table row
for rows in table.find_all("tr"):
        #check to see if the first table heading is a number corresponding to a launch number
if rows.th:
if rows.th.string:
flight_number=rows.th.string.strip()
flag=flight_number.isdigit()
else:
flag=False
#get table element
row=rows.find_all('td')
        #if it is a number, save the cells in a dictionary
if flag:
extracted_row += 1
# Flight Number value
# TODO: Append the flight_number into launch_dict with key `Flight No.`
launch_dict['Flight No.'].append(flight_number)
#print(flight_number)
datatimelist=date_time(row[0])
# Date value
# TODO: Append the date into launch_dict with key `Date`
date = datatimelist[0].strip(',')
launch_dict['Date'].append(date)
#print(date)
# Time value
# TODO: Append the time into launch_dict with key `Time`
time = datatimelist[1]
launch_dict['Time'].append(time)
#print(time)
# Booster version
# TODO: Append the bv into launch_dict with key `Version Booster`
bv=booster_version(row[1])
if not(bv):
bv=row[1].a.string
#print(bv)
launch_dict['Version Booster'].append(bv)
# Launch Site
            # TODO: Append the launch_site into launch_dict with key `Launch site`
launch_site = row[2].a.string
launch_dict['Launch site'].append(launch_site)
#print(launch_site)
# Payload
# TODO: Append the payload into launch_dict with key `Payload`
payload = row[3].a.string
launch_dict['Payload'].append(payload)
#print(payload)
# Payload Mass
# TODO: Append the payload_mass into launch_dict with key `Payload mass`
payload_mass = get_mass(row[4])
launch_dict['Payload mass'].append(payload_mass)
#print(payload)
# Orbit
# TODO: Append the orbit into launch_dict with key `Orbit`
orbit = row[5].a.string
launch_dict['Orbit'].append(orbit)
#print(orbit)
# Customer
# TODO: Append the customer into launch_dict with key `Customer`
if (row[6].find('a') is not None):
customer = row[6].a.string
else:
customer = row[6].string
#customer = row[6].a.string
launch_dict['Customer'].append(customer)
#print(customer)
# Launch outcome
# TODO: Append the launch_outcome into launch_dict with key `Launch outcome`
launch_outcome = list(row[7].strings)[0]
launch_dict['Launch outcome'].append(launch_outcome)
#print(launch_outcome)
# Booster landing
            # TODO: Append the booster_landing into launch_dict with key `Booster landing`
booster_landing = landing_status(row[8])
launch_dict['Booster landing'].append(booster_landing)
#print(booster_landing)
#launch_dict
###Output
_____no_output_____
###Markdown
After you have filled in the parsed launch record values into `launch_dict`, you can create a dataframe from it.
###Code
df=pd.DataFrame(launch_dict)
df.head()
###Output
_____no_output_____ |
mfdsnm-08.ipynb | ###Markdown
Numerical Methods - Numerical Integration (C) Taufik Sutanto, taudata Analytics ~ https://taudata.blogspot.com/2022/04/mfdsnm-08.html Notes and Disclaimer* This notebook is part of the free (open knowledge) eLearning course at: https://tau-data.id/courses/* Some images are taken from several resources; we respect the owners of those images and put a reference/citation to where they originated. Nevertheless, sometimes we have trouble finding the origin of an image. If you are the owner of an image and would like the image taken out of (or want the citation to be revised for) these open knowledge course resources, please contact us here with the details: https://tau-data.id/contact/ * Unless stated otherwise, in general tau-data permits its resources to be copied and/or modified for non-commercial purposes, on condition that proper acknowledgement/citation is given. IX. Numerical Integration:* Introduction* Quadrature* Trapezoidal * Simpson Numerical Integration - Quadrature (Calculus)* Definite (bounded) integral $ \int_a^b f(x) dx $* What does it mean?* What are the requirements?* What are example applications?* As many rectangles as possible, or rectangles as small as possible?* In other words, should the parameter be the width of the rectangles or the number of rectangles?* What happens if the curve is below the x-axis?* Exact solution $ \int_a^b f(x) dx = F(b) - F(a)$ where F is the antiderivative of f Definition according to Calculus Area of a rectangle (Quadrilateral-Rectangle) Error According to Theory (Analytic)* How should we understand this theorem? Visually. Through numerical (empirical) experiments? Quadrature Iteration* Suppose the rectangles have equal width = $h$* If $a=x_0 < x_1 < x_2 < ... < x_n = b$* Then the total area of the rectangles = $$ h \sum_{i=1}^n f(x_i) $$ Case Study* f(x) = sin(x) on the interval $[0, \pi]$* h = 0.01 ==> $n = round(\frac{\pi}{h})$* Exact solution = $- cos(\pi) + cos(0) = 2$
###Code
import numpy as np
np.pi
def f(x):
return x**2+x-7
def F(x):
return (1/3)*x**3 + 0.5*x**2-7*x
eksak = abs(F(1)-F(0))
eksak
#Interval
a = 0
b = 1 # np.pi ##314.1592653589793
n = 10
Xi = np.linspace(a, b, n)
h = Xi[1]-Xi[0]
Xi
Hasil_Integral = 0
for x in Xi:
    Hasil_Integral = Hasil_Integral + abs(f(x)) # Why the absolute value?
Hasil_Integral = Hasil_Integral * h
print('Hasil integral = ', Hasil_Integral)
print('Error mutlak = ', abs(Hasil_Integral-eksak))
print('Error Relatif = ', abs(Hasil_Integral-eksak)/eksak)
###Output
Hasil integral = 6.8312757201646095
Error mutlak = 0.6646090534979425
Error Relatif = 0.10777444110777445
###Markdown
Experiment: what happens if "h" keeps getting smaller (n gets larger)?
###Code
N = [n for n in range(2,100)]
e1 = [] # Error
for n in N:
Xi = np.linspace(a,b,n)
h = (b-a)/n
integral = 0
for x in Xi:
integral = integral + abs(f(x))
e1.append(abs(h*integral-eksak))
# Let's plot the error
import matplotlib.pyplot as plt
plt.plot(N, e1, 'r')
plt.xlabel('# of partisi');plt.ylabel('error');plt.show()
# Numerically, integration is much easier to handle than differentiation .... why?
###Output
_____no_output_____
###Markdown
Let's stop and think about how this works* Logically, this means that if we use trapezoids instead of rectangles, the error should be smaller. Numerical Integration - Trapezoid (trapezium) Area of a trapezoid: Theoretical error
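For reference, on $n$ equal subintervals of width $h=\frac{b-a}{n}$ the composite trapezoid rule is $$T_n = \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right]$$ with error $-\frac{(b-a)h^2}{12}f''(\xi)$ for some $\xi\in(a,b)$, so the error shrinks like $O(h^2)$ rather than the $O(h)$ of the rectangle (quadrature) rule above.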
###Code
# https://github.com/markjay4k/fourier-transform/blob/master/numerical%20integration.ipynb
%config InlineBackend.figure_format = 'svg'
plt.rcParams['figure.figsize'] = (13, 8)
plt.rcParams.update({'font.size': 19})
def f(x):
return np.sin(x) # change this to any function
def trap_plot(n_points, namaFungsi = ''):
x = np.linspace(0, np.pi, 1000) # continuous
x_i = np.linspace(0, np.pi, n_points) # discrete
plt.plot(x, f(x), label=namaFungsi)
plt.plot(x_i, f(x_i), '-o', label=r'$Trap$({})'.format(n_points))
plt.fill(x_i, f(x_i), color='C1', alpha=0.15)
plt.vlines(x_i, 0, f(x_i), color='C1', linestyle=':')
plt.xticks(x_i, [r'$x_{}$'.format(n) for n in range(n_points)])
plt.yticks([0, 1], ['$0$', '$1$'])
plt.legend(loc='best'); plt.ylim(0, 1.05); plt.show()
trap_plot(10,'sin(x)')
###Output
_____no_output_____
###Markdown
trapezoid rule equation
###Code
def trap(Xi):
# computes the integral of f using trapezoid rule
area = 0
N = len(Xi)
h = Xi[1] - Xi[0]
for k in range(1, N):
area += (abs(f(Xi[k - 1])) + abs(f(Xi[k])))
return area * h / 2
###Output
_____no_output_____
###Markdown
Numerical Experiment
###Code
n = 10
a = 0
b = np.pi #314.1592653589793
Xi = np.linspace(a,b,n)
eksak = 2
Hasil_Integral_Trapezoid = trap(Xi)
print('Hasil integral = ', Hasil_Integral_Trapezoid)
print('Error mutlak = ', abs(Hasil_Integral_Trapezoid-eksak))
print('Error Relatif = ', abs(Hasil_Integral_Trapezoid-eksak)/eksak)
# Plot Error
N = [n for n in range(2,100)]
e2 = [] # Error
for n in N:
Xi = np.linspace(a,b,n)
#h = (b-a)/n
integral = trap(Xi)
e2.append(abs(integral-eksak))
plt.plot(N, e2, 'g')
plt.xlabel('# of partisi');plt.ylabel('error');plt.show()
###Output
_____no_output_____
###Markdown
Hhhhhmmm... if the trapezoid does better, then a curve should do even better, right? Simpson's Rule Error of Simpson's Method (Analytic)
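For reference, with an even number of subintervals $n$ and width $h=\frac{b-a}{n}$, the composite Simpson's rule is $$S_n = \frac{h}{3}\left[f(x_0) + 4\sum_{i\ \mathrm{odd}} f(x_i) + 2\sum_{\substack{i\ \mathrm{even}\\ 0<i<n}} f(x_i) + f(x_n)\right]$$ with error $-\frac{(b-a)h^4}{180}f^{(4)}(\xi)$, i.e. $O(h^4)$, which is why it converges faster than the trapezoid rule.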
###Code
# Warning, this code is "dumb"
def simpson(x):
# computes the integral of f using Simpson's rule
N = len(x)
area = 0
h = x[1] - x[0]
area += abs(f(x[0]))+abs(f(x[-1]))
p = True
    for k in range(1, N - 1):  # interior points only; the endpoints were already added above
if p:
area += abs(4*f(x[k]))
p = False
else:
area += abs(2*f(x[k]))
p = True
return area*(h/3)
eksak = 2
a = 0
b = np.pi #314.1592653589793
n = 4
Xi = np.linspace(a,b,2*n)
Hasil_Integral_Simpson = simpson(Xi)
print('Hasil integral = ', Hasil_Integral_Simpson)
print('Error mutlak = ', abs(Hasil_Integral_Simpson-eksak))
print('Error Relatif = ', abs(Hasil_Integral_Simpson-eksak)/eksak)
# Plot Error
def f(x):
return np.sin(x)
N = [n for n in range(2,100)]
e3 = [] # Error
for n in N:
Xi = np.linspace(a,b,2*n)
h = (b-a)/n
integral = simpson(Xi)
e3.append(abs(integral-eksak))
plt.plot(N, e3, 'b')
plt.xlabel('# of partisi');plt.ylabel('error');plt.show()
plt.plot(N, e1, 'r')
plt.plot(N, e2, 'g')
plt.plot(N, e3, 'b')
plt.xlabel('# of partisi');plt.ylabel('error');plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/Team13_FinalProject-checkpoint.ipynb | ###Markdown
Final project for course W261 - Machine Learning at Scale - Spring 2019 __Team 13:__ Clayton Leach, James Kajdasz, Marcelo Queiroz, Peter Trenkwalder, Vishal Agarwal. Introduction: Online advertising is a growing industry whose revenues reached 83 billion dollars in 2017, a 14% growth from 2016. End users, however, view online advertisements as an unwanted distraction, with little to no benefit to their browsing experience, which has led to widespread use of ad-blockers, stressing the model of "free" content generation on the internet. For that reason, improving advertising techniques is an active research field, trying to improve both user and seller experience and profits. From a user perspective, it would be better to see only ads that are related to his/her interests, avoiding distractions that are "useless". From the seller perspective, placing ads in better spots and in front of interested users means more sales with less advertising, in other words, more efficiency. This kind of targeting is what we call Click-Through-Rate analysis and, for a better understanding, we add two examples below: __Example 1:__ In display advertising, content providers are often charged per thousand impressions (CPM), as opposed to sponsored search and product listing ads, which generally charge on a per-click basis (Chapelle et al., 2014). As such, there is significant uncertainty from the viewpoint of a purchaser as to whether their marketing spend will produce results. For instance, they might pay 10 dollars per 1000 impressions (a cent per impression), but receive a single click, for an effective cost per click (CPC) of 10 dollars. For most advertisers this would be unacceptable and they would cancel their contract. As such, it is in the interest of an entity serving ads to find a balance between serving ads which pay the most, and serving ads to the correct individuals such that returns for their clients are acceptable. The best way to manage these sometimes competing priorities is to be able to effectively predict the likelihood of a click conditional on a specific ad being displayed. In this fashion a company is able to approach serving ads as an optimization problem, where the goal is to maximize revenue constrained by the fact that marketers will churn either due to poor performance or lack of activity. A company could also elect to forego the revenue maximization route, and instead target growth by appealing to users; in this framework a company serving ads would look to serve the ads which are most relevant, which could be proxied as those most likely to be clicked. This sentiment is captured by researchers at Facebook: “Machine learning plays a central role in computing the expected utility of a candidate ad to a user, and in this way increases the efficiency of the marketplace” (He et al., 2014). And the utility of this task becomes even more important under a cost per click scheme, such as what Facebook uses. In this framework the value of showing an ad can be predicted directly, as CPC * P(click|ad), where P(click|ad) is predicted using machine learning. __Example 2:__ While CTR analysis is extremely useful from the perspective of a company serving ads, it is also useful from the perspective of a company purchasing ads. For instance, if a company can understand the behavioral characteristics of the individual most likely to click on their ad, they can gain valuable insight into their customers. 
And there is value on the opposite side of the spectrum as well: understanding the characteristics of a user who is least likely to click on an ad can help inform decisions made with regard to other marketing channels (e.g. if we know users who demonstrate quality x don’t click, then we should avoid TV/Radio/Magazines/etc. which appeal to a market with a high propensity for quality x). This work uses data from [Criteo](https://www.criteo.com/) on online advertising, which was first posted in a Kaggle competition in 2015. The goal is to use information about ads presented to web users during 7 days, each labeled as a success (the ad was clicked) or a failure (the ad was not clicked). We will train a model based on a Binary Logistic Regression algorithm to predict an ad's success probability, which could be embedded in a platform and used to show ads that are more interesting to the end user. 1. Question Formulation The dataset we are provided consists of 13 integer-valued features, 26 categorical features, and a boolean indicator for whether the ad was clicked. It is also important to note that we are not provided the meaning of any of the features, so no semantic knowledge or domain expertise can be used. Our analysis of this data seeks to answer the following:* Based on the given data, how accurately can we predict click-through rate (CTR)?* What are the distributions for our target variable and integer-valued features? Are there any anomalous values, or significant missing values? * Are there variables which are strongly correlated with the target variable, or with each other?* What are the most influential features/variables in predicting whether a click will happen?* Are there approaches we can take to manage the dimension of our data? 2. Algorithm Explanation 2.1 Overview of Logistic Regression Our group will apply a logistic regression model to the problem. We chose logistic regression because it is a very common and widely used algorithm, and the results of a logistic regression are relatively easy to communicate and interpret. Logistic regression is a classification algorithm that uses predictor variables (continuous or categorical) to classify an outcome variable into one of two categories. The categories are arbitrarily given labels of '0' or '1', but can apply to any decision with two possible outcomes: cancerous/non-cancerous, spam/not-spam, fraudulent/legitimate... Logistic regression can be applied to a problem with more than two categories as well by creating a separate equation for each category: A/not A, B/not B, C/not C... The outcome variable is a probability (ranging 0 to 1) of group membership. The classification with the highest probability is the predicted category label. 2.2 Logistic Regression Equation Logistic regression aggregates the predictor variables similarly to what is done in a standard linear regression. Each input $X_j$ is multiplied by a weight $\beta_j$ and each input/weight product $X_j \beta_j$ is added together. Or, in the summarised form: $$\displaystyle f(X)= \beta_0 + \Sigma_{j=1}^p X_j \beta_j$$ In matrix algebra form, this can be written as $\displaystyle f(X)= \theta^TX$, where $\theta$ is a vector of weights (including $\beta_0$), and $X$ is a vector of inputs (with an input of 1 for $\beta_0$). The modification that logistic regression makes is to then embed the output of $\theta^TX$ in a new function $g(z)$ where $\displaystyle g(z)=\frac{1}{1+e^{-z}}$. To put all this together, $h_\theta (x) = g(\theta^Tx)$ where $g(z)=\frac{1}{1+e^{-z}}$.
The function $g(z)$ is the sigmoid function, and it has the beneficial property of scaling all outputs between values of 0 and 1. We can write the equations above even more succinctly by substituting in $\theta^TX$ for $z$. Our final simplified equation is then: $$\displaystyle h_\theta (x) = \frac{1}{1+e^{-\theta^TX}}$$ We treat the value $h_\theta(x)$ as our estimate of the probability that x is a member of category $y=1$. The probability that $y=0$ will then be $1 - h_\theta(x)$. Both probabilities will add to one. Recall that $h_\theta(x)$ ranges from 0 to 1 thanks to our application of the sigmoid function. 2.3 Cost Function The weights of a logistic regression equation can vary, and so there must be a way to compare one hypothetical set of weights to another to judge if one model fits the data better than another. This is done by calculating the overall error of the model when attempting to predict $y$ to summarize model fit. The function that computes the error of the model is known as the cost or loss function. The goal of a model is to fit the data in such a way as to minimize the cost function. For a given set of weights $\theta$ attempting to predict a label $y_i$, the cost of $h_\theta(x_i)$ can be quantified by simply squaring the errors, where the cost is $\displaystyle Cost(h_\theta (x_i), y_i) = \frac{1}{2} (h_\theta(x_i)-y_i)^2$. This is the standard cost function (known as squared loss) for linear regression. For logistic regression, however, the squared loss function is not convex and has many local minima. An alternative cost function must, therefore, be used. Alternatives include hinge loss and logistic loss. The logistic loss function is used by Apache Spark, according to the [documentation](https://spark.apache.org/docs/latest/mllib-linear-methods.html#logistic-regression). For logistic loss (log loss for short), we will take the negative log of the logistic regression output when the actual value of $y$ is 1. When the actual value of $y$ is 0, we will take the negative log of 1 minus the logistic regression output. Written mathematically: $\displaystyle Cost(h_\theta(x),y)= \begin{cases} -log(h_\theta(X)) & y=1\\-log(1-h_\theta(X)) & y=0\end{cases}$ The log loss function has some nice properties. When the logistic regression correctly predicts that $\hat{y}=1$ with a probability of 1, then $-log(1)=0$ and the loss function is zero, reflecting a perfect prediction. Similarly, if we (correctly) predict that $\hat{y}=0$ with $h_\theta(x)=0$, the cost will be $-log(1-0)=0$. If however we make an incorrect prediction that $P(\hat{y}=0)=.999$ (and the corresponding probability $P(\hat{y}=1)=.001$) when in actuality $y=1$, then the log loss will be $-log(.001)\approx3$ (using base-10 logs; about 6.9 with natural logs), reflecting a higher amount of error. Note that we can't take the log of 0, so instead we use values of .999 and .001. As the correct prediction approaches a probability of 0, the log loss function approaches infinity. 2.4 Finding the Correct Model One could select logistic regression weights at random, see if the new model is an improvement over the last model by evaluating the loss function, and continue iterating. This is inefficient, however, and there is no guarantee we'd ever stumble across the best model by chance. It's better to have an algorithm that systematically moves us to better and better models. There are many different algorithms to choose from.
When dealing with data at scale, the choice of algorithm also needs to take into account the number of iterations required and how fast the algorithm will converge. For working with data at scale, the Apache Spark [documentation](https://spark.apache.org/docs/latest/mllib-optimization.html#choosing-an-optimization-method) recommends the Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS for short). 2.5 Toy Implementation In order to illustrate the concepts mentioned in this section, we will implement a Binary Logistic Regression model using gradient descent from scratch, so the steps are clear. For that purpose we created a small dataset with 4 numerical features, 2 categorical features and, of course, the labels.
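Before moving to Spark, the following small NumPy sketch (illustrative only, not part of the pipeline below; the array values and names are made up) ties together the hypothesis $h_\theta(x)$ from section 2.2 and the log loss from section 2.3:
```
import numpy as np

def sigmoid(z):
    # squash any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, theta):
    # X carries a leading column of 1s so theta[0] plays the role of the bias
    return sigmoid(X.dot(theta))

def log_loss(y, p):
    # average logistic loss: -log(p) when y = 1, -log(1 - p) when y = 0
    return np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))

# three toy observations: a bias column plus two features
X = np.array([[1.0,  0.5, -1.2],
              [1.0, -0.3,  0.8],
              [1.0,  1.5,  0.1]])
y = np.array([1, 0, 1])
theta = np.zeros(3)                            # an all-zero model predicts 0.5 everywhere
print(log_loss(y, predict_proba(X, theta)))    # -log(0.5) ~ 0.693
```
The distributed implementation below follows exactly this recipe, with the row-wise work expressed as Spark RDD transformations.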
###Code
import re
import ast
import math
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from os import path
# import Spark dependencies
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.sql import SQLContext, functions
from pyspark.sql.types import *
# suppressing or enabling warnings. Useful for debugging.
warnings.filterwarnings('ignore')
# warnings.resetwarnings()
# setting notebook paths:
PWD = !pwd
PWD = PWD[0]
# Creating the Spark Session:
from pyspark.sql import SparkSession
app_name = "w261-FinalProject"
master = "local[*]"
spark = SparkSession\
.builder\
.appName(app_name)\
.master(master)\
.getOrCreate()
sc = spark.sparkContext
# loading the data
# setting schema and reading in pre-processed data to pyspark dataframe
intFeatures = ['intFeature1','intFeature2','intFeature3','intFeature4']
catFeatures = ['catFeature5','catFeature6']
outcomeField = [StructField("click", IntegerType(), True)]
quantFields = [StructField(f, DoubleType(), True) for f in intFeatures]
qualFields = [StructField(f, StringType(), True) for f in catFeatures]
schema = StructType(outcomeField + quantFields + qualFields)
toyDf = spark.read \
.schema(schema) \
.format("csv") \
.option("header", "true") \
.load("gs://w261_final-project_team13/toySample/*.csv")
toyDf.show(20)
###Output
+-----+-------------------+--------------------+--------------------+--------------------+-----------+-----------+
|click| intFeature1| intFeature2| intFeature3| intFeature4|catFeature5|catFeature6|
+-----+-------------------+--------------------+--------------------+--------------------+-----------+-----------+
| 0|0.38044728201922134| 1.0461280017194063| 0.8317161330745142| 0.3659735546106383| 25c83c98| 6f6d9be8|
| 1|0.38044728201922134| -1.2072044937875812| -1.0969510532590658| -0.808690496702659| 25c83c98| 7e0ccccf|
| 0|0.38044728201922134| -1.2072044937875812| 1.7753909926499136| 0.3659735546106383| 25c83c98| fbad5c96|
| 0|-1.3933721049834424| -1.2072044937875812| 1.9415838693847296| 1.681697323928138| 25c83c98| 7e0ccccf|
| 1|-1.3933721049834424| -1.2072044937875812| 1.1861284542225656| 1.681697323928138| 384874ce| 7e0ccccf|
| 1|-0.5064624114821106| -1.2072044937875812| 0.7709669997151446|0.021438776840939814| 25c83c98| 7e0ccccf|
| 0|-0.5064624114821106| -0.8792259093197071| 0.8018840847466963| 0.5070332726148404| 43b19349| NA_Bucket|
| 1| 1.2673569755205532| -0.6873707363663946|-0.37664099210799884| 0.20605323600656728| 25c83c98| fbad5c96|
| 0|-1.3933721049834424|-0.16753697894520792| -1.0969510532590658| 0.6332154901527597| 25c83c98| fe6b92e5|
| 1| 0.6659684299852395| 0.23338155706897726| 0.8317161330745142| 0.3659735546106383| 25c83c98| fbad5c96|
| 0|0.38044728201922134| 0.5262942442982196| -0.5076885148317183| 2.1672918197020383| 25c83c98| 7e0ccccf|
| 0|0.38044728201922134| 1.8494483761453233| -1.0969510532590658| -1.2942849924765598| 25c83c98| fbad5c96|
| 1|-1.3933721049834424| 1.0461280017194063| 0.8317161330745142| 0.3659735546106383| 25c83c98| fbad5c96|
| 0|-1.3933721049834424| -1.2072044937875812| 0.21262154631934865|0.021438776840939814| 25c83c98| fbad5c96|
| 0|-0.5064624114821106| -1.2072044937875812| -0.5076885148317183|-0.19691378339083973| 43b19349| NA_Bucket|
| 0|0.38044728201922134| -0.8792259093197071| -1.4416475413188234| 1.730586699238479| 25c83c98| NA_Bucket|
| 1|0.38044728201922134| -0.6873707363663946| -1.0969510532590658| 0.3659735546106383| 25c83c98| 7e0ccccf|
| 0|-1.3933721049834424|-0.16753697894520792| 0.8317161330745142| -0.808690496702659| 25c83c98| 7e0ccccf|
| 0|0.38044728201922134| 0.2965650170372276| 0.8317161330745142| -0.4641557189329606| 25c83c98| fe6b92e5|
| 0|-1.3933721049834424| 1.0254621375192867| 0.8317161330745142| -1.2942849924765598| 25c83c98| fe6b92e5|
+-----+-------------------+--------------------+--------------------+--------------------+-----------+-----------+
only showing top 20 rows
###Markdown
Note that the data was previously processed for this demonstration. The process we followed to process the full dataset will be explained shortly. Now, as our purpose here is to work with Logistic Regression and Gradient Descent on the loss function, let's one-hot encode the categorical features (here with a simple encoder built on PySpark DataFrame operations):
###Code
def OneHotEncoder(dataframe,columns):
'''takes a dataframe and corresponding list of columns
to one-hot encode'''
for c in columns:
# collect unique levels in category
levels = dataframe.select(c).distinct().rdd.flatMap(lambda x: x).collect()
# generate dummy variables and associated values
dummy_vals = [functions.when(functions.col(c) == level, 1).otherwise(0).alias("encoded_" + level) for level in levels]
# update dataframe with new dummy columns (indicator features)
dataframe = dataframe.select('*',*dummy_vals)
# drop unencoded categorical columns from dataframe
dataframe = dataframe.drop(*columns)
return dataframe
# encode all categorical columns
categories = [c for c in toyDf.columns if 'cat' in c]
toy_df_encoded = OneHotEncoder(toyDf,categories)
print('there are now ' + str(len(toy_df_encoded.columns)) + ' columns')
###Output
there are now 42 columns
###Markdown
Now each categorical feature level is an encoded indicator feature, but if we take a look at them, we will see that our dataset is now very sparse; linear algorithms like Logistic Regression can be sensitive to this kind of high-dimensional, mostly-zero input.
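To make "very sparse" concrete, a quick check (a sketch only, reusing the `toy_df_encoded` dataframe from above) is to average each indicator column; since the columns are 0/1, each average is the fraction of rows in which that level appears, and most of them should come out close to zero:
```
# average of each 0/1 indicator column = share of rows where that level occurs
encoded_cols = [c for c in toy_df_encoded.columns if c.startswith('encoded_')]
toy_df_encoded.select(
    [functions.avg(functions.col(c)).alias(c) for c in encoded_cols]
).show(1, truncate=False)
```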
###Code
# Let's take a look at some of the encoded features
toy_df_encoded.select('encoded_65be028e',
'encoded_2c6b8ded',
'encoded_89ff5705',
'encoded_3a136cf2',
'encoded_43b19349',
'encoded_afcf7897',).show(5)
###Output
+----------------+----------------+----------------+----------------+----------------+----------------+
|encoded_65be028e|encoded_2c6b8ded|encoded_89ff5705|encoded_3a136cf2|encoded_43b19349|encoded_afcf7897|
+----------------+----------------+----------------+----------------+----------------+----------------+
| 0| 0| 0| 0| 0| 0|
| 0| 0| 0| 0| 0| 0|
| 0| 0| 0| 0| 0| 0|
| 0| 0| 0| 0| 0| 0|
| 0| 0| 0| 0| 0| 0|
+----------------+----------------+----------------+----------------+----------------+----------------+
only showing top 5 rows
###Markdown
There are a few techniques that we can (and will) apply to address this problem, but for now we will work with the data as it is. Let's convert the dataframe to an RDD so we have more control over user-defined functions without losing too much performance. After that we will define a LogLoss function, which is the objective we want to minimize:
###Code
# convert dataframe to RDD
toyRDD = toy_df_encoded.rdd.map(lambda x: (x[0],x[1:])).cache()
# setting coefficient of the "bias" as the mean click rate
meanClick = toyRDD.map(lambda x: (x[0])).mean()
feature_cols = len(toyRDD.take(1)[0][1])
coefs = np.array([meanClick] + [0.0]*(feature_cols))
def LogLoss(RDD,W):
"""
augments rdd and returns log loss
- why we augment: add a vector
entry of 1 to correspond with the bias term
so that we can apply the model to the data point
using vector multiplication without the added
step of adding the bias.
Args:
dataframe - columns (target,features...)
W - (array) model coefficients with bias at index 0
Reference
def sigmoid(z):
return 1 / (1 + np.exp(-z))
z = np.dot(X, theta)
h = sigmoid(z)
def loss(h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
"""
#helper function to compute sigmoid
def sigmoid(z):
return 1 / (1 + np.exp(-z))
#generate augmented rdd of (features,target)
augmentedData = RDD.map(lambda x: (np.append([1.0],x[1:]),x[0]))
log_loss = augmentedData \
.map(lambda x: (np.dot(x[0],W),x[1])) \
.map(lambda x: (sigmoid(x[0]),x[1])) \
.map(lambda x: (-x[1]*np.log(x[0]) - (1-x[1])*np.log(1-x[0]))) \
.mean()
return log_loss
LogLoss(toyRDD,coefs)
###Output
_____no_output_____
###Markdown
Now we want to set up a function to iterate through gradient descent steps, updating the log loss metric and converging toward the minimum of the loss function:
###Code
# broadcasting the total count to all workers.
N = sc.broadcast(toyRDD.count())
# function to perform a single GD step
def GDUpdate(RDD, W, learningRate = 0.1):
"""
    Perform one logistic regression gradient descent step/update.
Args:
dataRDD - records are tuples of (features_array, y)
W - (array) model coefficients with bias at index 0
Returns:
new_model - (array) updated coefficients, bias at index 0
Reference: gradient = np.dot(X.T, (h - y)) / num_observations
- see above LogLoss function for definition of h and y
"""
# add a bias 'feature' of 1 at index 0 and convert to array
#generate augmented rdd of (features,target)
augmentedData = RDD.map(lambda x: (np.append([1.0],x[1:]),x[0]))
#helper function to compute sigmoid
def sigmoid(z):
return 1 / (1 + np.exp(-z))
#calculate gradient
getVals = augmentedData \
.map(lambda x: (np.dot(x[0],W),x[0],x[1])) \
.map(lambda x: (sigmoid(x[0]),x[1],x[2])) \
.collect()
features = []
predictions = []
labels = []
for v in getVals:
features.append(v[1])
predictions.append(v[0])
labels.append(v[2])
f = np.transpose(features)
l = np.array(labels)
p = np.array(predictions)
gradient = np.dot(f,(p-l))/N.value
#apply learning rate to gradient and generate new coefficients
update = np.multiply(gradient,learningRate)
#original model is the bias + assigned coefficients; update the model with the adjusted coefficients
new_model = W - update
return new_model
# initiating iterations:
nSteps = 10
model = coefs
print(f"BASELINE: Loss = {LogLoss(toyRDD,model)}")
for idx in range(nSteps):
print("----------")
print(f"STEP: {idx+1}")
model = GDUpdate(toyRDD, model)
loss = LogLoss(toyRDD, model)
print(f"Loss: {loss}")
print(f"Model: {[round(w,3) for w in model]}")
###Output
BASELINE: Loss = 0.7644838394523966
----------
STEP: 1
Loss: 0.7496780805769258
Model: [0.248, 0.003, 0.004, -0.0, -0.002, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.002, 0.0, -0.0, -0.0, -0.0, -0.0, -0.02, -0.0, -0.0, -0.0, -0.001, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.004, -0.0, -0.0, -0.0, -0.0, -0.0, -0.006, -0.001, -0.003, 0.0, -0.001, -0.001, -0.006, -0.011]
----------
STEP: 2
Loss: 0.7361257669189282
Model: [0.22, 0.006, 0.008, -0.001, -0.004, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.005, 0.0, -0.0, -0.0, -0.0, -0.0, -0.039, -0.0, -0.001, -0.0, -0.002, -0.0, -0.0, -0.001, -0.0, -0.0, -0.0, -0.007, -0.0, -0.0, -0.0, -0.0, -0.0, -0.012, -0.001, -0.007, 0.0, -0.002, -0.002, -0.012, -0.021]
----------
STEP: 3
Loss: 0.7237263552677959
Model: [0.194, 0.009, 0.012, -0.001, -0.007, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.007, 0.0, -0.0, -0.0, -0.001, -0.0, -0.057, -0.0, -0.001, -0.0, -0.003, -0.0, -0.0, -0.001, -0.0, -0.001, -0.0, -0.011, -0.0, -0.0, -0.0, -0.0, -0.0, -0.018, -0.002, -0.01, 0.0, -0.003, -0.003, -0.018, -0.031]
----------
STEP: 4
Loss: 0.7123852346752153
Model: [0.168, 0.011, 0.016, -0.002, -0.009, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.009, 0.0, -0.001, -0.0, -0.001, -0.0, -0.075, -0.0, -0.001, -0.0, -0.004, -0.001, -0.0, -0.001, -0.0, -0.001, -0.0, -0.014, -0.0, -0.0, -0.001, -0.0, -0.0, -0.024, -0.002, -0.013, 0.0, -0.004, -0.003, -0.023, -0.041]
----------
STEP: 5
Loss: 0.7020139073112601
Model: [0.144, 0.014, 0.02, -0.002, -0.011, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.011, 0.0, -0.001, -0.0, -0.001, -0.0, -0.091, -0.0, -0.001, -0.0, -0.005, -0.001, -0.001, -0.001, -0.0, -0.001, -0.0, -0.017, -0.0, -0.0, -0.001, -0.0, -0.0, -0.029, -0.002, -0.016, 0.0, -0.004, -0.004, -0.028, -0.05]
----------
STEP: 6
Loss: 0.6925300184178393
Model: [0.12, 0.017, 0.023, -0.002, -0.013, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.013, 0.0, -0.001, -0.0, -0.001, -0.0, -0.107, -0.0, -0.002, -0.0, -0.006, -0.001, -0.001, -0.002, -0.0, -0.001, -0.0, -0.02, -0.0, -0.0, -0.001, -0.0, -0.0, -0.034, -0.003, -0.019, 0.0, -0.005, -0.005, -0.033, -0.058]
----------
STEP: 7
Loss: 0.6838572672946225
Model: [0.098, 0.019, 0.027, -0.003, -0.015, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.015, 0.0, -0.001, -0.0, -0.002, -0.0, -0.122, -0.001, -0.002, -0.0, -0.007, -0.001, -0.001, -0.002, -0.0, -0.001, -0.0, -0.023, -0.0, -0.0, -0.001, -0.0, -0.0, -0.039, -0.003, -0.022, 0.0, -0.006, -0.005, -0.038, -0.067]
----------
STEP: 8
Loss: 0.6759252279409275
Model: [0.076, 0.021, 0.03, -0.003, -0.017, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.017, 0.0, -0.001, -0.0, -0.002, -0.0, -0.136, -0.001, -0.002, -0.0, -0.008, -0.001, -0.001, -0.002, -0.0, -0.002, -0.0, -0.025, -0.0, -0.0, -0.001, -0.0, -0.0, -0.044, -0.004, -0.024, 0.0, -0.007, -0.006, -0.042, -0.074]
----------
STEP: 9
Loss: 0.6686691041411006
Model: [0.056, 0.024, 0.034, -0.004, -0.019, 0.0, 0.0, -0.0, -0.0, -0.0, 0.0, -0.019, 0.0, -0.001, -0.0, -0.002, -0.0, -0.15, -0.001, -0.002, -0.0, -0.009, -0.001, -0.001, -0.002, -0.0, -0.002, -0.0, -0.028, -0.0, -0.0, -0.001, -0.0, -0.0, -0.049, -0.004, -0.027, 0.0, -0.007, -0.007, -0.047, -0.082]
----------
STEP: 10
Loss: 0.6620294398267622
Model: [0.036, 0.026, 0.037, -0.004, -0.021, 0.0, 0.0, -0.001, -0.001, -0.001, 0.0, -0.02, 0.0, -0.001, -0.0, -0.002, -0.0, -0.163, -0.001, -0.002, -0.0, -0.01, -0.001, -0.001, -0.003, -0.0, -0.002, -0.0, -0.03, -0.0, -0.0, -0.002, -0.0, -0.001, -0.053, -0.005, -0.029, 0.0, -0.008, -0.007, -0.051, -0.089]
###Markdown
As seen, the loss function converges to a minimum value, where new iterations will not bring significant improvement. Now, with a good background on how the algorithms work, let's move to the real data. 3. EDA and Discussion of Challenges 3.1 Infrastructure: Handling a 45-million-row, 10.38 GB dataset is not easy on a single machine, even using Spark Dataframes. For that reason, our approach was to use a cluster in Google Cloud with a storage bucket attached to it. However, those operations can incur high costs, so we decided to export a 5% random sample from our dataset to a local machine to perform our exploratory data analysis and implement a test model. After that, we can run the same notebook in the cluster again using a PySpark kernel and taking advantage of distributed worker nodes. In this notebook, you will find cells made to run on a cluster, with the whole dataset, unless explicitly stated otherwise in comments at the first line of the cell. To spin up the cluster we [installed the Google Cloud Platform command line interface](https://cloud.google.com/sdk/docs/quickstart-macos) on our local machine and used the `GCP-create-cluster.sh` shell script, set up with Spark and the other dependencies needed for this project, on 6 worker and 1 master node, all consisting of Google's `n1-standard-8` machines (up to 100 GB of disk and 30 GB of memory, with 8 virtual CPUs). After that we can run the other two scripts, `GCP-create-proxy.sh` and `GCP-use-proxy.sh`, in this order and in different command line windows. This will open a Chrome browser connected to the cluster via proxy, where we can run notebooks directly. The scripts can be found in this repository in the `shell-scripts` folder. With that set, we can start working on our cluster to handle the dataset: 3.2 Creating Sample Dataset: Now, let's check the dataset files for their format: After downloading and uncompressing the dataset, we see that we have three text files: `train.txt`, `test.txt`, and `readme.txt`. To start dealing with our dataset, we want to define a schema for it. Reading the file `readme.txt`, we have:```====================================================Format:The columns are tab separeted with the following schema: ... ... When a value is missing, the field is just empty.There is no label field in the test set.====================================================```This way, we see that using the test set is not feasible, since we don't know the correct answers to our predictions. Our solution is to withhold 30% of our train dataset and use it to test our model. Based on the schema described, we will read our dataset:
###Code
# 13 features taking integer values and 26 taking categorical values (we can't assume ordinal) - making total of 39 features + 1 outcome
outcomeField = [StructField("click", IntegerType(), True)]
# create StructField for pySpark DF, but also a list with the features names for future use:
quantFields = [StructField("intFeature"+str(i+1), IntegerType(), True) for i in np.arange(13)]
intFields = ["intFeature"+str(i+1) for i in np.arange(13)]
# same for categorical features:
qualFields = [StructField("catFeature"+str(i+1), StringType(), True) for i in np.arange(26)]
catFields = ["catFeature"+str(i+1) for i in np.arange(26)]
schema = StructType(outcomeField + quantFields + qualFields)
# read in the txt file
file = "gs://w261_final-project_team13/train.txt"
df_full = spark.read.schema(schema) \
.option("delimiter", "\t") \
.option("ignoreLeadingWhiteSpace",True) \
.option("ignoreTrailingWhiteSpace",True) \
.csv(file, nullValue = "")
# Counting:
print('Total dataset count:', df_full.count(), 'observations.')
df_full.cache()
###Output
Total dataset count: 45840617 observations.
###Markdown
As mentioned, we will take 5% of the dataset as a sample, which after the 70/30 split will be roughly $45,840,617 \cdot 0.05 \cdot 0.7 = 1,604,421.595$ observations and $10.38E3 \cdot 0.05 \cdot 0.7 \approx 363$ MB in txt form, which can be reasonably handled on a single-node machine while remaining representative. Using a more efficient file format would make things even better, so we decided to use parquet files.
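As a quick sanity check of that arithmetic (plain Python, illustrative only):
```
total_rows, total_gb = 45840617, 10.38
print(round(total_rows * 0.05 * 0.7))        # ~ 1,604,422 observations
print(round(total_gb * 1000 * 0.05 * 0.7))   # ~ 363 MB of raw text
```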
###Code
# Creating the toy set for single-node handling:
toyTrainDF, toyTestDF = df_full.sample(False, 0.05).randomSplit([0.7,0.3], seed=2019)
# writing to our CGP storage bucket as a parquet file:
# toyTrainDF.write.parquet("gs://w261_final-project_team13/toy_train.txt")
# toyTestDF.write.parquet("gs://w261_final-project_team13/toy_test.txt")
###Output
_____no_output_____
###Markdown
Now we are able to download these parquet files to our local machines and run our own notebooks to perform EDA and even implement our algorithms on a small dataset. To download them we can use GCP's user interface or the command line interface. The following code will copy the files to a local folder called `data`:```
gsutil -m cp gs://w261_final-project_team13/toy_test.txt/* ./data/toy_test.txt/
gsutil -m cp gs://w261_final-project_team13/toy_train.txt/* ./data/toy_train.txt/
```Additionally we may want to copy the notebook as well:```gsutil cp gs://w261_final-project_team13/notebooks/* ./QuestionFormulation.ipynb```Now we can read that data and start working on it: 3.3 Running on a local single-node machine:
###Code
# CAN RUN IN LOCAL NODE
# read the parquet files and print the first observations of each:
toyTrainDF = spark.read.parquet("./data/toy_train.txt")
toyTestDF = spark.read.parquet("./data/toy_test.txt")
# set total count to avoid recounting
totalCount = toyTrainDF.count()
# print main numbers from our DF:
print('Train dataset count:', totalCount, 'observations.')
print('Test dataset count:', toyTestDF.count(), 'observations.')
toyTrainDF.printSchema()
###Output
Train dataset count: 1604289 observations.
Test dataset count: 688521 observations.
root
|-- click: integer (nullable = true)
|-- intFeature1: integer (nullable = true)
|-- intFeature2: integer (nullable = true)
|-- intFeature3: integer (nullable = true)
|-- intFeature4: integer (nullable = true)
|-- intFeature5: integer (nullable = true)
|-- intFeature6: integer (nullable = true)
|-- intFeature7: integer (nullable = true)
|-- intFeature8: integer (nullable = true)
|-- intFeature9: integer (nullable = true)
|-- intFeature10: integer (nullable = true)
|-- intFeature11: integer (nullable = true)
|-- intFeature12: integer (nullable = true)
|-- intFeature13: integer (nullable = true)
|-- catFeature1: string (nullable = true)
|-- catFeature2: string (nullable = true)
|-- catFeature3: string (nullable = true)
|-- catFeature4: string (nullable = true)
|-- catFeature5: string (nullable = true)
|-- catFeature6: string (nullable = true)
|-- catFeature7: string (nullable = true)
|-- catFeature8: string (nullable = true)
|-- catFeature9: string (nullable = true)
|-- catFeature10: string (nullable = true)
|-- catFeature11: string (nullable = true)
|-- catFeature12: string (nullable = true)
|-- catFeature13: string (nullable = true)
|-- catFeature14: string (nullable = true)
|-- catFeature15: string (nullable = true)
|-- catFeature16: string (nullable = true)
|-- catFeature17: string (nullable = true)
|-- catFeature18: string (nullable = true)
|-- catFeature19: string (nullable = true)
|-- catFeature20: string (nullable = true)
|-- catFeature21: string (nullable = true)
|-- catFeature22: string (nullable = true)
|-- catFeature23: string (nullable = true)
|-- catFeature24: string (nullable = true)
|-- catFeature25: string (nullable = true)
|-- catFeature26: string (nullable = true)
###Markdown
To avoid confusion and to avoid accidentally using the test dataset in our model, we will now use only `toyTrainDF` for EDA. Starting with exploring the outcome feature (`click`), we have:
###Code
# CAN RUN IN LOCAL NODE
toyTrainDF.describe("click").show()
###Output
+-------+-------------------+
|summary| click|
+-------+-------------------+
| count| 1603427|
| mean|0.25604595656677853|
| stddev| 0.4364476411807236|
| min| 0|
| max| 1|
+-------+-------------------+
###Markdown
Observations: 1. No missing labels. 2. About 25% of the data has a label of 1; the rest is 0. 3.3.1 Exploring the integer features:
###Code
# CAN RUN IN LOCAL NODE
# make sure the list of integer feature column names is defined before describing them
intFields = ["intFeature"+str(i+1) for i in np.arange(13)]
toyTrainDF.describe(intFields[:5]).show()
toyTrainDF.describe(intFields[6:11]).show()
toyTrainDF.describe(intFields[11:]).show()
###Output
+-------+-----------------+------------------+------------------+------------------+------------------+
|summary| intFeature1| intFeature2| intFeature3| intFeature4| intFeature5|
+-------+-----------------+------------------+------------------+------------------+------------------+
| count| 877603| 1605034| 1259603| 1256091| 1563404|
| mean|3.501338304449734|106.41920482681364|26.769022461839167|7.3238499439929114|18570.669470591096|
| stddev|9.626274190709838|391.43111731066466| 399.3768436318432| 8.77939633000014| 69726.05840998878|
| min| 0| -3| 0| 0| 0|
| max| 1539| 17939| 65535| 681| 2495496|
+-------+-----------------+------------------+------------------+------------------+------------------+
+-------+------------------+------------------+------------------+------------------+-----------------+
|summary| intFeature7| intFeature8| intFeature9| intFeature10| intFeature11|
+-------+------------------+------------------+------------------+------------------+-----------------+
| count| 1535515| 1604225| 1535515| 877603| 1535515|
| mean|16.348678456413648|12.528776823699918|106.06326020911551|0.6163470270726057|2.733826761705356|
| stddev| 70.3240554021905|18.719397946912853|220.96217268092875|0.6835663988214532|5.203535697890547|
| min| 0| 0| 0| 0| 0|
| max| 34536| 5770| 14198| 8| 154|
+-------+------------------+------------------+------------------+------------------+-----------------+
+-------+------------------+-----------------+
|summary| intFeature12| intFeature13|
+-------+------------------+-----------------+
| count| 376359| 1256091|
| mean|0.9929615074968314|8.237400793413853|
| stddev| 5.342621115945669|18.33108620636844|
| min| 0| 0|
| max| 953| 4848|
+-------+------------------+-----------------+
###Markdown
Observations:
1. `intFeature12` has a very large number of nulls (~80%). We must consider removing it from the analysis.
###Code
# CAN RUN IN LOCAL NODE
# setting a list with features to drop
features_to_drop = ['intFeature12']
###Output
_____no_output_____
###Markdown
Observations (continued):
2. The spread in values is very large for almost all the features. We can draw a histogram to understand the distribution of the feature values.
3. Feature intFeature4 seems to be very concentrated around the mean. We can draw a histogram to better understand the distribution.
4. intFeature2 has at least one negative value. Let's take a closer look at those values:
###Code
# CAN RUN IN LOCAL NODE
toyTrainDF.select('intFeature2').filter(toyTrainDF['intFeature2'] < 0).describe().show()
###Output
+-------+-------------------+
|summary| intFeature2|
+-------+-------------------+
| count| 166628|
| mean|-1.0061994382696786|
| stddev|0.07856872829777965|
| min| -3|
| max| -1|
+-------+-------------------+
###Markdown
We see that negative values account for roughly 10% of the dataset and are limited to `-1`, `-2`, and `-3`. As this dataset was most probably collected without human interference, these are not likely to be typos. For this analysis, we will treat negative values as error codes, and since we are not able to infer whether those errors have semantic meaning, we can simply discard them by replacing them with `null` values.
###Code
# CAN RUN IN LOCAL NODE
# use a simple user defined function to replace negative values with null
# (guard against null inputs so the comparison does not fail):
replace_negatives = functions.udf(lambda x: None if x is not None and x < 0 else x, IntegerType())
toyTrainDF = toyTrainDF.withColumn('intFeature2', replace_negatives(toyTrainDF.intFeature2))
###Output
_____no_output_____
###Markdown
Now, addressing topics 2 and 3, we can take a look at the histograms:
###Code
# CAN RUN IN LOCAL NODE
# transform to pandas DF to use histogram
intFields = ["intFeature"+str(i+1) for i in np.arange(13)]
toyTrainDF.select(intFields).toPandas().hist(figsize=(14,14), bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Observations:
1. Most features are highly right-skewed. We can minimize that problem using log transformations.
2. The features should be normalized so they all appear on the same scale.

Additionally, we can check for correlations between variables, since logistic regression assumes no multicollinearity and highly correlated variables can introduce bias in our predictions. To do both the feature transformations and the correlation calculation at the same time, we can use a scatterplot matrix:
###Code
# CAN BE RUN IN LOCAL NODE
def corrdot(*args, **kwargs):
"""
Helper function to plot correlation indexes in the upper side of the scatterplot matrix.
Reference: https://github.com/mwaskom/seaborn/issues/1444
"""
corr_r = args[0].corr(args[1], 'spearman')
# Axes
ax = plt.gca()
ax.axis('off')
x_min, x_max = ax.get_xlim()
x_centroid = x_min + (x_max - x_min) / 2
y_min, y_max = ax.get_ylim()
y_true = y_max - (y_max - y_min) / 4
y_false = y_min + (y_max - y_min) / 4
# Plot args
if kwargs['click'] == True:
marker_size = abs(corr_r) * 5000
ax.scatter(x_centroid, y_true, marker='o', s=marker_size, alpha=0.6, c='red')
corr_text = str(round(corr_r, 2)).replace('-0', '-').lstrip('0')
ax.annotate(corr_text, [x_centroid, y_true,], ha='center', va='center', fontsize=20)
else:
marker_size = abs(corr_r) * 5000
ax.scatter(x_centroid, y_false, marker='o', s=marker_size, alpha=0.6, c='green')
corr_text = str(round(corr_r, 2)).replace('-0', '-').lstrip('0')
ax.annotate(corr_text, [x_centroid, y_false,], ha='center', va='center', fontsize=20)
# re-sampling the DF for better visualization:
vizDF = toyTrainDF.sample(False, 0.001, 2019).toPandas().iloc[:,0:14]
# define helper function to log-transform:
def log_transform(x):
    '''shifts x up by one to account for the presence of 0 values
    and assumes negative values should be NaN'''
    return np.log(x + 1)
# log transform and normalize the DF, keeping the label column and using (VALUE - MEAN)/STDDEV
vizDF_norm = pd.DataFrame(vizDF[intFields].values, columns=intFields, index=vizDF.index)
vizDF_norm = vizDF_norm.apply(log_transform)
vizDF_norm = (vizDF_norm - vizDF_norm.mean())/vizDF_norm.std()
vizDF[intFields] = vizDF_norm
# plotting the results split by label:
sns.set_context('notebook', font_scale=1.3)
g = sns.PairGrid(vizDF, vars=intFields, hue='click', palette={True:'green', False:'red'})
g.map_lower(sns.scatterplot, alpha=0.3)
g.map_diag(sns.distplot, hist=False)
g.map_upper(corrdot)
g.add_legend()
###Output
_____no_output_____
###Markdown
Observations:
1. We consider features to be highly correlated when the correlation coefficient (Spearman's, as computed in `corrdot`) is higher than 0.8. The following pairs show strong correlations:
   * `intFeature1` and `intFeature10`
   * `intFeature4` and `intFeature13`
   * `intFeature5` and `intFeature10`
   * `intFeature7` and `intFeature11`
2. We can reduce multicollinearity by dropping features 10, 11 and 13.
###Code
# CAN RUN IN LOCAL NODE
# adding features to drop list:
features_to_drop = features_to_drop + ['intFeature10', 'intFeature11', 'intFeature13']
###Output
_____no_output_____
###Markdown
Finally, we want to replace null values with the mean of each numerical feature, making sure our dataset is complete. Missing values would make our regression algorithm ignore incomplete rows, drastically reducing our training dataset.

3.3.2 Exploring the categorical features:

As stated before, we don't have any clue about the semantic meaning of the categorical features, so our analysis will be based solely on data completeness and distribution. First, let's check whether missing data means a lower probability of a click:
###Code
# CAN RUN IN LOCAL NODE
# checking the average Null count in each row:
toyTrainPandasDF = toyTrainDF.toPandas()
countNullDF = toyTrainPandasDF.isnull().sum(axis=1).rename('CountNaN')
countNullDF.describe()
# CAN RUN IN LOCAL NODE
# concat dataframes
countNullDF = pd.concat([toyTrainPandasDF['click'], countNullDF], axis=1)
countNullDF.groupby('click').mean()
###Output
_____no_output_____
###Markdown
We see no rows with only null values (though we can't actually guarantee that for the full dataset), while the average missing count per row is around 5. We also note a small difference in null count between clicked and non-clicked ads, but the difference can be considered negligible.

Now, let's consider the distribution of the categorical columns:
###Code
# CAN RUN IN LOCAL NODE
catFields = ["catFeature"+str(i+1) for i in np.arange(26)]
uniqueCounts = {}
for field in catFields:
uniqueCounts[field] = toyTrainDF.select(field).distinct().count()
plt.bar(uniqueCounts.keys(), uniqueCounts.values(), color='g')
plt.title("Unique values in each categorical features")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Observation:
Some categorical variables (such as catFeature3) have a very large number of unique values. They should be binned using some technique:
(a) they can be binned based on frequency;
(b) they can be binned based on the average value of the outcome variable.

For a better understanding of each feature, we can take a look at its main values and value counts:
###Code
# CAN RUN IN LOCAL NODE
def descCatEDA(dataframe, column, totalCount=0, nanThreshold=0.5):
"""
Function that prints an analysis of column from the given dataframe. Retuns None.
Args:
dataframe - input dataframe
column - string with a dataframe column.
totalCount - optional. Number of rows in the dataframe (if defined avoid recalculation).
nanThreshold - optional. Percentage allowed of NaN values in the column.
Returns:
        Column - name of the column if its NaN ratio is higher than nanThreshold,
                 otherwise None.
Output:
NaN Count - number for NaN values in % of the row count of the dataframe.
Most Freq - number of values for the 5 most frequent values (discarding NaN).
"""
if totalCount == 0:
totalCount = dataframe.count()
pandCol = dataframe.select(column).toPandas()[column]
freqNumbers = dict(pandCol.value_counts(normalize=True).head(5))
nanCount = dataframe.filter(dataframe[column].isNull()).count()
validCount = totalCount - nanCount
print('+'+13*'-'+'+'+22*'-'+'+')
print('| {:^4}'.format(column)+' |{:>14}{:>6.2f}% |'.format('Null Count: ', nanCount/totalCount*100))
print('+'+13*'-'+'+'+22*'-'+'+')
print('| Unique Values: {:>19} |'.format(pandCol.nunique()))
print('+'+13*'-'+'+'+22*'-'+'+')
for item in freqNumbers:
print('|{:>12} |{:>20.2f}% |'.format(item, freqNumbers[item]*100))
print('+'+13*'-'+'+'+22*'-'+'+\n')
    if nanCount/totalCount*100 > nanThreshold*100:
return column
else:
return None
# CAN RUN IN LOCAL NODE
badFeatures = []
nanThreshold = 0.75
for item in catFields:
    badFeatures.append(descCatEDA(toyTrainDF, item, totalCount, nanThreshold))
badFeatures = list(filter(None, badFeatures))
print('List of catFeatures with more than {:4.2f}% NaN ratio: {}'.format(nanThreshold*100, badFeatures))
###Output
+-------------+----------------------+
| catFeature1 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 1377 |
+-------------+----------------------+
| 05db9164 | 50.07% |
| 68fd1e64 | 16.66% |
| 5a9ed9b0 | 8.35% |
| 8cf07265 | 4.93% |
| be589b51 | 3.29% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature2 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 552 |
+-------------+----------------------+
| 38a947a1 | 11.47% |
| 207b2d81 | 4.31% |
| 38d50e09 | 3.85% |
| 1cfdf714 | 3.63% |
| 287130e0 | 3.54% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature3 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 551267 |
+-------------+----------------------+
| d032c263 | 2.56% |
| 02cf9876 | 1.12% |
| aa8c1539 | 1.01% |
| 9143c832 | 1.01% |
| 77f2f2e5 | 0.90% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature4 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 199163 |
+-------------+----------------------+
| c18be181 | 3.67% |
| 29998ed1 | 2.25% |
| d16679b9 | 2.20% |
| 85dd697c | 1.99% |
| 13508380 | 1.95% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature5 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 288 |
+-------------+----------------------+
| 25c83c98 | 67.17% |
| 4cf72387 | 15.61% |
| 43b19349 | 6.32% |
| 384874ce | 3.28% |
| 30903e74 | 1.93% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature6 | Null Count: 12.06% |
+-------------+----------------------+
| Unique Values: 15 |
+-------------+----------------------+
| 7e0ccccf | 45.08% |
| fbad5c96 | 24.74% |
| fe6b92e5 | 21.15% |
| 13718bbd | 3.63% |
| 6f6d9be8 | 3.28% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature7 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 11543 |
+-------------+----------------------+
| 1c86e0eb | 2.10% |
| dc7659bd | 1.27% |
| 7195046d | 0.89% |
| 5e64ce5f | 0.77% |
| 468a0854 | 0.77% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature8 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 606 |
+-------------+----------------------+
| 0b153874 | 59.43% |
| 5b392875 | 16.64% |
| 1f89b562 | 7.46% |
| 37e4aa92 | 4.14% |
| 062b5529 | 2.60% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature9 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 3 |
+-------------+----------------------+
| a73ee510 | 89.86% |
| 7cc72ec2 | 10.12% |
| a18233ea | 0.02% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature10 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 37464 |
+-------------+----------------------+
| 3b08e48b | 22.16% |
| efea433b | 1.49% |
| fbbf2c95 | 0.76% |
| fa7d0797 | 0.58% |
| 03e48276 | 0.53% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature11 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 5010 |
+-------------+----------------------+
| 755e4a50 | 3.21% |
| e51ddf94 | 2.12% |
| 7f8ffe57 | 1.49% |
| 4d8549da | 1.30% |
| 8b94178b | 1.11% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature12 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 487307 |
+-------------+----------------------+
| dfbb09fb | 2.56% |
| 6aaba33c | 2.25% |
| 8fe001f4 | 1.12% |
| d8c29807 | 1.01% |
| ae1bb660 | 1.01% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature13 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 3159 |
+-------------+----------------------+
| 5978055e | 3.21% |
| 3516f6e6 | 2.36% |
| 46f42a63 | 1.64% |
| 025225f2 | 1.44% |
| 1aa94af3 | 1.34% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature14 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 26 |
+-------------+----------------------+
| b28479f6 | 34.97% |
| 07d13a8f | 34.31% |
| 1adce6ef | 15.46% |
| 64c94865 | 4.39% |
| cfef1c29 | 3.05% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature15 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 10358 |
+-------------+----------------------+
| 2d0bb053 | 1.50% |
| d345b1a0 | 1.03% |
| 3628a186 | 0.92% |
| 10040656 | 0.88% |
| 10935a85 | 0.83% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature16 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 365038 |
+-------------+----------------------+
| 84898b2a | 2.56% |
| b041b04a | 2.25% |
| 36103458 | 1.12% |
| c64d548f | 1.01% |
| bad5ee18 | 1.01% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature17 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 10 |
+-------------+----------------------+
| e5ba7672 | 46.19% |
| 07c540c4 | 13.04% |
| d4bb7bd8 | 11.46% |
| 3486227d | 8.34% |
| 776ce399 | 5.23% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature18 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 4328 |
+-------------+----------------------+
| e88ffc9d | 3.16% |
| 891589e7 | 2.81% |
| 2804effd | 2.61% |
| c21c3e4c | 2.39% |
| 5aed7436 | 2.17% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature19 | Null Count: 43.97% |
+-------------+----------------------+
| Unique Values: 1922 |
+-------------+----------------------+
| 21ddcdc9 | 61.65% |
| 55dd3565 | 3.45% |
| 5b885066 | 1.43% |
| 9437f62f | 1.33% |
| 712d530c | 1.03% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature20 | Null Count: 43.97% |
+-------------+----------------------+
| Unique Values: 3 |
+-------------+----------------------+
| b1252a9d | 34.00% |
| 5840adea | 33.20% |
| a458ea53 | 32.80% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature21 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 435601 |
+-------------+----------------------+
| 0014c32a | 2.56% |
| 723b4dfd | 2.25% |
| e587c466 | 1.12% |
| 5f957280 | 1.01% |
| 0429f84b | 1.01% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature22 | Null Count: 76.25% |
+-------------+----------------------+
| Unique Values: 15 |
+-------------+----------------------+
| ad3062eb | 57.46% |
| c9d4222a | 35.29% |
| 78e2e389 | 3.05% |
| 8ec974f4 | 2.27% |
| c0061c6d | 1.69% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature23 | Null Count: 0.00% |
+-------------+----------------------+
| Unique Values: 15 |
+-------------+----------------------+
| 32c7478e | 44.03% |
| 3a171ecb | 20.00% |
| 423fab69 | 12.07% |
| bcdee96c | 6.94% |
| be7c41b4 | 5.61% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature24 | Null Count: 3.41% |
+-------------+----------------------+
| Unique Values: 57546 |
+-------------+----------------------+
| 3fdb382b | 5.41% |
| b34f3128 | 4.93% |
| 3b183c5c | 4.72% |
| 1793a828 | 4.49% |
| 45ab94c8 | 2.30% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature25 | Null Count: 43.97% |
+-------------+----------------------+
| Unique Values: 77 |
+-------------+----------------------+
| 001f3601 | 25.57% |
| e8b83407 | 19.38% |
| ea9a246c | 14.32% |
| cb079c2d | 7.04% |
| 9b3e8820 | 5.59% |
+-------------+----------------------+
+-------------+----------------------+
| catFeature26 | Null Count: 43.97% |
+-------------+----------------------+
| Unique Values: 40473 |
+-------------+----------------------+
| 49d68486 | 7.35% |
| c84c4aec | 3.36% |
| 2fede552 | 2.85% |
| c27f155b | 2.68% |
| aa5f0a15 | 2.48% |
+-------------+----------------------+
List of catFeatures with more than 75.00% NaN ratio: ['catFeature22']
###Markdown
Observations:
1. Some features have a high number of missing values (such as `catFeature22`, missing more than 75% of the values). With this result we can set a threshold to drop features.
2. Similarly to what happened with the integer features, some are highly skewed, with up to 90% of the valid values in one category.
###Code
# CAN RUN IN LOCAL NODE
# adding features to drop list:
features_to_drop.append('catFeature22')
###Output
_____no_output_____
###Markdown
Additionally, in order to apply the Logistic Regression model to our data, we want to one-hot-encode the categorical levels. However, depending on the number of unique elements in our features, this approach can generate a very high number of columns, for example:
###Code
# CAN RUN IN LOCAL NODE
uniqueCounts = {}
for field in catFields:
uniqueCounts[field] = toyTrainDF.select(field).distinct().count()
uniqueCounts
# CAN RUN IN LOCAL NODE
avg_levels = sum(uniqueCounts.values())/len(uniqueCounts.values())
avg_levels
###Output
_____no_output_____
###Markdown
In this case, one-hot encoding would produce roughly $26 \cdot 85122 = 2,213,172$ columns in a highly sparse dataset. Based on our toy dataset we would have more features than rows, running into the curse of dimensionality. The way we found to overcome this problem was to create bins for rare levels. Given the skewness of our data, this technique should drastically reduce the model dimensionality.

Finally, because the categorical data lacks semantic meaning, the absence of a value may also mean something for our model, so we want to assign missing categorical values to their own bin, creating a `missing_value` bin for each categorical feature.

After all the previous considerations about our data, we are ready to start transforming our dataset and implementing Binary Logistic Regression to predict click-through rate.

4. Algorithm Implementation

4.1 Data preparation:

From the previous sections we derived the following transformations for our dataset:
1. Drop `intFeature12`.
2. Set negative numbers in `intFeature2` to `null`.
3. Replace `null` values with the feature mean for the numerical features.
4. Apply a log-transformation and normalize all numeric features.
5. Drop features `intFeature10`, `intFeature11`, and `intFeature13`.
6. Drop feature `catFeature22`.
7. Create `rare-levels` bins for levels with fewer than 30 occurrences in categorical features.
8. Create `missing_values` bins for categorical data.

Additionally, we will address more common data transformations, like deleting duplicated rows and rows missing more than 75% of the features. Let's start by defining a function to transform our dataframe (we will use it now for the train data, and later for the test data):
###Code
# as `features_to_drop` was generated in local mode, use this to recreate the list:
features_to_drop = ['intFeature12','intFeature10','intFeature11','intFeature13','catFeature22']
# as the udf was created on our local node, use this to redefine it
# (guarding against null inputs so the comparison does not fail):
replace_negatives = functions.udf(lambda x: None if x is not None and x < 0 else x, IntegerType())
def df_transform(df, features_to_drop, feature_threshold):
    '''
    Function to apply all transformations described in the EDA section:
        1. Drop features_to_drop.
        2. Set negative numbers in `intFeature2` to `null`.
        3. Fill nulls with the mean() of each numerical feature.
        4. Apply a log-transformation and normalize all numeric features.
        5. Create `rare-levels` bins for categorical levels with fewer than feature_threshold occurrences.
        6. Create `missing_values` bins for categorical data.
    Args:
        df : pyspark dataframe containing all features
        features_to_drop : features identified to be dropped
        feature_threshold : levels with fewer observations than this will be
                            converted to 'feature_name-rare'
    Note:
        intFields and catFields (the numerical and categorical column names used in the
        transformations above) are derived inside the function after the features are dropped.
    Returns:
        transformed_DF : pyspark dataframe with the same form, but with transformed features.
    '''
    # Clean duplicated rows and rows missing more than 75% of the features:
    dedupedDF = df.dropDuplicates()
    print('Dropped {} duplicated rows.'.format(df.count() - dedupedDF.count()))
    cleanDF = dedupedDF.dropna(thresh=30)
    print('Dropped {} incomplete rows.'.format(dedupedDF.count() - cleanDF.count()))
# Drop bad features (see EDA section):
cleanDF = cleanDF.drop(*features_to_drop)
# get the column names after droping features:
intFields = [i for i in cleanDF.columns if 'int' in i]
catFields = [i for i in cleanDF.columns if 'cat' in i]
# Setting intFeature2 values to null (see EDA section for udf definition)
cleanDF = cleanDF.withColumn('intFeature2', replace_negatives(cleanDF.intFeature2))
# create a means dictionary to use in fill.na:
mean_dict = {}
# iterating through numerical features
for i in intFields:
#compute mean for each column
col_mean = cleanDF.select(functions.mean(functions.col(i)).alias('mean')).collect()[0]['mean']
mean_dict[i] = col_mean
cleanDF = cleanDF.na.fill(mean_dict)
# normalizing the features:
for i in intFields:
# compute mean for each column
col_mean = cleanDF.select(functions.mean(functions.col(i)).alias('mean')).collect()[0]['mean']
col_std = cleanDF.select(functions.stddev(functions.col(i)).alias('std')).collect()[0]['std']
#update dataframe
cleanDF = cleanDF.withColumn(i,(functions.col(i) - col_mean)/col_std)
# Creating new columns with bins for "rare levels" and "missing values"
for feature in catFields:
bucket = set(cleanDF.groupBy(feature) \
.count() \
            .filter(functions.col('count') > feature_threshold) \
.select(functions.col(feature)) \
.rdd \
.map(lambda x: x[0]) \
.collect())
        # update dataframe with rare values (at or below feature_threshold occurrences) as their own bucketed value
cleanDF = cleanDF.withColumn(feature, functions.when(functions.col(feature).isin(bucket),
functions.col(feature)).otherwise(feature+'-rare'))
        # fill the missing values with the bin name
cleanDF = cleanDF.na.fill(feature + '-NaN', subset=feature)
return cleanDF
# let's time our code to have a sense how long it is taking to pre-process the data:
start = time.time()
binnedDF = df_transform(df_full, features_to_drop, 30)
timeCount = time.time() - start
print('... Full dataset preprocessed in {} seconds.'.format(timeCount))
# droping the cached dataset and caching the new one
df_full.unpersist()
binnedDF.cache()
###Output
Dropped 10651 duplicated rows.
Dropped 3950455 incomplete rows.
... Full dataset preprocessed in 4438.735630989075 seconds.
###Markdown
At this point, it is also beneficial to store our transformed dataset so we have a checkpoint and can restore our work if needed. For that, we will use a parquet file again:
###Code
# Comment/Uncomment only one of these lines to write/read the processed DF from Google Storage:
binnedDF.write.parquet("gs://w261_final-project_team13/fullDF_preprocessed.txt")
# binnedDF = spark.read.parquet("gs://w261_final-project_team13/fullDF_preprocessed.txt")
###Output
_____no_output_____
###Markdown
Finally, we can see our data cleaned, normalized and ready to be ingested by our model.
###Code
# showing the 5 first rows for a sanity check:
binnedDF.select(binnedDF.columns[:6]).show(5)
binnedDF.select(binnedDF.columns[7:12]).show(5)
binnedDF.select(binnedDF.columns[13:19]).show(5)
binnedDF.select(binnedDF.columns[20:26]).show(5)
binnedDF.select(binnedDF.columns[27:33]).show(5)
binnedDF.select(binnedDF.columns[34:]).show(5)
###Output
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
|click| intFeature1| intFeature2| intFeature3| intFeature4| intFeature5|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
| 0|-0.45906996863551086| -0.2705813462114749| 0.03671347858943969|-0.28927765692267365| -0.226532357711775|
| 1| 0.5153776892822588|-0.26025628704547576| 0.8853857938316968| 1.8325994541639543| -0.2557887458143535|
| 1|-0.04144954381360...|-0.29897525891797255|-0.07209066439033686| -0.788542859531292| -0.2335336157200802|
| 0|-0.45906996863551086|-0.00212980789549702|0.004072235695506729|-0.16446135627051903|-0.21393959726697262|
| 0|-0.45906996863551086|-0.29639399412647277| -0.0612102500923592|-0.03964505561836445|-0.05086730949434342|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
only showing top 5 rows
+--------------------+-------------------+--------------------+-----------+-----------+
| intFeature7| intFeature8| intFeature9|catFeature1|catFeature2|
+--------------------+-------------------+--------------------+-----------+-----------+
|-0.17747875481593128| 1.0530694763084874|-0.18725171881122907| 05db9164| 207b2d81|
|-0.14725179636761937| 0.522773266015911| -0.3911113403123852| 5bfa8ab5| 38a947a1|
| 0.4119469349261512|-0.5967409557128607|-0.06946615972167217| ae82ea21| 4f25e98b|
|-0.22281919248839918| 1.1709130785957265|-0.18272150500009227| 05db9164| 58e67aaf|
|-0.11702483791930744|-0.7145845580000998|-0.35033941601215396| 9a89b36c| 38a947a1|
+--------------------+-------------------+--------------------+-----------+-----------+
only showing top 5 rows
+----------------+-----------+-----------+-----------+-----------+-----------+
| catFeature4|catFeature5|catFeature6|catFeature7|catFeature8|catFeature9|
+----------------+-----------+-----------+-----------+-----------+-----------+
| c95cee83| 25c83c98| fe6b92e5| d5141a06| 0b153874| a73ee510|
| 7031bb66| 43b19349| fbad5c96| 1c86e0eb| 5b392875| a73ee510|
|catFeature4-rare| f7109724| 7e0ccccf| dc2b40a4| 0b153874| a73ee510|
| 5d0afaff| 4cf72387| 7e0ccccf| b724bd80| 0b153874| a73ee510|
| 0a8cd7bc| 25c83c98| fbad5c96| f2530a89| 0b153874| a73ee510|
+----------------+-----------+-----------+-----------+-----------+-----------+
only showing top 5 rows
+------------+-----------------+------------+------------+------------+-----------------+
|catFeature11| catFeature12|catFeature13|catFeature14|catFeature15| catFeature16|
+------------+-----------------+------------+------------+------------+-----------------+
| f2a5d7d2|catFeature12-rare| a3b89afc| 1adce6ef| f6114366|catFeature16-rare|
| 755e4a50| e2163351| 5978055e| b28479f6| 46ed0b3c| beef16d6|
| 6685ea28|catFeature12-rare| 7edc047a| 07d13a8f| 5be89da3|catFeature16-rare|
| c389b738| 4454ea3a| d7ccab4e| 051219e6| d83fb924| fb914e97|
| 2181d913| eebc06cb| 1e750733| 07d13a8f| ed217c18| 31cf393e|
+------------+-----------------+------------+------------+------------+-----------------+
only showing top 5 rows
+------------+-----------------+-----------------+-----------------+------------+------------+
|catFeature18| catFeature19| catFeature20| catFeature21|catFeature23|catFeature24|
+------------+-----------------+-----------------+-----------------+------------+------------+
| 966c77d8| 21ddcdc9| b1252a9d|catFeature21-rare| 32c7478e| 9fa3e01a|
| 2c6cb693|catFeature19-rare|catFeature20-rare| 4645d72c| 32c7478e| b258af68|
| bc5a0ff7| 712d530c| b1252a9d|catFeature21-rare| c7dc6720| 97d22f08|
| c21c3e4c| 1b1b9309| a458ea53| 5f361005| 3a171ecb| 3fdb382b|
| 61d51f71|catFeature19-rare|catFeature20-rare| 58d08d44| c7dc6720| 355b6af8|
+------------+-----------------+-----------------+-----------------+------------+------------+
only showing top 5 rows
+-----------------+
| catFeature26|
+-----------------+
| 59b96d68|
|catFeature26-rare|
| 288cca91|
| 49d68486|
|catFeature26-rare|
+-----------------+
only showing top 5 rows
###Markdown
4.2 Model Training

For the model training itself, we decided to go with the PySpark ML library, which is based on PySpark DataFrames and allows users to quickly assemble and configure practical machine learning pipelines. For reference on how the algorithm works, see section 2: Algorithm Explanation.

At a high level, we will take the pre-processed dataset and set up a pipeline, a high-level API that allows us to chain transformations and estimators over our dataset (much like transformations and actions on RDDs). Note that we could add the `df_transform` function to this pipeline and have a pipeline that ingests raw data and outputs predictions. We chose to keep our data preparation outside of the pipeline for clarity; however, if this model eventually goes into production, we would add all the required data transformations. For this model, the steps need to be:
1. Create numerical indexers for our categorical features.
2. One-hot-encode using the indexers.
3. Combine the columns into a single feature vector.
4. Set up a pipeline to automate the previous three transformations.
5. Split our dataset into training and testing dataframes.
6. Train the model using the train data.
7. Fit the test data and output the results.
###Code
# reading the column names (they changed during the pre-processing phase)
intFields = [i for i in binnedDF.columns if 'int' in i]
catFields = [i for i in binnedDF.columns if 'cat' in i]
# Transform categorical values in numerical indexes:
indexers = [StringIndexer(inputCol=c, outputCol='{0}_indexed'.format(c)) for c in catFields]
label = StringIndexer(inputCol='click', outputCol='label')
# implement one-hot-encoding categorical features to reduce dimensionality for model training:
encoder = OneHotEncoderEstimator(
inputCols=[indexer.getOutputCol() for indexer in indexers],
outputCols=["{0}_encoded".format(indexer.getOutputCol()) for indexer in indexers],
dropLast=False)
# combining all the feature columns into a single vector column
assembler = VectorAssembler(inputCols= encoder.getOutputCols() + intFields,
outputCol= "features")
# adding a time counter for checking the training time:
start = time.time()
# defining the pipeline that will index the catFeatures, transform all features into a vector and bring back the labels:
pipeline = Pipeline(stages = indexers + [encoder, assembler, label])
# apply the pipeline to the full dataset
processedDF = pipeline.fit(binnedDF).transform(binnedDF)
dataset = processedDF.select('label','features')
# splitting our ready-to-consume dataset into train and test data:
train, test = dataset.randomSplit([0.7,0.3], seed=2019)
# Create an instance of our model and fit the training data
lr = LogisticRegression(maxIter=10)
lr_model = lr.fit(train)
# Check training time:
trainTime = time.time()
print('... Model trained in {0} seconds'.format(trainTime - start))
# making predictions on test data and check its time counter:
predictions = lr_model.transform(test)
print('... Predictions made in {0} seconds'.format(time.time() - trainTime))
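# note: transform() is lazy, so the time printed above mostly reflects building the prediction
# DataFrame; the actual computation is triggered later, when an action such as show() or
# describe() is called on `predictions`.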
###Output
... Model trained in 528.844703912735 seconds
... Predictions made in 0.017261505126953125 seconds
###Markdown
Our predictions are ready! We can take a look at some of them:
###Code
predictions.select('label','prediction').describe().show()
predictions.select('label', 'prediction', 'probability').show(20)
###Output
+-------+-------------------+-------------------+
|summary| label| prediction|
+-------+-------------------+-------------------+
| count| 12563455| 12563455|
| mean|0.25858157648513086|0.12508000386836265|
| stddev|0.43785518159261594| 0.3308096207960631|
| min| 0.0| 0.0|
| max| 1.0| 1.0|
+-------+-------------------+-------------------+
+-----+----------+--------------------+
|label|prediction| probability|
+-----+----------+--------------------+
| 0.0| 0.0|[0.53955726793911...|
| 0.0| 0.0|[0.58878684031028...|
| 0.0| 0.0|[0.57554087521907...|
| 0.0| 0.0|[0.59028387376181...|
| 0.0| 0.0|[0.67967183250981...|
| 0.0| 0.0|[0.54123039459767...|
| 0.0| 0.0|[0.61253679897914...|
| 0.0| 1.0|[0.32645694369665...|
| 0.0| 1.0|[0.49548674451012...|
| 0.0| 1.0|[0.48974500506562...|
| 0.0| 0.0|[0.62100677433490...|
| 0.0| 0.0|[0.54145141061398...|
| 0.0| 0.0|[0.57947287183355...|
| 0.0| 0.0|[0.77279394683437...|
| 0.0| 1.0|[0.45327212707968...|
| 0.0| 1.0|[0.49298854587815...|
| 0.0| 0.0|[0.90928598552323...|
| 0.0| 0.0|[0.73588893048367...|
| 0.0| 1.0|[0.49392083045970...|
| 0.0| 0.0|[0.87076282337477...|
+-----+----------+--------------------+
only showing top 20 rows
###Markdown
We see that the mean of the predicted column is lower than the mean of the true labels, which indicates a lot of false negatives (despite what our first 20 rows suggest). For this reason, we need an error metric to better check the quality of our model. Based on the original Kaggle competition, we decided to use the `log-loss` function, which unfortunately is not available in PySpark, so we implement it ourselves in the next cell.
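For reference, the quantity computed below is the average log loss over the test set, where $y_i$ is the true label of observation $i$ and $p_i$ is the predicted probability of a click:

$$\mathrm{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\Big]$$

The UDF in the code extracts $p_i$ as the second element of the `probability` vector (the probability of class 1).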
###Code
# extracting true,predicted and probability to compute log loss
selected = predictions.select('label', 'prediction', 'probability')
# evaluation of logit fit with logLoss
# we want to extract the probability of a click, which is the second element (index 1) of the probability column
# so we need a user defined function to manipulate columns of array types
firstelement = functions.udf(lambda v: float(v[1]), FloatType())
# pyspark implementation for logLoss
pyspark_mod = selected.withColumn('logloss', -functions.col('label') * functions.log(firstelement(functions.col('probability'))) - (1.-functions.col('label'))*functions.log(1-firstelement(functions.col('probability'))))
ll = pyspark_mod.agg(functions.mean('logloss').alias('ll')).collect()[0]['ll']
# print the resulting log-loss
print("The log loss from our logistic regression model is: " + str(ll))
###Output
The log loss from our logistic regression model is: 0.47759315586299
|
iPythonNotebooks/PATRIC to FBA.ipynb | ###Markdown
How to create and run a gap-filled FBA from PATRICThe PATRIC (the Pathosystems Resource Integration Center) contains the best collection of well annotated genomes. They also happen to have been annotated by RAST, and so we should be able to use those integrations directly.Here we'll walk through taking a genome from PATRIC, building a model, and running it. PATRIC also has model reconstruction built in, but when I tried it (05/24/16) it was not working.*Update: This still appears to be incompatible with PyFBA as of 12/2019As usual, we'll start by loading some modules that we'll need for our analysis.
###Code
import sys
import os
import copy
import PyFBA
###Output
_____no_output_____
###Markdown
Find a genome and download the annotationsYou need to find your genome in PATRIC and download the annotations.Once you have identified the genome you would like to build the model for, choose _Feature Table_ from the menu bar:Next, choose _Download_ and save as a _text file (.txt)_. That will save a file called _FeatureTable.txt_ to your Downloads location. That file has the following columns:| Genome | Genome ID | Accession | PATRIC ID | RefSeq Locus Tag | Alt Locus Tag | Feature ID | | Annotation | Feature Type | Start | End | Length | Strand | FIGfam ID || PATRIC genus-specific families (PLfams) | PATRIC cross-genus families (PGfams) | Protein ID | AA Length | Gene Symbol | Product |The key columns are PATRIC ID (Column 3) and Product (Column 19) [Column numbers are 0 based!]Now that we know that, we need to convert these feature names into functional roles. The key here is to split on adjoiners, such as ' / ', ' ', and ' @ '.
###Code
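# parse the PATRIC feature table: column 3 holds the PATRIC ID and column 19 the annotated
# product; roles_of_function() splits a product string into its individual functional roles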
assigned_functions = {}
with open(os.path.join(os.environ['HOME'], 'Downloads/FeatureTable.txt'), 'r') as f:
for l in f:
p=l.strip().split("\t")
assigned_functions[p[3]]=PyFBA.parse.roles_of_function(p[19])
roles = set([i[0] for i in [list(j) for j in assigned_functions.values()]])
print("There are {} unique roles in this genome".format(len(roles)))
###Output
There are 3842 unique roles in this genome
###Markdown
Next, we convert those roles to reactions. We start with a dict of roles and reactions, but we only need a list of unique reactions, so we convert the keys to a set.
###Code
roles_to_reactions = PyFBA.filters.roles_to_reactions(roles)
reactions_to_run = set()
for role in roles_to_reactions:
reactions_to_run.update(roles_to_reactions[role])
print("There are {}".format(len(reactions_to_run)) +
" unique reactions associated with this genome".format(len(reactions_to_run)))
reactions_to_run.add('rxn13681')
print("There are {}".format(len(reactions_to_run)) +
" unique reactions associated with this genome".format(len(reactions_to_run)))
###Output
There are 1 unique reactions associated with this genome
###Markdown
Read all the reactions and compounds in our databaseWe read all the reactions, compounds, and enzymes in the [ModelSEEDDatabase](https://github.com/ModelSEED/ModelSEEDDatabase) into three data structures. Each one is a dictionary with a string representation of the object as the key and the PyFBA object as the value.We modify the reactions specifically for Gram negative models (there are also options for Gram positive models, Mycobacterial models, general microbial models, and plant models).
###Code
compounds, reactions, enzymes = \
PyFBA.parse.model_seed.compounds_reactions_enzymes('gramnegative')
###Output
_____no_output_____
###Markdown
Update reactions to run, making sure that all reactions are in the list!There are some reactions that come from functional roles that do not appear in the reactions list. We're working on tracking these down, but for now we just check that all reaction IDs in *reactions_to_run* are in *reactions*, too.
###Code
tempset = set()
for r in reactions_to_run:
if r in reactions:
tempset.add(r)
else:
sys.stderr.write("Reaction ID {} is not in our reactions list. Skipped\n".format(r))
reactions_to_run = tempset
###Output
_____no_output_____
###Markdown
Test whether these reactions grow on ArgonneLB mediaWe can test whether this set of reactions grows on ArgonneLB media. The media is the same one we used above, and you can download the [ArgonneLB.txt](https://raw.githubusercontent.com/linsalrob/PyFBA/master/media/ArgonneLB.txt) and text file and put it in the same directory as this iPython notebook to run it.(Note: we don't need to convert the media components, because the media and compounds come from the same source.)
###Code
media = PyFBA.parse.read_media_file('ArgonneLB.txt')
print("Our media has {} components".format(len(media)))
###Output
Our media has 65 components
###Markdown
Define a biomass equationThe biomass equation is the part that says whether the model will grow! This is a [metabolism.reaction.Reaction](https://github.com/linsalrob/PyFBA/blob/master/PyFBA/metabolism/reaction.py) object.
###Code
biomass_equation = PyFBA.metabolism.biomass_equation('gramnegative')
###Output
_____no_output_____
###Markdown
Run the FBAWith the reactions, compounds, reactions_to_run, media, and biomass model, we can test whether the model grows on this media.
###Code
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
Initial run has a biomass flux value of 0.0 --> Growth: False
###Markdown
Gap-fill the modelSince the model does not grow on ArgonneLB we need to gap-fill it to ensure growth. There are several ways that we can gap-fill, and we will work through them until we get growth.As you will see, we update the *reactions_to_run list* each time, and keep the media and everything else consistent. Then we just need to run the FBA like we have done above and see if we get growth.We also keep a copy of the original *reactions_to_run*, and a list with all the reactions that we are adding, so once we are done we can go back and bisect the reactions that are added.
###Code
added_reactions = []
original_reactions_to_run = copy.copy(reactions_to_run)
reactions_to_run = copy.copy(original_reactions_to_run)
###Output
_____no_output_____
###Markdown
Media import reactionsWe need to make sure that the cell can import everything that is in the media... otherwise it won't be able to grow. Be sure to only do this step if you are certain that the cell can grow on the media you are testing.
###Code
media_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions,
reactions_to_run, media)
added_reactions.append(("media", media_reactions))
reactions_to_run.update(media_reactions)
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
Run has a biomass flux value of 0.0 --> Growth: False
###Markdown
Essential reactionsThere are ~100 reactions that are in every model we have tested, and we construe these to be essential for all models, so we typically add these next!
###Code
essential_reactions = PyFBA.gapfill.suggest_essential_reactions()
added_reactions.append(("essential", essential_reactions))
reactions_to_run.update(essential_reactions)
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
Run has a biomass flux value of 9.92806925858361e-14 --> Growth: False
###Markdown
SubsystemsThe reactions connect us to subsystems (see [Overbeek et al. 2014](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3965101/)), and this test ensures that all the subsystems are complete. We add reactions required to complete the subsystem.
###Code
subsystem_reactions = \
PyFBA.gapfill.suggest_reactions_from_subsystems(reactions,
reactions_to_run,
threshold=0.5)
added_reactions.append(("subsystems", subsystem_reactions))
reactions_to_run.update(subsystem_reactions)
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
pre_orphan=copy.copy(reactions_to_run)
pre_o_added=copy.copy(added_reactions)
print("Pre orphan has {} reactions".format(len(pre_orphan)))
reactions_to_run=copy.copy(pre_orphan)
added_reactions = copy.copy(pre_o_added)
###Output
_____no_output_____
###Markdown
Orphan compoundsOrphan compounds are those compounds which are only associated with one reaction. They are either produced, or trying to be consumed. We need to add reaction(s) that complete the network of those compounds.You can change the maximum number of reactions that a compound is in to be considered an orphan (try increasing it to 2 or 3).
###Code
orphan_reactions = PyFBA.gapfill.suggest_by_compound(compounds, reactions,
reactions_to_run,
max_reactions=1)
added_reactions.append(("orphans", orphan_reactions))
reactions_to_run.update(orphan_reactions)
print("Post orphan has {} reactions".format(len(reactions_to_run)))
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____
###Markdown
Trimming the modelNow that the model has been shown to grow on ArgonneLB media after several gap-fill iterations, we should trim down the reactions to only the required reactions necessary to observe growth.
###Code
reqd_additional = set()
# Begin loop through all gap-filled reactions
while added_reactions:
ori = copy.copy(original_reactions_to_run)
ori.update(reqd_additional)
# Test next set of gap-filled reactions
# Each set is based on a method described above
how, new = added_reactions.pop()
sys.stderr.write("Testing reactions from {}\n".format(how))
# Get all the other gap-filled reactions we need to add
for tple in added_reactions:
ori.update(tple[1])
# Use minimization function to determine the minimal
# set of gap-filled reactions from the current method
new_essential = PyFBA.gapfill.minimize_additional_reactions(ori, new, compounds,
reactions, media,
biomass_equation)
sys.stderr.write("Saved {} reactions from {}\n".format(len(new_essential), how))
for r in new_essential:
sys.stderr.write(r + "\n")
# Record the method used to determine
# how the reaction was gap-filled
for new_r in new_essential:
reactions[new_r].is_gapfilled = True
reactions[new_r].gapfill_method = how
reqd_additional.update(new_essential)
# Combine old and new reactions
all_reactions = original_reactions_to_run.union(reqd_additional)
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, all_reactions,
media, biomass_equation)
print("The biomass reaction has a flux of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____
###Markdown
How to create and run a gap-filled FBA from PATRICThe PATRIC (the Pathosystems Resource Integration Center) contains the best collection of well annotated genomes. They also happen to have been annotated by RAST, and so we should be able to use those integrations directly.Here we'll walk through taking a genome from PATRIC, building a model, and running it. PATRIC also has model reconstruction built in, but when I tried it (05/24/16) it was not working.As usual, we'll start by loading some modules that we'll need for our analysis.
###Code
import sys
import os
import copy
import PyFBA
import re
import inspect
inspect.getfile(PyFBA)
###Output
_____no_output_____
###Markdown
Find a genome and download the annotationsYou need to find your genome in PATRIC and download the annotations.Once you have identified the genome you would like to build the model for, choose _Feature Table_ from the menu bar:Next, choose _Download_ and save as a _text file (.txt)_. That will save a file called _FeatureTable.txt_ to your Downloads location. That file has the following columns:| Genome | Genome ID | Accession | PATRIC ID | RefSeq Locus Tag | Alt Locus Tag | Feature ID | | Annotation | Feature Type | Start | End | Length | Strand | FIGfam ID || PATRIC genus-specific families (PLfams) | PATRIC cross-genus families (PGfams) | Protein ID | AA Length | Gene Symbol | Product | GOThe key columns are PATRIC ID (Column 3) and Product (Column 19) [Column numbers are 0 based!]Now that we know that, we need to convert these feature names into functional roles. The key here is to split on adjoiners, such as ' / ', ' ', and ' @ '.
###Code
assigned_functions = {}
with open(os.path.join('workspace/Citrobacter_sedlakii_genome_features.txt'), 'r') as f:
for l in f:
p=l.strip().split("\t")
assigned_functions[p[3]]=PyFBA.parse.roles_of_function(p[19])
roles = set([i[0] for i in [list(j) for j in assigned_functions.values()]])
print("There are {} unique roles in this genome".format(len(roles)))
###Output
There are 3509 unique roles in this genome
###Markdown
Next, we convert those roles to reactions. We start with a dict of roles and reactions, but we only need a list of unique reactions, so we convert the keys to a set.
###Code
roles_to_reactions = PyFBA.filters.roles_to_reactions(roles, organism_type="Gram_Negative", verbose=False)
###Output
_____no_output_____
###Markdown
If you toggle `verbose=True`, you will see that there are a lot of roles that we skip, even though we have an EC number for them: for whatever reason, the annotation is not quite right. We can check for those too, because our model seed parsed data has EC numbers with reactions.
###Code
# ecr2r = PyFBA.filters.roles_to_ec_reactions(roles, organism_type="Gram_Negative", verbose=False)
ecr2r = set()
###Output
_____no_output_____
###Markdown
We combine `roles_to_reactions` and `ecr2r` and figure out what the unique set of reactions is for our genome.
###Code
roles_to_reactions.update(ecr2r)
reactions_to_run = set()
for role in roles_to_reactions:
reactions_to_run.update(roles_to_reactions[role])
print("There are {}".format(len(reactions_to_run)) +
" unique reactions associated with this genome".format(len(reactions_to_run)))
###Output
There are 1065 unique reactions associated with this genome
###Markdown
Read all the reactions and compounds in our databaseWe read all the reactions, compounds, and enzymes in the [ModelSEEDDatabase](https://github.com/ModelSEED/ModelSEEDDatabase) into three data structures. Note, the first time you call this it is a bit slow as it has to parse the files, but if we've parsed them once, we don't need to do it again!We modify the reactions specifically for Gram negative models (there are also options for Gram positive models, Mycobacterial models, general microbial models, and plant models).
###Code
compounds, reactions, enzymes = \
PyFBA.parse.model_seed.compounds_reactions_enzymes('gramnegative')
print(f"There are {len(compounds):,} compounds, {len(reactions):,} reactions, and {len(enzymes):,} enzymes in total")
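# sanity check: print any compounds that are already flagged as uptake/secretion compounds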
for r in reactions:
for c in reactions[r].all_compounds():
if c.uptake_secretion:
print(f"US: {c}")
###Output
_____no_output_____
###Markdown
Update reactions to run, making sure that all reactions are in the list!There are some reactions that come from functional roles that do not appear in the reactions list. We're working on tracking these down, but for now we just check that all reaction IDs in *reactions_to_run* are in *reactions*, too.
###Code
tempset = set()
for r in reactions_to_run:
if r in reactions:
tempset.add(r)
else:
sys.stderr.write("Reaction ID {} is not in our reactions list. Skipped\n".format(r))
reactions_to_run = tempset
###Output
_____no_output_____
###Markdown
Test whether these reactions grow on ArgonneLB mediaWe can test whether this set of reactions grows on ArgonneLB media. The media is the same one we used above, and you can download the [ArgonneLB.txt](https://raw.githubusercontent.com/linsalrob/PyFBA/master/media/ArgonneLB.txt) and text file and put it in the same directory as this iPython notebook to run it.(Note: we don't need to convert the media components, because the media and compounds come from the same source.)
###Code
media = PyFBA.parse.read_media_file("/home/redwards/test_media/ArgonneLB.txt")
print("Our media has {} components".format(len(media)))
###Output
Our media has 65 components
###Markdown
Define a biomass equationThe biomass equation is the part that says whether the model will grow! This is a [metabolism.reaction.Reaction](https://github.com/linsalrob/PyFBA/blob/master/PyFBA/metabolism/reaction.py) object.
###Code
biomass_equation = PyFBA.metabolism.biomass_equation()
biomass_equation.equation
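# write the current reactions_to_run to a text file so the set can be inspected offline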
with open('rbad.txt', 'w') as out:
for r in reactions_to_run:
out.write(f"{r}\n")
###Output
_____no_output_____
###Markdown
Run the FBAWith the reactions, compounds, reactions_to_run, media, and biomass model, we can test whether the model grows on this media.
###Code
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
print(f"There are {len(reactions_to_run)} reactions to run")
upsr = 0
for r in reactions_to_run:
if r.startswith('upsr'):
upsr += 1
print(f"There are {upsr} uptake secretion reactions in reactions_to_run")
upsr = 0
for r in reactions:
if r.startswith('upsr'):
upsr += 1
print(f"There are {upsr} uptake secretion reactions in reactions")
###Output
There are 0 uptake secretion reactions in reactions_to_run
There are 165 uptake secretion reactions in reactions
###Markdown
Will gap filling work?These are the reactions from the C. sedlakii SBML file, and so if we add these, we should get growth!
###Code
sbml_addnl = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn03061', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn02160', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05958', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 'rxn10094', 'rxn02286', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 
'rxn04045', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn05202', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 'rxn00364', 'rxn03514', 'rxn04048', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn00171', 'rxn01000', 'rxn00675', 'rxn00175', 'rxn00986', 'rxn03932', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn03136', 'rxn00800', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn09180', 'rxn00670', 'rxn00173', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn01403', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn00212', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00006', 'rxn00834', 'rxn05293', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn05937', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00791', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'}
r2r_plussbml = copy.copy(reactions_to_run)
print(f"Before adding sbml reactions there were {len(r2r_plussbml)}")
r2r_plussbml.update(sbml_addnl)
print(f"After adding sbml reactions there were {len(r2r_plussbml)}")
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,
media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
print(f"Before adding upsr reactions there were {len(r2r_plussbml)} reactions")
for r in reactions:
if r.startswith('upsr'):
r2r_plussbml.update({r})
print(f"After adding upsr reactions there were {len(r2r_plussbml)} reactions")
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,
media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
# seems like we need EX_cpd00034
upsr = 0
for r in reactions_to_run:
if r.startswith('EX'):
upsr += 1
print(f"There are {upsr} EX reactions in reactions_to_run")
upsr = 0
for r in reactions:
if r.startswith('EX'):
upsr += 1
print(f"There are {upsr} EX reactions in reactions")
biomass_equation = PyFBA.metabolism.biomass_equation('standard')
biomass_equation.equation
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml,
media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
uptake_secretion_reactions
all_compounds = compounds
# Filter for compounds that are boundary compounds
filtered_compounds = set()
for c in all_compounds:
if not compounds[c].uptake_secretion:
filtered_compounds.add(c)
print(f"There are {len(all_compounds)} total compounds and {len(filtered_compounds)} filtered compounds")
without_ex = set()
with open('rwex.txt', 'r') as fin:
for l in fin:
l = l.strip()
without_ex.add(l)
without_ex
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, without_ex,
media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
len(without_ex)
len(reactions_to_run)
###Output
_____no_output_____
###Markdown
it is the biomass model that is the problemLet's take the biomass model from the SBML and see if this works.
###Code
sbml_equation = '(0.00778132482043096) cpd00063: Ca2 (location: c) + (0.352889948968272) cpd00156: L_Valine (location: e) + (0.00778132482043096) cpd00030: Mn2 (location: e) + (0.00778132482043096) cpd00205: K (location: c) + (0.428732289454499) cpd00035: L_Alanine (location: e) + (0.128039715997337) cpd00060: L_Methionine (location: e) + (0.15480760087483) cpd00066: L_Phenylalanine (location: c) + (0.00778132482043096) cpd00017: S_Adenosyl_L_methionine (location: c) + (0.00778132482043096) cpd00010: CoA (location: c) + (0.0609084652443221) cpd15665: Peptidoglycan_polymer_n_subunits (location: c) + (0.0841036156544863) cpd00052: CTP (location: c) + (0.00778132482043096) cpd10516: fe3 (location: e) + (0.01468498342018) cpd00357: TTP (location: c) + (0.00778132482043096) cpd00099: Cl_ (location: e) + (0.01468498342018) cpd00356: dCTP (location: c) + (0.00778132482043096) cpd10515: Fe2 (location: e) + (0.00778132482043096) cpd00254: Mg (location: c) + (0.242249358141304) cpd00322: L_Isoleucine (location: e) + (0.00778132482043096) cpd00058: Cu2 (location: e) + (0.00778132482043096) cpd00149: Co2 (location: c) + (0.201205267995816) cpd00041: L_Aspartate (location: e) + (1) cpd17043: RNA_transcription (location: c) + (0.219496655995436) cpd00023: L_Glutamate (location: e) + (0.219496655995436) cpd00053: L_Glutamine (location: e) + (0.376088782528765) cpd00107: L_Leucine (location: e) + (0.00778132482043096) cpd00220: Riboflavin (location: e) + (0.179790960093822) cpd00054: L_Serine (location: e) + (0.0472899299502361) cpd00065: L_Tryptophan (location: e) + (0.0609084652443221) cpd02229: Bactoprenyl_diphosphate (location: c) + (0.00778132482043096) cpd11493: ACP (location: c) + (1) cpd17041: Protein_biosynthesis (location: c) + (0.184698405654696) cpd00129: L_Proline (location: e) + (0.135406821203723) cpd00038: GTP (location: c) + (0.01468498342018) cpd00241: dGTP (location: c) + (1) cpd17042: DNA_replication (location: c) + (0.211466290532188) cpd00161: L_Threonine (location: e) + (40.1101757365074) cpd00002: ATP (location: c) + (0.00778132482043096) cpd00016: Pyridoxal_phosphate (location: c) + (0.00778132482043096) cpd00048: Sulfate (location: e) + (0.00778132482043096) cpd00003: NAD (location: c) + (0.01468498342018) cpd00115: dATP (location: c) + (0.115101904973216) cpd00069: L_Tyrosine (location: e) + (0.00778132482043096) cpd00015: FAD (location: c) + (0.201205267995816) cpd00132: L_Asparagine (location: e) + (0.00778132482043096) cpd00006: NADP (location: c) + (35.5386858537513) cpd00001: H2O (location: e) + (0.0762884719008526) cpd00084: L_Cysteine (location: c) + (0.0794113918032267) cpd00119: L_Histidine (location: e) + (0.285970236774541) cpd00039: L_Lysine (location: e) + (0.0908319049068452) cpd00062: UTP (location: c) + (0.00778132482043096) cpd00034: Zn2 (location: e) + (0.247156803702178) cpd00051: L_Arginine (location: e) + (0.510820469745475) cpd00033: Glycine (location: e) > (40) cpd00008: ADP (location: c) + (39.9922186751796) cpd00009: Phosphate (location: e) + (0.00778132482043096) cpd12370: apo_ACP (location: c) + (1) cpd11416: Biomass (location: c) + (40) cpd00067: H (location: e) + (0.0609084652443221) cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c) + (0.405833094852252) cpd00012: PPi (location: e)'
sbml_left_compounds = {'cpd00066: L_Phenylalanine (location: c)' : 0.15480760087483, 'cpd00016: Pyridoxal_phosphate (location: c)' : 0.00778132482043096, 'cpd00132: L_Asparagine (location: e)' : 0.201205267995816, 'cpd00156: L_Valine (location: e)' : 0.352889948968272, 'cpd00099: Cl_ (location: e)' : 0.00778132482043096, 'cpd00038: GTP (location: c)' : 0.135406821203723, 'cpd00003: NAD (location: c)' : 0.00778132482043096, 'cpd17041: Protein_biosynthesis (location: c)' : 1.0, 'cpd00033: Glycine (location: e)' : 0.510820469745475, 'cpd00322: L_Isoleucine (location: e)' : 0.242249358141304, 'cpd00254: Mg (location: c)' : 0.00778132482043096, 'cpd17043: RNA_transcription (location: c)' : 1.0, 'cpd00048: Sulfate (location: e)' : 0.00778132482043096, 'cpd10515: Fe2 (location: e)' : 0.00778132482043096, 'cpd02229: Bactoprenyl_diphosphate (location: c)' : 0.0609084652443221, 'cpd11493: ACP (location: c)' : 0.00778132482043096, 'cpd00161: L_Threonine (location: e)' : 0.211466290532188, 'cpd00006: NADP (location: c)' : 0.00778132482043096, 'cpd00060: L_Methionine (location: e)' : 0.128039715997337, 'cpd00119: L_Histidine (location: e)' : 0.0794113918032267, 'cpd00052: CTP (location: c)' : 0.0841036156544863, 'cpd00051: L_Arginine (location: e)' : 0.247156803702178, 'cpd15665: Peptidoglycan_polymer_n_subunits (location: c)' : 0.0609084652443221, 'cpd00017: S_Adenosyl_L_methionine (location: c)' : 0.00778132482043096, 'cpd00030: Mn2 (location: e)' : 0.00778132482043096, 'cpd10516: fe3 (location: e)' : 0.00778132482043096, 'cpd00065: L_Tryptophan (location: e)' : 0.0472899299502361, 'cpd00084: L_Cysteine (location: c)' : 0.0762884719008526, 'cpd00023: L_Glutamate (location: e)' : 0.219496655995436, 'cpd17042: DNA_replication (location: c)' : 1.0, 'cpd00356: dCTP (location: c)' : 0.01468498342018, 'cpd00035: L_Alanine (location: e)' : 0.428732289454499, 'cpd00069: L_Tyrosine (location: e)' : 0.115101904973216, 'cpd00220: Riboflavin (location: e)' : 0.00778132482043096, 'cpd00129: L_Proline (location: e)' : 0.184698405654696, 'cpd00357: TTP (location: c)' : 0.01468498342018, 'cpd00205: K (location: c)' : 0.00778132482043096, 'cpd00149: Co2 (location: c)' : 0.00778132482043096, 'cpd00063: Ca2 (location: c)' : 0.00778132482043096, 'cpd00054: L_Serine (location: e)' : 0.179790960093822, 'cpd00001: H2O (location: e)' : 35.5386858537513, 'cpd00010: CoA (location: c)' : 0.00778132482043096, 'cpd00015: FAD (location: c)' : 0.00778132482043096, 'cpd00062: UTP (location: c)' : 0.0908319049068452, 'cpd00107: L_Leucine (location: e)' : 0.376088782528765, 'cpd00241: dGTP (location: c)' : 0.01468498342018, 'cpd00053: L_Glutamine (location: e)' : 0.219496655995436, 'cpd00039: L_Lysine (location: e)' : 0.285970236774541, 'cpd00034: Zn2 (location: e)' : 0.00778132482043096, 'cpd00058: Cu2 (location: e)' : 0.00778132482043096, 'cpd00002: ATP (location: c)' : 40.1101757365074, 'cpd00041: L_Aspartate (location: e)' : 0.201205267995816, 'cpd00115: dATP (location: c)' : 0.01468498342018}
sbml_right_compounds = {'cpd00067: H (location: e)' : 40.0, 'cpd00012: PPi (location: e)' : 0.405833094852252, 'cpd00008: ADP (location: c)' : 40.0, 'cpd11416: Biomass (location: c)' : 1.0, 'cpd12370: apo_ACP (location: c)' : 0.00778132482043096, 'cpd00009: Phosphate (location: e)' : 39.9922186751796, 'cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c)' : 0.0609084652443221}
sbml_biomass = PyFBA.metabolism.Reaction('sbml_biomass', 'sbml_biomass')
sbml_biomass.equation = sbml_equation
parsecomp = re.compile(r'^(cpd\d+): (.*?) \(location: (.)\)')
for c in sbml_left_compounds:
m = parsecomp.match(c)
if not m:
sys.stderr.write(f"Can't parse {c}\n")
if m.group(1) in compounds:
if False and compounds[m.group(1)] != m.group(2):
sys.stderr.write(f"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\n")
newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))
sbml_biomass.add_left_compounds({newcomp})
sbml_biomass.set_left_compound_abundance(newcomp, sbml_left_compounds[c])
else:
print(f"{m.group(1)} not found")
for c in sbml_right_compounds:
m = parsecomp.match(c)
if not m:
sys.stderr.write(f"Can't parse {c}\n")
if m.group(1) in compounds:
if True and compounds[m.group(1)] != m.group(2):
sys.stderr.write(f"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\n")
newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))
sbml_biomass.add_right_compounds({newcomp})
sbml_biomass.set_right_compound_abundance(newcomp, sbml_right_compounds[c])
else:
print(f"{m.group(1)} not found")
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, sbml_biomass, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
Before running FBA there are 44015 reactions
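###Markdown
A quick illustration (a sketch, not a cell from the original run) of what the compound parser above extracts from each biomass entry: the compound id, the name, and the single-letter location.
###Code
example = 'cpd00001: H2O (location: e)'
m = parsecomp.match(example)
print(m.groups())  # expected: ('cpd00001', 'H2O', 'e')
###Output
_____no_output_____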
###Markdown
Add the missing reactions
###Code
all_reactions = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn01302', 'rxn01301', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02762', 'rxn02521', 'rxn02522', 'rxn03483', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 'rxn10094', 'rxn02286', 
'rxn02474', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn08179', 'rxn08178', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn04413', 'rxn03537', 'rxn03536', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 'rxn00364', 'rxn03514', 'rxn04048', 'rxn00544', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn05221', 'rxn00675', 'rxn00175', 'rxn00986', 'rxn01507', 'rxn02400', 'rxn01670', 'rxn00363', 'rxn00708', 'rxn01218', 'rxn01521', 'rxn01445', 'rxn00913', 'rxn01145', 'rxn00132', 'rxn01961', 'rxn00831', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn02449', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn01093', 'rxn09180', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00205', 'rxn00006', 'rxn02473', 'rxn00834', 'rxn05293', 'rxn00105', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn01404', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn05197', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn01538', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'}
print(f"Before updating there are {len(reactions_to_run)} reactions")
r2ra = copy.copy(reactions_to_run)
r2ra.update(all_reactions)
print(f"After updating there are {len(r2ra)} reactions")
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, sbml_biomass, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
new_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions,
reactions_to_run, media, verbose=False)
print(f"There are {len(new_reactions)} new reactions to add")
transrct = set()
for r in new_reactions:
if reactions[r].is_transport:
transrct.add(r)
print(f"There are {len(transrct)} new transport reactions")
reactions_to_run.update(transrct)
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
print(f"There are {len(reactions_to_run)} reactions to run")
###Output
_____no_output_____
###Markdown
Gap-fill the modelSince the model does not grow on ArgonneLB we need to gap-fill it to ensure growth. There are several ways that we can gap-fill, and we will work through them until we get growth.As you will see, we update the *reactions_to_run* list each time, and keep the media and everything else consistent. Then we just need to run the FBA like we have done above and see if we get growth.We also keep a copy of the original *reactions_to_run*, and a list with all the reactions that we are adding, so once we are done we can go back and bisect the reactions that are added.
###Code
added_reactions = []
original_reactions_to_run = copy.copy(reactions_to_run)
###Output
_____no_output_____
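###Markdown
Each gap-filling section below follows the same pattern; here is a compact sketch of that per-step pattern (using the same variable and function names as the cells that follow, not a cell from the original notebook):
###Code
# sketch of the generic gap-fill step used repeatedly below:
# suggest candidate reactions, remember them for later bisection,
# add them to reactions_to_run, and re-run the FBA
def gapfill_step(update_type, new_reactions):
    added_reactions.append((update_type, new_reactions))
    reactions_to_run.update(new_reactions)
    return PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
                             media, biomass_equation)
###Output
_____no_output_____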
###Markdown
Media import reactionsWe need to make sure that the cell can import everything that is in the media... otherwise it won't be able to grow. Be sure to only do this step if you are certain that the cell can grow on the media you are testing.
###Code
update_type = 'media'
new_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions,
reactions_to_run, media, verbose=True)
added_reactions.append((update_type, new_reactions))
print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
reactions_to_run.update(new_reactions)
print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
for r in reactions:
if reactions[r].is_transport:
print(r)
for r in reactions:
for c in reactions[r].left_compounds:
if c.location == 'e':
if not reactions[r].is_transport:
print(f"Check {r}")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____
###Markdown
Essential reactionsThere are ~100 reactions that are in every model we have tested, and we construe these to be essential for all models, so we typically add these next!
###Code
update_type = 'essential'
new_reactions = PyFBA.gapfill.suggest_essential_reactions()
added_reactions.append((update_type, new_reactions))
print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
reactions_to_run.update(new_reactions)
print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____
###Markdown
SubsystemsThe reactions connect us to subsystems (see [Overbeek et al. 2014](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3965101/)), and this test ensures that all the subsystems are complete. We add reactions required to complete the subsystem.
###Code
update_type = 'subsystems'
new_reactions = \
PyFBA.gapfill.suggest_reactions_from_subsystems(reactions,
reactions_to_run,
threshold=0.5)
added_reactions.append((update_type, new_reactions))
print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
reactions_to_run.update(new_reactions)
print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
pre_orphan=copy.copy(reactions_to_run)
pre_o_added=copy.copy(added_reactions)
print("Pre orphan has {} reactions".format(len(pre_orphan)))
###Output
_____no_output_____
###Markdown
Orphan compoundsOrphan compounds are those compounds which are only associated with one reaction: they are either only produced or only consumed. We need to add reaction(s) that complete the network of those compounds.You can change the maximum number of reactions that a compound is in to be considered an orphan (try increasing it to 2 or 3; a sketch of that follows the next cell).
###Code
update_type = 'orphan compounds'
new_reactions = PyFBA.gapfill.suggest_by_compound(compounds, reactions,
reactions_to_run,
max_reactions=1)
added_reactions.append((update_type, new_reactions))
print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
reactions_to_run.update(new_reactions)
print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run,
media, biomass_equation)
print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____
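###Markdown
As noted above, the orphan threshold can be relaxed. A sketch (not run here) of the same call with max_reactions increased to 2:
###Code
# sketch: treat compounds that appear in at most 2 reactions as orphans
more_orphans = PyFBA.gapfill.suggest_by_compound(compounds, reactions,
                                                 reactions_to_run,
                                                 max_reactions=2)
print(f"Relaxing the threshold suggests {len(more_orphans)} reactions")
###Output
_____no_output_____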
###Markdown
Trimming the modelNow that the model has been shown to grow on ArgonneLB media after several gap-fill iterations, we should trim the model down to only those reactions necessary to observe growth.
###Code
reqd_additional = set()
# Begin loop through all gap-filled reactions
while added_reactions:
ori = copy.copy(original_reactions_to_run)
ori.update(reqd_additional)
# Test next set of gap-filled reactions
# Each set is based on a method described above
how, new = added_reactions.pop()
sys.stderr.write("Testing reactions from {}\n".format(how))
# Get all the other gap-filled reactions we need to add
for tple in added_reactions:
ori.update(tple[1])
# Use minimization function to determine the minimal
# set of gap-filled reactions from the current method
new_essential = PyFBA.gapfill.minimize_additional_reactions(ori, new, compounds,
reactions, media,
biomass_equation)
sys.stderr.write("Saved {} reactions from {}\n".format(len(new_essential), how))
for r in new_essential:
sys.stderr.write(r + "\n")
# Record the method used to determine
# how the reaction was gap-filled
for new_r in new_essential:
reactions[new_r].is_gapfilled = True
reactions[new_r].gapfill_method = how
reqd_additional.update(new_essential)
# Combine old and new reactions
all_reactions = original_reactions_to_run.union(reqd_additional)
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, all_reactions,
media, biomass_equation)
print("The biomass reaction has a flux of {} --> Growth: {}".format(value, growth))
###Output
_____no_output_____ |
Measurements_text_generation_using_an_lstm_in_keras.ipynb | ###Markdown
Text Generation using an LSTM in KerasIn this kernel we will go over how to let a network create text in the style of Sir Arthur Conan Doyle. This kernel is heavily based on the [official keras text generation example](https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py). I also made [a video](https://youtu.be/QtQt1CUEE3w) on text generation using an LSTM network.Content:1. [Introduction](1)2. [Loading in data](2)3. [Preprocessing](3) 3.1 [Map chars to integers](3.1) 3.2 [Split up into subsequences](3.2)4. [Building model](4) 4.1 [Helper Functions](4.1) 4.2 [Defining callbacks and training the model](4.2)5. [Generate new text](5) 6. [Conclusion](6) 1. IntroductionBecause the order of the characters in a text is important, we will use a recurrent neural network, which can remember its previous inputs.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import io
from sklearn import metrics
from sklearn.metrics import mean_squared_error
###Output
Using TensorFlow backend.
###Markdown
2. Loading in data
###Code
text = open('LSTMemerging-trojan.txt', 'r').read().lower()
print('text length', len(text))
print(text[:300])
###Output
alice's adventures in wonderland
lewis carroll
the millennium fulcrum edition 3.0
chapter i. down the rabbit-hole
alice was beginning to get very tired of sitting by her sister on the
bank, and of having nothing to do: once or twice she had peeped into the
book her sister was reading, but it
###Markdown
3. Preprocessing 3.1 Map chars to integersBecause we will be training on a character level we need to map each unique character to a number.We are going to create two dicts: one from character to integer and one to transform back to character
###Code
chars = sorted(list(set(text)))
print('total chars: ', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
###Output
_____no_output_____
###Markdown
3.2 Split up into subsequencesCreates an array of sentence data with the length maxlen as well as an array with the next character.
###Code
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print(sentences[:3])
print(next_chars[:3])
# Print length
print(len(sentences))
###Output
48123
###Markdown
We need to reshape our data into a format we can pass to the Keras LSTM. The shape looks like [samples, time steps, features]
###Code
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
print(x[:3])
print(y[:3])
###Output
[[[ True False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
[[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False True False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
[[False False False ... False False False]
[False False False ... False False False]
[False True False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]]
[[False True False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False]
[False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
True False False False False False False False False]
[False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False True False False False False False
False False False False False False False False False]]
###Markdown
4. Building modelFor this kernel I will use a really small LSTM network but if you want to get better results feel free to replace it with a bigger network.
###Code
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy','mae','mse'])
###Output
WARNING:tensorflow:From C:\Users\Administrator\Anaconda3\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From C:\Users\Administrator\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3295: The name tf.log is deprecated. Please use tf.math.log instead.
###Markdown
4.1 Helper FunctionsI got this function from the lstm_text_generation example from keras. [https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py](https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py) Samples an index from a probability array with some temperature.
###Code
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
###Output
_____no_output_____
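###Markdown
To see what the temperature does (a small illustration that is not part of the original kernel): dividing the log-probabilities by a low temperature sharpens the distribution towards the most likely character, while a high temperature flattens it.
###Code
# toy distribution over 3 characters, rescaled at the temperatures used below
toy_preds = np.array([0.6, 0.3, 0.1])
for temp in [0.2, 0.5, 1.0, 1.2]:
    scaled = np.exp(np.log(toy_preds) / temp)
    scaled /= scaled.sum()
    print(temp, scaled.round(3))
###Output
_____no_output_____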
###Markdown
Callback function to print predicted text generated by our LSTM. It prints generated text with 4 different temperatures [0.2, 0.5, 1.0, 1.2]. 0.2 will generate text with more ordinary words. 1.2 will generate wilder guesses.
###Code
def on_epoch_end(epoch, logs):
# Function invoked at end of each epoch. Prints generated text.
print()
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
###Output
_____no_output_____
###Markdown
4.2 Defining callbacks and training the model
###Code
from keras.callbacks import ModelCheckpoint
filepath = "weights.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss',
verbose=1, save_best_only=True,
mode='min')
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.2,
patience=1, min_lr=0.001)
callbacks = [print_callback, checkpoint, reduce_lr]
model.fit(x, y, batch_size=256, epochs=1, callbacks=callbacks)
testScore = model.evaluate(x,y, verbose=0)
print('\nTest Scores: loss={}, acc={}, mae={}, mse={}'.format(*testScore))  # model.evaluate returns [loss, acc, mae, mse]
###Output
Test Scores: acc=2.0600175274515076, mae=0.4020946325097816, mse=0.033119961810506984
###Markdown
5. Generate new textWe can generate text using the same approach as in the on_epoch_end helper function created by Keras.
###Code
def generate_text(length, diversity):
# Get random starting text
start_index = random.randint(0, len(text) - maxlen - 1)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
for i in range(length):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
return generated
#testScore = model.evaluate(preds, diversity, verbose=0)
#print ('\nTest Scores: mse={}, mae={}'.format(*testScore))
#testScore = model.evaluate(preds, verbose=0)
#print ('\nTest Scores: acc={}'.format(*testScore))
#print("Accuracy:",metrics.accuracy_score(y_test, predictions))
print(generate_text(5000, 0.2))
#testScore = model.evaluate(preds, diversity, verbose=0)
#print ('\nTest Scores: mse={}, mae={}'.format(*testScore))
# print("##############################################################")
# print("Accuracy:",metrics.accuracy_score(y_test, predictions))
# print("Kappa Stats:",metrics.cohen_kappa_score(y_test, predictions))
# print("Precision:",metrics.precision_score(y_test, predictions))
# print("Recall:",metrics.recall_score(y_test, predictions))
# print("Mean Absolute Error:",metrics.mean_absolute_error(y, preds))
# print("Mean Squared Error:",metrics.mean_squared_error(diversity, preds))
# print("F-Measure:",metrics.recall_score(y_test, predictions))
# print("##############################################################")
###Output
_____no_output_____
###Markdown
6. ConclusionAfter 5 epochs our LSTM performed an ok job and I'm more than satisfied with the result.Here are a few things you can change to get better results:1. Add more LSTM Layers.2. Use more LSTM Cells.3. Train for more than 5 epochs. (25+)4. Add dropout Layer.5. Play around with the batch-size. A sketch of a larger network along these lines is included after the next cell.
###Code
###Output
_____no_output_____
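###Markdown
A sketch of a larger network along the lines suggested in the conclusion (stacked LSTMs plus dropout); the layer sizes here are illustrative assumptions, not values from the original kernel:
###Code
bigger_model = Sequential()
bigger_model.add(LSTM(256, return_sequences=True, input_shape=(maxlen, len(chars))))
bigger_model.add(Dropout(0.2))
bigger_model.add(LSTM(256))
bigger_model.add(Dropout(0.2))
bigger_model.add(Dense(len(chars)))
bigger_model.add(Activation('softmax'))
bigger_model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.01))
###Output
_____no_output_____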
###Markdown
Text Generation using an LSTM in KerasIn this kernel we will go over how to let a network create text in the style of Sir Arthur Conan Doyle. This kernel is heavily based on the [official keras text generation example](https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py). I also made [a video](https://youtu.be/QtQt1CUEE3w) on text generation using an LSTM network.Content:1. [Introduction](1)2. [Loading in data](2)3. [Preprocessing](3) 3.1 [Map chars to integers](3.1) 3.2 [Split up into subsequences](3.2)4. [Building model](4) 4.1 [Helper Functions](4.1) 4.2 [Defining callbacks and training the model](4.2)5. [Generate new text](5) 6. [Conclusion](6) 1. IntroductionBecause the order of the characters in a text is important, we will use a recurrent neural network, which can remember its previous inputs.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import io
from sklearn import metrics
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
2. Loading in data
###Code
text = open('LSTMemerging-trojan.txt', 'r').read().lower()
print('text length', len(text))
print(text[:400])
###Output
alert tcp $home_net any -> $external_net 25 (msg:"et trojan suspicious smtp handshake outbound"; flow:established,to_server; content:"001 ruthere"; depth:11; metadata: former_category malware; reference:url,doc.emergingthreats.net/bin/view/main/2008562; classtype:unknown; sid:2008562; rev:3; metadata:created_at 2010_07_30, updated_at 2010_07_30;)
alert tcp $external_net 25 -> $home_net any (msg:
###Markdown
3. Preprocessing 3.1 Map chars to integersBecause we will be training on a character level we need to map each unique character to a number.We are going to create two dicts: one from character to integer and one to transform back to character
###Code
chars = sorted(list(set(text)))
print('total chars: ', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
###Output
_____no_output_____
###Markdown
3.2 Split up into subsequencesCreates an array of sentence data with the length maxlen as well as an array with the next character.
###Code
maxlen = 200
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print(sentences[:3])
print(next_chars[:3])
# Print length
print(len(sentences))
###Output
1490667
###Markdown
We need to reshape our data into a format we can pass to the Keras LSTM. The shape looks like [samples, time steps, features]
###Code
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
print(x[:3])
print(y[:3])
###Output
[[[ True False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
[[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
[[False True False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]]
[[False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False True False False False False False False False
False False False False False False False False False]
[False False False False False False False False False False False False
False False False False False False False False False False False False
False False False True False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False]
[False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False True False False False False False False False False False
False False False False False False False False False]]
###Markdown
4. Building modelFor this kernel I will use a really small LSTM network but if you want to get better results feel free to replace it with a bigger network.
###Code
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy','mae','mse'])
###Output
_____no_output_____
###Markdown
4.1 Helper FunctionsI got this function from the lstm_text_generation example from keras. [https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py](https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py) Samples an index from a probability array with some temperature.
###Code
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
###Output
_____no_output_____
###Markdown
Callback function to print predicted text generated by our LSTM. It prints generated text with 4 different temperatures [0.2, 0.5, 1.0, 1.2]. 0.2 will generate text with more ordinary words. 1.2 will generate wilder guesses.
###Code
def on_epoch_end(epoch, logs):
# Function invoked at end of each epoch. Prints generated text.
print()
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
###Output
_____no_output_____
###Markdown
4.2 Defining callbacks and training the model
###Code
from keras.callbacks import ModelCheckpoint
filepath = "weights.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss',
verbose=1, save_best_only=True,
mode='min')
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.2,
patience=1, min_lr=0.001)
callbacks = [print_callback, checkpoint, reduce_lr]
model.fit(x, y, batch_size=256, epochs=1, callbacks=callbacks)
testScore = model.evaluate(x,y, verbose=0)
print('\nTest Scores: loss={}, acc={}, mae={}, mse={}'.format(*testScore))  # model.evaluate returns [loss, acc, mae, mse]
###Output
Test Scores: acc=2.0600175274515076, mae=0.4020946325097816, mse=0.033119961810506984
###Markdown
5. Generate new textWe can generate text using the same approach as in the on_epoch_end helper function created by Keras.
###Code
def generate_text(length, diversity):
# Get random starting text
start_index = random.randint(0, len(text) - maxlen - 1)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
for i in range(length):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
return generated
#testScore = model.evaluate(preds, diversity, verbose=0)
#print ('\nTest Scores: mse={}, mae={}'.format(*testScore))
#testScore = model.evaluate(preds, verbose=0)
#print ('\nTest Scores: acc={}'.format(*testScore))
#print("Accuracy:",metrics.accuracy_score(y_test, predictions))
print(generate_text(5000, 0.2))
#testScore = model.evaluate(preds, diversity, verbose=0)
#print ('\nTest Scores: mse={}, mae={}'.format(*testScore))
# print("##############################################################")
# print("Accuracy:",metrics.accuracy_score(y_test, predictions))
# print("Kappa Stats:",metrics.cohen_kappa_score(y_test, predictions))
# print("Precision:",metrics.precision_score(y_test, predictions))
# print("Recall:",metrics.recall_score(y_test, predictions))
# print("Mean Absolute Error:",metrics.mean_absolute_error(y, preds))
# print("Mean Squared Error:",metrics.mean_squared_error(diversity, preds))
# print("F-Measure:",metrics.recall_score(y_test, predictions))
# print("##############################################################")
###Output
_____no_output_____
###Markdown
6. ConclusionAfter 5 epochs our LSTM performed an ok job and I'm more than satisfied with the result.Here are a few things you can change to get better results:1. Add more LSTM Layers.2. Use more LSTM Cells.3. Train for more than 5 epochs. (25+)4. Add dropout Layer.5. Play around with the batch-size
###Code
###Output
_____no_output_____ |
notebooks/Counts.ipynb | ###Markdown
How many distinct individuals per family?
###Code
members_per_family = family_members.\
dropna(subset=["video"], how='any').\
groupby('fid').count().\
rename({'surname': 'num_members'}, axis=1)['num_members'].\
to_frame()
members_per_family.to_csv('./members_per_family.csv')
members_per_family.head()
###Output
_____no_output_____
###Markdown
How many unique videos per family?
###Code
vids_per_family = pd.DataFrame(columns=['fid', 'surname', 'num_videos']).set_index('fid')
for (fid, surname), family in family_members.dropna(subset=['video'], how='any').groupby(['fid', 'surname']):
videos_for_family = {url for url in
itertools.chain(family['video'].dropna().values,
family['video2'].dropna().values,
family['video3'].dropna().values,
)
}
vids_per_family.loc[fid, 'surname'] = surname
vids_per_family.loc[fid, 'num_videos'] = len(videos_for_family)
vids_per_family.to_csv('./videos_per_family.csv')
vids_per_family.head()
###Output
_____no_output_____
###Markdown
How many pairs are possible?
###Code
kin_pairs = []
for (fid, surname), family in family_members.dropna(subset=['video'], how='any').groupby(['fid', 'surname']):
pairs_for_family = itertools.combinations(family['mid'].values, r=2)
for (p1_mid, p2_mid) in pairs_for_family:
kin_pairs.append((
surname,
f'{fid}/MID{p1_mid}',
f'{fid}/MID{p2_mid}',
))
kin_pairs = pd.DataFrame.from_records(kin_pairs, columns=['surname', 'p1', 'p2'])
kin_pairs
###Output
_____no_output_____
###Markdown
What are the counts of each relationship type?
###Code
rid = pd.read_csv("/Users/zkhan/master-version/fiwdb/FIW_RIDs.csv").set_index("RID").dropna().to_dict()["Label"]
relmats = {folder.stem: pd.read_csv(folder / "mid.csv").set_index("MID") for folder in fiwdb.glob("F*")}
def proper_relationship(p1, p2, relmats, rid):
fid1, mid1, *rest = p1.split("/")
fid2, mid2, *rest = p2.split("/")
mid1 = int(mid1.split("MID")[-1])
mid2 = int(mid2.split("MID")[-1])
if fid1 != fid2:
return "NOT_RELATED"
matrix = relmats[fid1]
p1_gender = matrix.loc[mid1, "Gender"][0]
p2_gender = matrix.loc[mid2, "Gender"][0]
p1_male = p1_gender == "m"
p2_male = p2_gender == "m"
rel_idx = matrix.loc[mid1][str(mid2)]
if rel_idx not in rid:
return 'NOT_RELATED'
try:
rel = rid[rel_idx]
except:
print(matrix)
raise
if rel == "Child":
p1_role = "son" if p1_male else "daughter"
p2_role = "father" if p2_male else "mother"
elif rel == "Parent":
p1_role = "father" if p1_male else "mother"
p2_role = "son" if p2_male else "daughter"
elif rel == "Grandparent":
p1_role = "grandfather" if p1_male else "grandmother"
p2_role = "grandson" if p2_male else "granddaughter"
elif rel == "Grandchild":
p1_role = "grandson" if p1_male else "granddaughter"
p2_role = "grandfather" if p2_male else "grandmother"
elif rel == "Sibling":
if p1_male and p2_male:
p1_role, p2_role = "brother", "brother"
elif (not p1_male) and (not p2_male):
p1_role, p2_role = "sister", "sister"
else:
p1_role, p2_role = "sibling", "sibling"
elif rel == 'Spouse':
p1_role, p2_role = 'spouse', 'spouse'
elif rel == "Great Grandparent":
p1_role = "greatgrandfather" if p1_male else "greatgrandmother"
p2_role = "greatgrandson" if p2_male else "greatgranddaughter"
else:
print(rel)
p1_role, p2_role = 'unknown', ''
return "-".join(sorted([p1_role, p2_role]))
proper_roles = []
for row in kin_pairs.itertuples():
try:
proper_roles.append(proper_relationship(row.p1, row.p2, relmats, rid))
except:
print(row)
break
kin_pairs["ptype"] = pd.Series(proper_roles)
pair_type_counts = kin_pairs["ptype"].value_counts().to_frame().rename({'ptype': 'num_pairs'}, axis=1)
pair_type_counts.index.name = 'ptype'
pair_type_counts.to_csv('./pair_type_counts.csv')
pair_type_counts
kin_pairs.to_csv('./kin_pairs.csv', index=False)
###Output
_____no_output_____
###Markdown
Number of videos per relationship type
###Code
def get_clips_for_person(person, family_members):
"""
Get clips for an individual.
person:
A string like 'F0008/MID1'.
"""
fid = person.split('/')[0]
mid = int(person.split('/')[-1].split('MID')[-1])
row = family_members[family_members.fid.eq(fid) & family_members.mid.eq(mid)].iloc[0]
videos = (row.video, row.video2, row.video3)
videos = list(_ for _ in videos if not pd.isna(_))
return videos
def count_clips_for_pair(p1, p2, family_members) -> int:
p1_clips = set(get_clips_for_person(p1, family_members))
p2_clips = set(get_clips_for_person(p2, family_members))
common_clips = p1_clips & p2_clips
p1_clips = p1_clips - common_clips
p2_clips = p2_clips - common_clips
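    # cross-pairs of distinct clips, plus the clips in which both subjects appear together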
return len(p1_clips) * len(p2_clips) + len(common_clips)
clips_for_kin_pair = []
for pair in kin_pairs.itertuples():
clips_for_kin_pair.append(
count_clips_for_pair(pair.p1, pair.p2, family_members)
)
kin_pairs['clips_for_pair'] = pd.Series(clips_for_kin_pair)
clips_for_pair = kin_pairs.groupby('ptype').agg({'clips_for_pair': 'sum'})
clips_for_pair.to_csv('./clips_for_pair_types.csv')
clips_for_pair
###Output
_____no_output_____
###Markdown
How many subjects share a video?
###Code
url_counts = pd.Series(
list(family_members.video.dropna().values) +
list(family_members.video2.dropna().values) +
list(family_members.video3.dropna().values),
).value_counts()
urls_shared_counts = pd.Series(url_counts.values).value_counts()
url_shared_counts = urls_shared_counts.to_frame().sort_index()
url_shared_counts.index.name = 'num_members'
url_shared_counts = url_shared_counts.rename({0: 'num_videos'}, axis=1)
url_shared_counts
url_shared_counts.to_csv('num_videos_with_k_members.csv')
###Output
_____no_output_____
###Markdown
What are the ethnicities of the subjects?
###Code
family_members_eth = family_members.dropna(how='any', subset=['ethnicity', 'video'])
family_members_eth.ethnicity.value_counts()
eth_counts = family_members_eth\
.ethnicity.value_counts()\
.to_frame().reset_index()\
.rename({"index": "ethnicity", "ethnicity": "count"}, axis=1)
eth_counts
eth_counts.to_csv("./ethnicity_counts.csv", index=False)
###Output
_____no_output_____ |
notebooks/nn/SGD_momentum.ipynb | ###Markdown
Epoch
###Code
model = make_model()
model.to(trainer.device)
###Output
_____no_output_____
###Markdown
Lambda = 1e-10
###Code
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch")
logger = Logger("SGD_momentum2.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9908958333333333, 0.03196341497482111)
Valid: (0.9836666666666667, 0.058639409329742195)
###Markdown
Lambda = 1e-5
###Code
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch", lambda_=1e-5)
logger = Logger("SGD_momentum_lambda=1e-5_2.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9976875, 0.010387330086668953)
Valid: (0.9855, 0.05503412218927406)
###Markdown
Lambda = 1e-2
###Code
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch", lambda_=1e-2)
logger = Logger("SGD_momentum_lambda=1e-2.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9904583333333333, 0.032220454464045666)
Valid: (0.9824166666666667, 0.05974859910442804)
###Markdown
Epoch average
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg")
logger = Logger("SGD_momentum-avg.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9853333333333333, 0.050679075373957556)
Valid: (0.98, 0.06859579290946324)
###Markdown
Epoch average, with span = 100
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha=(2 / (100 + 1)))
logger = Logger("SGD_momentum-avg_span_100.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9909166666666667, 0.030257200048925976)
Valid: (0.9831666666666666, 0.06084559714452674)
###Markdown
Epoch average, with span = 50
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha=(2 / (50 + 1)))
logger = Logger("SGD_momentum-avg_span_50.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9910416666666667, 0.03172329428357382)
Valid: (0.9831666666666666, 0.059001391724838564)
###Markdown
Epoch average, with span = 20
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha=(2 / (20 + 1)))
logger = Logger("SGD_momentum-avg_span_20.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9903541666666666, 0.033172125154950965)
Valid: (0.98325, 0.05807564333003635)
###Markdown
Epoch average, with span = 15
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha=(2 / (15 + 1)))
logger = Logger("SGD_momentum-avg_span_15.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.990375, 0.03216092803166248)
Valid: (0.98375, 0.056568290886934845)
###Markdown
Epoch average, with span = 10
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha=(2 / (10 + 1)))
logger = Logger("SGD_momentum-avg_span_10.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9892083333333334, 0.03670764370728284)
Valid: (0.982, 0.06022620059705029)
###Markdown
Epoch average, with span = 5
###Code
model = make_model()
model.to(trainer.device)
optimizer = AcceleratedSGD(model.parameters(), 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch_avg", avg_alpha = (2 / (5 + 1)))
logger = Logger("SGD_momentum-avg_span_5.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.to(trainer.device)
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9910208333333334, 0.03151546738048395)
Valid: (0.9838333333333333, 0.056970573332936814)
###Markdown
Split + epoch
###Code
model = make_model()
model.to(trainer.device)
None
groups = [{"params": [param]} for param in model.parameters()]
optimizer = AcceleratedSGD(groups, 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch")
logger = Logger("SGD_momentum-split.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.98975, 0.03377577416237909)
Valid: (0.9814166666666667, 0.06653467167946898)
###Markdown
Linear only
###Code
model = make_model()
model.to(trainer.device)
None
conv_group = {
"params": [param for child in list(model.children())[:10] for param in child.parameters()],
"method": None
}
fc_group = {
"params": [param for child in list(model.children())[10:] for param in child.parameters()]
}
groups = [conv_group, fc_group]
optimizer = AcceleratedSGD(groups, 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch")
logger = Logger("SGD_momentum-linear_only.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9907291666666667, 0.03270646293833852)
Valid: (0.9835, 0.05615483507156993)
###Markdown
Linear + conv separately
###Code
model = make_model()
model.to(trainer.device)
None
conv_group = {
"params": [param for child in list(model.children())[:10] for param in child.parameters()],
}
fc_group = {
"params": [param for child in list(model.children())[10:] for param in child.parameters()]
}
groups = [conv_group, fc_group]
optimizer = AcceleratedSGD(groups, 1e-3, k=10, momentum=0.5, weight_decay=1e-5, mode="epoch")
logger = Logger("SGD_momentum-linear_conv.txt")
epochs = 30
for epoch in range(epochs):
train_loss = trainer.train_epoch(model, optimizer, dl["train"])
optimizer.finish_epoch()
val_acc, val_loss = trainer.validation(model, dl["valid"])
logger.log("Epoch", epoch+1, "|",
f"Training loss: {train_loss:.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
optimizer.accelerate()
optimizer.store_parameters()
model.cuda()
None
train_score = trainer.validation(model, dl["train"])
valid_score = trainer.validation(model, dl["valid"])
logger.log("Train:", train_score)
logger.log("Valid:", valid_score)
###Output
Train: (0.9900208333333333, 0.03382842983398587)
Valid: (0.98275, 0.06026328789287557)
|
1-data-modeling/demos/Lesson 1/l1-demo-0-creating-a-table-with-postgres.ipynb | ###Markdown
Lesson 1 Demo 0: PostgreSQL and AutoCommits Walk through the basics of PostgreSQL autocommits
###Code
## import the PostgreSQL adapter for Python
import psycopg2
###Output
_____no_output_____
###Markdown
Create a connection to the database1. Connect to the local instance of PostgreSQL (*127.0.0.1*)2. Use the database/schema from the instance. 3. The connection reaches out to the database (*studentdb*) and uses the correct privileges to connect to the database (*user and password = student*).
###Code
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
###Output
_____no_output_____
###Markdown
Use the connection to get a cursor that will be used to execute queries.
###Code
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Create a database to work in
###Code
cur.execute("select * from test")
###Output
_____no_output_____
###Markdown
An error occurs, but it was to be expected because the table has not been created yet. To fix the error, create the table.
###Code
cur.execute("CREATE TABLE test (col1 int, col2 int, col3 int);")
###Output
_____no_output_____
###Markdown
The error indicates we cannot execute this query. Since we had an error in the transaction block and have not committed the transaction, we are blocked until we restart the connection.
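As an aside (not part of the original demo): instead of opening a fresh connection, psycopg2 can also discard the failed transaction with `rollback()`. A minimal sketch, reusing the same `conn` and `cur` objects from above:

```python
# Discard the aborted transaction so the connection accepts new statements again.
conn.rollback()

# The statement that previously failed inside the aborted block can now be retried.
cur.execute("CREATE TABLE IF NOT EXISTS test (col1 int, col2 int, col3 int);")
conn.commit()
```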
###Code
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
In our exercises, instead of worrying about committing each transaction or getting a strange error when we hit something unexpected, let's set autocommit to true. **This means that after each call during the session, that one action is committed and the transaction is not held open for any other actions. One action = one transaction.** In this demo we will use automatic commit so each action is committed without having to call `conn.commit()` after each command. **The ability to rollback and commit transactions is a feature of Relational Databases.**
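For contrast, here is a minimal, hypothetical sketch of the explicit pattern you would use without autocommit (it assumes the `test` table already exists):

```python
# With autocommit left at its default (off), each statement joins an open transaction
# that must be finished explicitly.
cur.execute("INSERT INTO test (col1, col2, col3) VALUES (1, 2, 3);")
conn.commit()      # make the insert permanent
# ...or call conn.rollback() to throw the pending changes away instead.
```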
###Code
conn.set_session(autocommit=True)
cur.execute("select * from test")
cur.execute("CREATE TABLE test (col1 int, col2 int, col3 int);")
###Output
_____no_output_____
###Markdown
Once autocommit is set to true, we execute this code successfully. There were no issues with transaction blocks and we did not need to restart our connection.
###Code
cur.execute("select * from test")
cur.execute("select count(*) from test")
print(cur.fetchall())
###Output
_____no_output_____ |
Deep Learning/Tutorials/.ipynb_checkpoints/Lenet_in_keras-checkpoint.ipynb | ###Markdown
Intermediate Net in Keras Build an intermediate neural network to classify MNIST digits Set seed for reproducibility
###Code
import numpy as np
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load dataset
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype("float32")
X_test = X_test.reshape(10000, 784).astype("float32")
X_train /= 255
X_test /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network
###Code
model = Sequential()
# model.add(Dense((64), activation='relu', input_shape=(784, ), kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense((64), activation='relu', input_shape=(784, )))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense((64), activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense((10), activation='softmax'))
model.summary()
# model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(X_train, y_train, batch_size=128, epochs=5, verbose=1, validation_data=(X_test, y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 11s 176us/step - loss: 0.8537 - acc: 0.7309 - val_loss: 0.2595 - val_acc: 0.9235
Epoch 2/5
60000/60000 [==============================] - 8s 131us/step - loss: 0.4377 - acc: 0.8721 - val_loss: 0.2149 - val_acc: 0.9329
Epoch 3/5
60000/60000 [==============================] - 9s 151us/step - loss: 0.3690 - acc: 0.8930 - val_loss: 0.1918 - val_acc: 0.9404
Epoch 4/5
60000/60000 [==============================] - 9s 146us/step - loss: 0.3321 - acc: 0.9029 - val_loss: 0.1740 - val_acc: 0.9455
Epoch 5/5
60000/60000 [==============================] - 9s 147us/step - loss: 0.3067 - acc: 0.9107 - val_loss: 0.1596 - val_acc: 0.9509
|
notebooks/GRU_220.ipynb | ###Markdown
GRU 220* Operate on 16000 GenCode 34 seqs.* 5-way cross validation. Save best model per CV.* Report mean accuracy from final re-validation with best 5.* Use Adam with a learn rate decay schedule.
###Code
NC_FILENAME='ncRNA.gc34.processed.fasta'
PC_FILENAME='pcRNA.gc34.processed.fasta'
DATAPATH=""
try:
from google.colab import drive
IN_COLAB = True
PATH='/content/drive/'
drive.mount(PATH)
DATAPATH=PATH+'My Drive/data/' # must end in "/"
NC_FILENAME = DATAPATH+NC_FILENAME
PC_FILENAME = DATAPATH+PC_FILENAME
except:
IN_COLAB = False
DATAPATH=""
EPOCHS=200
SPLITS=5
K=3
VOCABULARY_SIZE=4**K+1 # e.g. K=3 => 64 DNA K-mers + 'NNN'
EMBED_DIMEN=16
FILENAME='GRU220'
NEURONS=64
DROP=0.5
ACT="tanh"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
from tensorflow import keras
from keras.wrappers.scikit_learn import KerasRegressor
from keras.models import Sequential
from keras.layers import Bidirectional
from keras.layers import GRU
from keras.layers import Dense
from keras.layers import LayerNormalization
import time
dt='float32'
tf.keras.backend.set_floatx(dt)
###Output
_____no_output_____
###Markdown
Build model
###Code
def compile_model(model):
adam_default_learn_rate = 0.001
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate = adam_default_learn_rate*10,
#decay_steps=100000, decay_rate=0.96, staircase=True)
decay_steps=10000, decay_rate=0.99, staircase=True)
# learn rate = initial_learning_rate * decay_rate ^ (step / decay_steps)
alrd = tf.keras.optimizers.Adam(learning_rate=schedule)
bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
print("COMPILE...")
#model.compile(loss=bc, optimizer=alrd, metrics=["accuracy"])
model.compile(loss=bc, optimizer="adam", metrics=["accuracy"])
print("...COMPILED")
return model
def build_model():
embed_layer = keras.layers.Embedding(
#VOCABULARY_SIZE, EMBED_DIMEN, input_length=1000, input_length=1000, mask_zero=True)
#input_dim=[None,VOCABULARY_SIZE], output_dim=EMBED_DIMEN, mask_zero=True)
input_dim=VOCABULARY_SIZE, output_dim=EMBED_DIMEN, mask_zero=True)
rnn1_layer = keras.layers.Bidirectional(
keras.layers.GRU(NEURONS, return_sequences=True,
input_shape=[1000,EMBED_DIMEN], activation=ACT, dropout=DROP) )#bi
rnn2_layer = keras.layers.Bidirectional(
keras.layers.GRU(NEURONS, return_sequences=False,
activation=ACT, dropout=DROP) )#bi
dense1_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
drop1_layer = keras.layers.Dropout(DROP)
dense2_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
drop2_layer = keras.layers.Dropout(DROP)
output_layer = keras.layers.Dense(1, activation="sigmoid", dtype=dt)
mlp = keras.models.Sequential()
mlp.add(embed_layer)
mlp.add(rnn1_layer)
mlp.add(rnn2_layer)
mlp.add(dense1_layer)
mlp.add(drop1_layer)
mlp.add(dense2_layer)
mlp.add(drop2_layer)
mlp.add(output_layer)
mlpc = compile_model(mlp)
return mlpc
###Output
_____no_output_____
###Markdown
Load and partition sequences
###Code
# Assume file was preprocessed to contain one line per seq.
# Prefer Pandas dataframe but df does not support append.
# For conversion to tensor, must avoid python lists.
def load_fasta(filename,label):
DEFLINE='>'
labels=[]
seqs=[]
lens=[]
nums=[]
num=0
with open (filename,'r') as infile:
for line in infile:
if line[0]!=DEFLINE:
seq=line.rstrip()
num += 1 # first seqnum is 1
seqlen=len(seq)
nums.append(num)
labels.append(label)
seqs.append(seq)
lens.append(seqlen)
df1=pd.DataFrame(nums,columns=['seqnum'])
df2=pd.DataFrame(labels,columns=['class'])
df3=pd.DataFrame(seqs,columns=['sequence'])
df4=pd.DataFrame(lens,columns=['seqlen'])
df=pd.concat((df1,df2,df3,df4),axis=1)
return df
def separate_X_and_y(data):
y= data[['class']].copy()
X= data.drop(columns=['class','seqnum','seqlen'])
return (X,y)
###Output
_____no_output_____
###Markdown
Make K-mers
###Code
def make_kmer_table(K):
npad='N'*K
shorter_kmers=['']
for i in range(K):
longer_kmers=[]
for mer in shorter_kmers:
longer_kmers.append(mer+'A')
longer_kmers.append(mer+'C')
longer_kmers.append(mer+'G')
longer_kmers.append(mer+'T')
shorter_kmers = longer_kmers
all_kmers = shorter_kmers
kmer_dict = {}
kmer_dict[npad]=0
value=1
for mer in all_kmers:
kmer_dict[mer]=value
value += 1
return kmer_dict
KMER_TABLE=make_kmer_table(K)
def strings_to_vectors(data,uniform_len):
all_seqs=[]
for seq in data['sequence']:
i=0
seqlen=len(seq)
kmers=[]
while i < seqlen-K+1 -1: # stop at minus one for spaced seed
#kmer=seq[i:i+2]+seq[i+3:i+5] # SPACED SEED 2/1/2 for K=4
kmer=seq[i:i+K]
i += 1
value=KMER_TABLE[kmer]
kmers.append(value)
pad_val=0
while i < uniform_len:
kmers.append(pad_val)
i += 1
all_seqs.append(kmers)
pd2d=pd.DataFrame(all_seqs)
return pd2d # return 2D dataframe, uniform dimensions
def make_kmers(MAXLEN,train_set):
(X_train_all,y_train_all)=separate_X_and_y(train_set)
X_train_kmers=strings_to_vectors(X_train_all,MAXLEN)
# From pandas dataframe to numpy to list to numpy
num_seqs=len(X_train_kmers)
tmp_seqs=[]
for i in range(num_seqs):
kmer_sequence=X_train_kmers.iloc[i]
tmp_seqs.append(kmer_sequence)
X_train_kmers=np.array(tmp_seqs)
tmp_seqs=None
labels=y_train_all.to_numpy()
return (X_train_kmers,labels)
def make_frequencies(Xin):
Xout=[]
VOCABULARY_SIZE= 4**K + 1 # plus one for 'NNN'
for seq in Xin:
freqs =[0] * VOCABULARY_SIZE
total = 0
for kmerval in seq:
freqs[kmerval] += 1
total += 1
for c in range(VOCABULARY_SIZE):
freqs[c] = freqs[c]/total
Xout.append(freqs)
Xnum = np.asarray(Xout)
return (Xnum)
def make_slice(data_set,min_len,max_len):
slice = data_set.query('seqlen <= '+str(max_len)+' & seqlen>= '+str(min_len))
return slice
###Output
_____no_output_____
###Markdown
Cross validation
###Code
def do_cross_validation(X,y,given_model):
cv_scores = []
fold=0
splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.1, random_state=37863)
for train_index,valid_index in splitter.split(X):
fold += 1
X_train=X[train_index] # use iloc[] for dataframe
y_train=y[train_index]
X_valid=X[valid_index]
y_valid=y[valid_index]
# Avoid continually improving the same model.
model = compile_model(keras.models.clone_model(given_model))
bestname=DATAPATH+FILENAME+".cv."+str(fold)+".best"
mycallbacks = [keras.callbacks.ModelCheckpoint(
filepath=bestname, save_best_only=True,
monitor='val_accuracy', mode='max')]
print("FIT")
start_time=time.time()
history=model.fit(X_train, y_train, # batch_size=10, default=32 works nicely
epochs=EPOCHS, verbose=1, # verbose=1 for ascii art, verbose=0 for none
callbacks=mycallbacks,
validation_data=(X_valid,y_valid) )
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
best_model=keras.models.load_model(bestname)
scores = best_model.evaluate(X_valid, y_valid, verbose=0)
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
cv_scores.append(scores[1] * 100)
print()
print("%d-way Cross Validation mean %.2f%% (+/- %.2f%%)" % (fold, np.mean(cv_scores), np.std(cv_scores)))
###Output
_____no_output_____
###Markdown
Train on RNA lengths 200-1Kb
###Code
MINLEN=200
MAXLEN=1000
print("Load data from files.")
nc_seq=load_fasta(NC_FILENAME,0)
pc_seq=load_fasta(PC_FILENAME,1)
train_set=pd.concat((nc_seq,pc_seq),axis=0)
nc_seq=None
pc_seq=None
print("Ready: train_set")
#train_set
subset=make_slice(train_set,MINLEN,MAXLEN)# One array to two: X and y
print ("Data reshape")
(X_train,y_train)=make_kmers(MAXLEN,subset)
#print ("Data prep")
#X_train=make_frequencies(X_train)
print ("Compile the model")
model=build_model()
print ("Summarize the model")
print(model.summary()) # Print this only once
model.save(DATAPATH+FILENAME+'.model')
print ("Cross valiation")
do_cross_validation(X_train,y_train,model)
print ("Done")
###Output
_____no_output_____ |
com/javahabit/course-3/week-1/SentenceTokenizer.ipynb | ###Markdown
Sequence 1 Create a sentence tokenizer.
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
sentences = ['I love my dog',
'I, love cats',
'You love your dog!']
print (sentences)
###Output
['I love my dog', 'I, love cats', 'You love your dog!']
###Markdown
Define the Tokenizer
###Code
tokenizer = Tokenizer(num_words=100)
tokenizer.fit_on_texts(sentences)
###Output
_____no_output_____
###Markdown
Print word vector:- This shows what index is assigned to each word```tokenizer.word_index or tokenizer.index_word```
###Code
print("word_index: " ,tokenizer.word_index)
print("index_word", tokenizer.index_word)
###Output
word_index: {'love': 1, 'i': 2, 'dog': 3, 'my': 4, 'cats': 5, 'you': 6, 'your': 7}
index_word {1: 'love', 2: 'i', 3: 'dog', 4: 'my', 5: 'cats', 6: 'you', 7: 'your'}
###Markdown
Print word counts- This will print how many times a word appears across all the sentences.``` tokenizer.word_counts (an ordered dictionary, in the order words first appear) tokenizer.word_docs (how many sentences contain each word; unordered)```
###Code
print("word_docs: \n", tokenizer.word_docs)
print("word_counts: \n",tokenizer.word_counts)
###Output
word_counts:
OrderedDict([('i', 2), ('love', 3), ('my', 1), ('dog', 2), ('cats', 1), ('you', 1), ('your', 1)])
###Markdown
Now we create a sentence tokenizer- This will create a group of word tokens in arrays.
###Code
sentence_tokens = tokenizer.texts_to_sequences(sentences)
print (tokenizer.index_word)
print(sentence_tokens)
###Output
{1: 'love', 2: 'i', 3: 'dog', 4: 'my', 5: 'cats', 6: 'you', 7: 'your'}
[[2, 1, 4, 3], [2, 1, 5], [6, 1, 7, 3]]
###Markdown
Now test your data on a set of sentences that we have not seen before
###Code
test_data = ['I really love my dog',
'my dog loves my manatee']
test_result = tokenizer.texts_to_sequences(test_data)
print(test_result)
###Output
[[2, 1, 4, 3], [4, 3, 4]]
###Markdown
Missing values or unseen values- We can see that the word 'really' is missing.- The last sentence is captured as 'my dog my'
###Code
print(tokenizer.sequences_to_texts(test_result))
###Output
['i love my dog', 'my dog my']
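One common way to avoid dropping unseen words entirely (a sketch, not part of the original exercise) is to reserve an out-of-vocabulary token when building the tokenizer:

```python
# Reserve an index for every word not seen during fit_on_texts.
oov_tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
oov_tokenizer.fit_on_texts(sentences)

# Unseen words such as 'really', 'loves' and 'manatee' now map to <OOV> instead of vanishing.
print(oov_tokenizer.texts_to_sequences(test_data))
print(oov_tokenizer.sequences_to_texts(oov_tokenizer.texts_to_sequences(test_data)))
```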
|
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb | ###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
# Customize our Matplot lib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
###Code
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# train_test_split() method split arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler.This will set the mean to 0 and standard deviation to 1.Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clip (limit) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers, and attach them to your model before export. Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
###Code
# pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn’s jointplot displays a relationship between 2 variables (bivariate) as well as the marginal (univariate) distribution of each on separate axes
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, they would likely have no fraudulent transactions to learn from.Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
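As a quick worked illustration of the formulas above, here is a small sketch that plugs in the confusion-matrix counts the baseline model reports further down (TP=55, FP=12, FN=50, TN=56845):

```python
# Worked example: compute the metrics by hand from confusion-matrix counts.
tp, fp, fn, tn = 55, 12, 50, 56845

precision = tp / (tp + fp)                   # predicted positives that are truly positive
recall = tp / (tp + fn)                      # actual positives that were caught
accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall fraction correct

print(f"precision={precision:.3f}, recall={recall:.3f}, accuracy={accuracy:.4f}")
# Note how ~99.9% accuracy coexists with a recall of only ~52% on imbalanced data.
```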
###Code
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
# use the model to do prediction with model.predict()
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
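To see where the 0.693 figure comes from (a short derivation, not in the original text): with a zero bias an untrained sigmoid outputs roughly 0.5 for every sample, so the binary cross-entropy per example is$$ -y\log(0.5) - (1-y)\log(1-0.5) = -\log(0.5) = \log_e 2 \approx 0.693 $$regardless of the label $y$.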
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -\log_e(1/p_0 - 1) $$$$ b_0 = \log_e(pos/neg)$$
###Code
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) = 0.01317$$
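A quick sanity check of that number, computed directly from the class counts (a sketch that reuses the `pos` and `total` variables defined earlier in this notebook):

```python
# Expected binary cross-entropy when the model predicts the base rate p0 for every sample.
p0 = pos / total
expected_loss = -p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)
print("Expected initial loss: {:.5f}".format(expected_loss))
```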
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization.This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, quickly confirm that the careful bias initialization actually helped.Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
# subplots() which acts as a utility wrapper and helps in creating common layouts of subplots
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
###Code
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fradulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions.Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application. Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# The legend() function places a legend on the axes
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
###Code
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
# np.arange() returns evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# np.random.choice() draws random samples (with replacement) from the positive example indices.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# numpy.concatenate() concatenates a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# numpy.random.shuffle() modifies a sequence in-place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
# tf.data.Dataset.from_tensor_slices() builds a dataset whose elements are slices of the given (features, labels) arrays.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch. The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
###Code
# `np.ceil()` returns the ceiling of the input, element-wise.
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare. Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, it is shown in many different batches, each time with a small weight. This smoother gradient signal makes it easier to train the model (a rough per-batch comparison follows the history plot below). Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
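###Markdown
As a rough, back-of-the-envelope illustration of the batch-composition point above (a sketch, not part of the original lab), compare the expected number of positive examples per batch under the raw class ratio with the roughly 50/50 resampled stream.
###Code
# Illustration only: expected positives per batch of BATCH_SIZE examples.
print('Raw class ratio:  ~{:.1f} positives per batch'.format(BATCH_SIZE * pos / total))
print('Resampled stream: ~{:.0f} positives per batch'.format(BATCH_SIZE * 0.5))
###Output
_____no_output_____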
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
# Customize our Matplotlib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
###Code
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# The train_test_split() method splits arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and the standard deviation to 1. Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clips (limits) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export (a minimal sketch of this idea follows the distribution plots below). Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes the positive examples contain a much higher rate of extreme values.
###Code
# A pandas DataFrame is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn's jointplot displays the relationship between 2 variables (bivariate) as well as the marginal distribution of each variable
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in the pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
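###Markdown
As a minimal sketch of the deployment caution above (illustration only, not part of the original lab), the fitted `StandardScaler` statistics and the clipping step can be baked into a Keras `Lambda` layer that could later be attached in front of the classifier before export. The layer name below is a made-up placeholder.
###Code
# Sketch only: wrap the fitted scaler statistics and the +/-5 clipping in a Keras layer.
mean = scaler.mean_.astype('float32')
scale = scaler.scale_.astype('float32')
preprocess = tf.keras.layers.Lambda(lambda x: tf.clip_by_value((x - mean) / scale, -5.0, 5.0), name='standardize_and_clip')
# Sanity check: the layer output should match the scaled, clipped validation features computed above.
print(preprocess(tf.constant(np.array(val_df), dtype=tf.float32))[:2])
###Output
_____no_output_____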
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
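###Markdown
Before walking through the metric definitions in the next cell, here is a tiny worked example with made-up counts (illustration only, not from this dataset) showing how accuracy, precision and recall fall out of the four confusion-matrix entries.
###Code
# Toy counts (hypothetical): 90 true positives, 10 false positives, 880 true negatives, 20 false negatives.
toy_tp, toy_fp, toy_tn, toy_fn = 90, 10, 880, 20
print('accuracy : {:.3f}'.format((toy_tp + toy_tn) / (toy_tp + toy_fp + toy_tn + toy_fn)))
print('precision: {:.3f}'.format(toy_tp / (toy_tp + toy_fp)))
print('recall   : {:.3f}'.format(toy_tp / (toy_tp + toy_fn)))
###Output
_____no_output_____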
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, the batches would likely have no fraudulent transactions to learn from. Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
###Code
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
# Use the model to make predictions with model.predict()
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -log_e(1/p_0 - 1) $$$$ b_0 = log_e(pos/neg)$$
###Code
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
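###Markdown
As a quick numeric cross-check of the expected-loss formula above (a sketch, not part of the original lab), compute it directly from the class counts.
###Code
# Evaluate -p0*log(p0) - (1-p0)*log(1-p0) for p0 = pos / (pos + neg).
p_0 = pos / (pos + neg)
expected_initial_loss = -p_0 * np.log(p_0) - (1 - p_0) * np.log(1 - p_0)
print('Expected initial loss: {:0.5f}'.format(expected_initial_loss))
###Output
_____no_output_____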
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization. This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, quickly confirm that the careful bias initialization actually helped. Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). Additionally, you can produce these plots for any of the metrics you created above; loss, AUC, precision, and recall are plotted below as examples.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
        # plt.subplot() selects the n-th panel of a 2x2 grid, one panel per metric
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation curves generally look better than the training curves. This is mainly because the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
###Code
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
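As a quick illustration of threshold tuning before looking at the curve, note that `plot_cm` accepts a threshold argument, so you can re-draw the confusion matrix at a lower cutoff (a small sketch using the predictions computed above; the 0.1 value is an arbitrary choice):

```python
# Lowering the decision threshold from 0.5 to 0.1 flags more transactions,
# trading extra false positives for fewer false negatives.
plot_cm(test_labels, test_predictions_baseline, p=0.1)
```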
###Code
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
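For reference, scikit-learn (already imported above) provides the same "balanced" heuristic; a small cross-check sketch (the numbers differ slightly from the next cell because the lab computes its weights from counts over the full dataset rather than the training split):

```python
# sklearn's 'balanced' weights are n_samples / (n_classes * bincount(y)),
# i.e. the same total/2 scaling computed manually in the next cell.
from sklearn.utils.class_weight import compute_class_weight

balanced = compute_class_weight(class_weight='balanced',
                                classes=np.array([0, 1]),
                                y=train_labels)
print(balanced)  # roughly [0.5, 300] for this training split
```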
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions. Note: Using `class_weight` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
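To get a sense of how strongly the loss is rescaled, look at the ratio of the two weights computed above (a quick sketch):

```python
# Each positive example now contributes as much to the loss as roughly
# neg/pos ~ 578 negative examples, which is why the weighted model's loss
# values are not comparable to the baseline's.
print('Positive/negative weight ratio: {:.0f}'.format(weight_for_1 / weight_for_0))
```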
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application. Plot the ROC
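One concrete way to act on that trade-off is to choose the operating threshold from the precision-recall curve instead of using the default 0.5. A minimal sketch with scikit-learn, using the weighted model's test predictions (the 90% recall target is an arbitrary choice for illustration):

```python
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(
    test_labels, test_predictions_weighted)

target_recall = 0.90
# recall is non-increasing as the threshold grows, so take the largest
# threshold that still meets the target.
ok = recall[:-1] >= target_recall
chosen = thresholds[ok][-1]
print('Threshold {:.3f}: precision {:.3f} at recall {:.3f}'.format(
    chosen, precision[:-1][ok][-1], recall[:-1][ok][-1]))
plot_cm(test_labels, test_predictions_weighted, p=chosen)
```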
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# plt.legend() places a legend on the axes.
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
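If you prefer a library routine for this, the `imbalanced-learn` package offers `RandomOverSampler`; a sketch under the assumption that the package is installed (it is not used anywhere in this lab):

```python
# Not part of this lab: the same oversampling via imbalanced-learn,
# assuming `pip install imbalanced-learn` has been run.
from imblearn.over_sampling import RandomOverSampler

ros = RandomOverSampler(sampling_strategy=1.0)  # resample positives to a 1:1 ratio
lib_features, lib_labels = ros.fit_resample(train_features, train_labels)
print(np.bincount(lib_labels))
```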
###Code
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
# np.arange() returns evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# np.random.choice() draws len(neg_features) random indices (with replacement) from the positive examples.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# np.concatenate() joins a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# np.random.shuffle() modifies a sequence in-place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
    # tf.data.Dataset.from_tensor_slices() builds a dataset whose elements are the (feature, label) rows of the input arrays.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch. The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once; with `neg` = 284,315 negative examples and `BATCH_SIZE` = 2048, that works out to `ceil(2 * 284315 / 2048) = 278` batches, which matches the 278 steps per epoch reported in the training log below:
###Code
# np.ceil() rounds each element up to the nearest integer.
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare.Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
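Before plotting, here is a rough illustration of that smoother gradient signal (a small sketch using the counts computed earlier): compare how many positive examples an average batch contains with and without resampling.

```python
# ~3.5 positives per 2048-example batch on the original data,
# versus ~1024 on the 50/50 resampled stream.
print('Original data: {:.1f} positives per batch'.format(BATCH_SIZE * pos / total))
print('Resampled data: {:.0f} positives per batch'.format(BATCH_SIZE * 0.5))
```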
###Code
plot_metrics(resampled_history )
###Output
_____no_output_____
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data like: Class weighting and Oversampling Introduction This lab demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
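For instance, the figure size set with `mpl.rcParams` in the next cell could equally be set through that routine (a small illustrative sketch):

```python
# Equivalent to mpl.rcParams['figure.figsize'] = (12, 10)
plt.rc('figure', figsize=(12, 10))
```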
###Code
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
###Code
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and standard deviation to 1. Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export. Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
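Returning to the caution above about preserving the preprocessing: a minimal sketch (illustrative only, not used elsewhere in this lab) of wrapping the fitted scaler statistics and the ±5 clipping in a Keras layer, so an exported model could accept raw, unscaled features:

```python
# Illustrative only: bake the fitted StandardScaler and the +/-5 clipping
# into the graph before export.
mean = tf.constant(scaler.mean_, dtype=tf.float32)
scale = tf.constant(scaler.scale_, dtype=tf.float32)
scaling_layer = tf.keras.layers.Lambda(
    lambda x: tf.clip_by_value((x - mean) / scale, -5.0, 5.0))
```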
###Code
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample. Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, they would likely have no fraudulent transactions to learn from. Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
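As a quick sanity check of the accuracy caveat above (a one-line sketch using the counts computed earlier), the all-negative classifier already scores:

```python
# Always predicting "not fraud" is correct for every negative example.
print('All-negative accuracy: {:.4f}'.format(neg / total))  # ~0.9983
```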
###Code
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (see: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`.
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -\log_e(1/p_0 - 1) $$$$ b_0 = \log_e(pos/neg) $$
###Code
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
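###Markdown
As a quick check of the derivation above (a sketch that reuses the `initial_bias`, `pos`, and `total` values already computed), passing the bias through a sigmoid should recover the positive-class rate `pos/total`:
###Code
# sigmoid(b0) should reproduce the raw positive rate p0 = pos / total.
p0_from_bias = 1 / (1 + np.exp(-initial_bias[0]))
print('sigmoid(b0): {:.6f}'.format(p0_from_bias))
print('pos/total: {:.6f}'.format(pos / total))
###Output
_____no_output_____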
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
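###Markdown
The expected value quoted above can be reproduced numerically. This is a minimal sketch, assuming the `pos` and `total` counts from earlier; it evaluates the binary cross-entropy of always predicting the constant probability $p_0 = pos/total$ on data whose true positive rate is $p_0$:
###Code
# Cross-entropy of the constant prediction p0 on data with positive rate p0.
p0 = pos / total
expected_initial_loss = -p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)
print('Expected initial loss: {:.5f}'.format(expected_initial_loss))
###Output
_____no_output_____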
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization.This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, confirm quickly that the careful bias initialization actually helped.Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation curves generally perform better than the training curves; this is mainly because the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
###Code
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
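###Markdown
To make the threshold-tuning point concrete, here is a small sketch (reusing the existing `test_labels` and `test_predictions_baseline`) that scans the thresholds returned by `sklearn.metrics.roc_curve` and reports the first threshold that reaches a chosen recall. The 90% target is an arbitrary example, not a recommendation.
###Code
# Find the highest decision threshold whose recall (true positive rate) meets a target.
fp_rate, tp_rate, thresholds = sklearn.metrics.roc_curve(test_labels, test_predictions_baseline)
target_recall = 0.90 # arbitrary example target
idx = np.argmax(tp_rate >= target_recall) # index of the first threshold meeting the target
print('threshold: {:.4f}'.format(thresholds[idx]))
print('recall at threshold: {:.4f}'.format(tp_rate[idx]))
print('false positive rate at threshold: {:.4f}'.format(fp_rate[idx]))
###Output
_____no_output_____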
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing a weight for each class to Keras through the `class_weight` parameter. These weights cause the model to "pay more attention" to examples from the under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
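###Markdown
For reference, scikit-learn can compute equivalent "balanced" weights. This is a sketch, assuming the `train_labels` array from earlier; `compute_class_weight` implements `n_samples / (n_classes * bincount(y))`, so the values should be very close to the ones printed above (they differ slightly because `pos` and `neg` were counted on the full dataset, while this uses only the training split).
###Code
from sklearn.utils.class_weight import compute_class_weight

# 'balanced' weights: n_samples / (n_classes * np.bincount(y)).
balanced_weights = compute_class_weight(class_weight='balanced',
                                        classes=np.unique(train_labels),
                                        y=train_labels)
print('Weight for class 0: {:.2f}'.format(balanced_weights[0]))
print('Weight for class 1: {:.2f}'.format(balanced_weights[1]))
###Output
_____no_output_____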
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions.Note: Using `class_weight` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application. Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
###Code
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
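###Markdown
As a quick sanity check on the arrays just built (a sketch using the `resampled_labels` created above), the two classes should now appear in equal numbers, since the positives were resampled to match the number of negatives:
###Code
# After oversampling, positives and negatives should be balanced.
resampled_neg, resampled_pos = np.bincount(resampled_labels.astype(int))
print('Resampled negatives: {}'.format(resampled_neg))
print('Resampled positives: {}'.format(resampled_pos))
print('Positive fraction: {:.3f}'.format(resampled_pos / len(resampled_labels)))
###Output
_____no_output_____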
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch.The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
###Code
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare.Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history )
###Output
_____no_output_____
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics; however, the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern because of the lack of training data.
###Code
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler.This will set the mean to 0 and standard deviation to 1.Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
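###Markdown
As a minimal, hedged sketch (an addition, not part of this lab's pipeline), the fitted `StandardScaler` statistics and the clipping above can also be wrapped in a Keras `Lambda` layer, so that the same preprocessing can later travel with an exported model:
###Code
# Sketch only: bundle the scaling and clipping into a Keras layer for deployment.
# `scaler` is the StandardScaler fitted on train_features above.
mean = scaler.mean_.astype('float32')
scale = scaler.scale_.astype('float32')
preprocess_layer = keras.layers.Lambda(
    lambda x: tf.clip_by_value((x - mean) / scale, -5.0, 5.0))
# A trained classifier (e.g. `model`, defined later) could then be exported as:
# export_model = keras.Sequential([preprocess_layer, model])
###Output
_____no_output_____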
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export. Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
###Code
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, batches would likely have no fraudulent transactions to learn from.Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
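As a quick, hypothetical sanity check of the definitions above (the counts below are made up, not taken from this dataset), precision and recall can be computed directly from confusion-matrix counts:
###Code
# Hypothetical confusion-matrix counts, purely to illustrate the formulas above.
tp_, fp_, tn_, fn_ = 80, 20, 900, 10
print('accuracy : {:.3f}'.format((tp_ + tn_) / (tp_ + fp_ + tn_ + fn_)))
print('precision: {:.3f}'.format(tp_ / (tp_ + fp_)))
print('recall   : {:.3f}'.format(tp_ / (tp_ + fn_)))
###Output
_____no_output_____
###Markdown
The next cell sets the batch size and the early stopping callback used to train the baseline model: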
###Code
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -\log_e(1/p_0 - 1) $$$$ b_0 = \log_e(pos/neg) $$
###Code
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
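###Markdown
As a quick check of the derivation above (a hedged addition), passing this bias back through the sigmoid should recover the positive rate `pos/total`:
###Code
# sigmoid(b_0) should equal p_0 = pos / (pos + neg)
print('sigmoid(b_0): {:.6f}'.format(1 / (1 + np.exp(-initial_bias[0]))))
print('pos / total : {:.6f}'.format(pos / total))
###Output
_____no_output_____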
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) = 0.01317$$
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
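###Markdown
For comparison, the analytic value from the formula above can be computed directly (a hedged addition; the reported loss will not match it exactly, because the randomly initialized hidden layer still perturbs the predictions):
###Code
# -p0*log(p0) - (1 - p0)*log(1 - p0), with p0 = pos/total
p0 = pos / total
print('Analytic initial loss: {:.5f}'.format(-p0 * np.log(p0) - (1 - p0) * np.log(1 - p0)))
###Output
_____no_output_____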
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization.This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, confirm quickly that the careful bias initialization actually helped.Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note: The validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
###Code
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
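###Markdown
As an optional cross-check (an addition, not part of the original lab), the test-set AUC reported by the Keras metric can be recomputed with scikit-learn from the saved predictions:
###Code
# Recompute the baseline test AUC with scikit-learn; it should be close to the Keras `auc` metric above.
print('sklearn test ROC AUC: {:.4f}'.format(
    sklearn.metrics.roc_auc_score(test_labels, test_predictions_baseline.ravel())))
###Output
_____no_output_____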
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
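###Markdown
A small sanity check of the scaling comment above: with these weights, the total weight assigned across all examples equals the raw number of examples.
###Code
# weight_for_0 * neg + weight_for_1 * pos should equal the total example count.
print('Total weight: {:.1f} (total examples: {})'.format(
    weight_for_0 * neg + weight_for_1 * pos, total))
###Output
_____no_output_____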
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions.Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application. Plot the ROC
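To make the trade-off concrete, here is a short hedged sketch (an addition, not part of the original lab) that plots precision and recall of the weighted model on the test set as a function of the decision threshold:
###Code
# Sketch: precision and recall of the weighted model at different decision thresholds.
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(
    test_labels, test_predictions_weighted.ravel())
plt.figure(figsize=(8, 5))
plt.plot(thresholds, precisions[:-1], label='Precision')
plt.plot(thresholds, recalls[:-1], label='Recall')
plt.xlabel('Decision threshold')
plt.ylabel('Score')
plt.legend()
###Output
_____no_output_____
###Markdown
Now compare the ROC curves of the baseline and class-weighted models: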
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
###Code
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch.The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
###Code
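# Each resampled batch is roughly half negative examples, so covering every negative once takes about 2*neg/BATCH_SIZE batches.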
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare.Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history )
###Output
_____no_output_____
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
###Code
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler.This will set the mean to 0 and standard deviation to 1.Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
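###Markdown
As a quick sanity check (illustrative, not part of the original tutorial), the scaled training features should now have per-column means near 0 and standard deviations near 1; the clipping to `[-5, 5]` above only nudges these values slightly:
###Code
# Hypothetical check: confirm the effect of the StandardScaler on the training features.
print('column means (rounded):', train_features.mean(axis=0).round(2))
print('column stds (rounded): ', train_features.std(axis=0).round(2))
###Output
_____no_output_____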
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export. Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
###Code
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model and that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true positives + true negatives}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample. (A small numeric cross-check of these definitions appears after the baseline evaluation below.)Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would likely have no fraudulent transactions to learn from.Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
###Code
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -log_e(1/p_0 - 1) $$$$ b_0 = log_e(pos/neg)$$
###Code
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
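###Markdown
As a quick check of the derivation above (illustrative only), passing this bias back through the sigmoid should recover $p_0 = pos/total$:
###Code
# Sanity check of the bias formula: sigmoid(b0) should equal pos / (pos + neg).
b0 = np.log(pos / neg)
print(1 / (1 + np.exp(-b0)), pos / (pos + neg))  # both ~0.0017
###Output
_____no_output_____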
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) = 0.01317$$
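You can verify that number directly from $p_0$ with a minimal sketch (not part of the original tutorial; the exact result differs slightly from 0.01317 because that figure uses the rounded $p_0 \approx 0.0018$):
###Code
# Expected initial binary cross-entropy when the model predicts p0 for every example.
p_0 = pos / (pos + neg)
print(-p_0 * np.log(p_0) - (1 - p_0) * np.log(1 - p_0))  # ~0.013
###Output
_____no_output_____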
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization.This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, confirm quickly that the careful bias initialization actually helped.Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation metrics are generally better than the training metrics. This is mainly because the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
###Code
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
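###Markdown
As a cross-check of the metric definitions from earlier (illustrative only), you can recompute precision and recall by hand from the confusion-matrix counts printed above. The counts below are copied from this particular run; yours may differ because the train/test split is random:
###Code
# Hand-computed precision and recall from the baseline test counts above (tp=55, fp=12, fn=50).
tp_, fp_, fn_ = 55.0, 12.0, 50.0
print('precision:', tp_ / (tp_ + fp_))  # ~0.82, matches the reported precision
print('recall:   ', tp_ / (tp_ + fn_))  # ~0.52, matches the reported recall
###Output
_____no_output_____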
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
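###Markdown
If you want to inspect a few of those operating points numerically rather than reading them off the curve, here is a minimal sketch (not part of the original tutorial) that sweeps the decision threshold on the baseline test predictions:
###Code
# Illustrative threshold sweep: precision and recall of the baseline model at several cutoffs.
for p in [0.1, 0.25, 0.5, 0.75]:
    cm = confusion_matrix(test_labels, test_predictions_baseline > p)
    tn_, fp_, fn_, tp_ = cm.ravel()
    precision = tp_ / (tp_ + fp_) if (tp_ + fp_) else 0.0
    recall = tp_ / (tp_ + fn_) if (tp_ + fn_) else 0.0
    print('threshold={:.2f}  precision={:.3f}  recall={:.3f}'.format(p, precision, recall))
###Output
_____no_output_____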
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
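###Markdown
To see how these weights enter the loss, here is a minimal sketch with hypothetical predictions (it assumes the usual Keras behavior of scaling each example's loss contribution by the weight of its class):
###Code
# Illustrative only: weighted binary cross-entropy on a few made-up examples.
y_true = np.array([0., 0., 0., 1.])
y_pred = np.array([0.10, 0.20, 0.05, 0.60])
per_example = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
weights = np.where(y_true == 1, weight_for_1, weight_for_0)
print('unweighted loss:', per_example.mean())
print('weighted loss:  ', (weights * per_example).mean())
###Output
_____no_output_____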
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions.Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application. Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
###Code
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
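###Markdown
As a quick sanity check (illustrative), the manually resampled labels should now be roughly half positive:
###Code
# The oversampled positives and the original negatives should be about 50/50.
print(resampled_labels.mean())
###Output
_____no_output_____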
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch.The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once: since the resampled stream is roughly half negative, that takes about `2.0*neg` samples, or `np.ceil(2.0*neg/BATCH_SIZE)` batches:
###Code
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare.Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history )
###Output
_____no_output_____
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab demonstrates how to classify a highly imbalanced dataset, in which the examples in one class greatly outnumber the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
# Customize our Matplotlib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
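###Markdown
As a quick illustration (added here, not part of the original lab), compare how much the log transform compresses the spread of the transaction amounts:
###Code
# Added illustration: spread of the raw vs. log-transformed amounts.
print('Amount range    : {:.2f} to {:.2f}'.format(raw_df['Amount'].min(), raw_df['Amount'].max()))
print('Log Amount range: {:.2f} to {:.2f}'.format(cleaned_df['Log Amount'].min(), cleaned_df['Log Amount'].max()))
###Output
_____no_output_____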
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
###Code
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# train_test_split() method split arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler.This will set the mean to 0 and standard deviation to 1.Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clip (limit) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export. Look at the data distributionNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
###Code
# pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn's jointplot displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins.
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would likely have no fraudulent transactions to learn from.Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
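As a quick hand-check of the metric definitions above (an added illustration using made-up counts, not results from this lab), the formulas can be evaluated directly, along with the accuracy of a classifier that always predicts False:
###Code
# Illustrative confusion-matrix counts only -- not taken from any model in this lab.
tp, fp, tn, fn = 90, 10, 880, 20
print('accuracy :', (tp + tn) / (tp + fp + tn + fn))
print('precision:', tp / (tp + fp))
print('recall   :', tp / (tp + fn))
# The "always predict False" baseline mentioned above, using the real class counts.
print('all-negative accuracy:', neg / total)
###Output
_____no_output_____
###Markdown
Now set the training parameters and the early-stopping callback, and build the baseline model.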
###Code
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
# use the model to do prediction with model.predict()
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -log_e(1/p_0 - 1) $$$$ b_0 = log_e(pos/neg)$$
###Code
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
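###Markdown
As a sanity check (added here, not part of the original lab), passing this bias through a sigmoid should recover the positive rate `pos/total`:
###Code
# Added sanity check: sigmoid(initial_bias) should match the fraction of positive examples.
print('pos/total            :', pos / total)
print('sigmoid(initial_bias):', 1 / (1 + np.exp(-initial_bias)))
###Output
_____no_output_____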
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0\log(p_0)-(1-p_0)\log(1-p_0) = 0.01317$$
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
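###Markdown
As a cross-check (added here), the analytic value quoted above can be computed directly; the measured loss is typically somewhat higher because the randomly initialized hidden layer still perturbs the outputs away from a constant `p_0`:
###Code
# Added check: cross-entropy of a classifier that always outputs p_0 = pos/total.
p_0 = pos / total
print('expected initial loss: {:.5f}'.format(-p_0 * np.log(p_0) - (1 - p_0) * np.log(1 - p_0)))
###Output
_____no_output_____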
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization. This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, quickly confirm that the careful bias initialization actually helped. Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
# subplots() which acts as a utility wrapper and helps in creating common layouts of subplots
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
###Code
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
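###Markdown
As an additional cross-check (added here, not part of the original lab), the area under these curves can be computed numerically with scikit-learn; it should roughly match the `auc` metric Keras reported during evaluation:
###Code
# Added cross-check: numeric ROC AUC from scikit-learn for the baseline predictions.
print('Train baseline ROC AUC:', sklearn.metrics.roc_auc_score(train_labels, train_predictions_baseline.ravel()))
print('Test baseline ROC AUC :', sklearn.metrics.roc_auc_score(test_labels, test_predictions_baseline.ravel()))
###Output
_____no_output_____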
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fradulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
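###Markdown
For comparison (an added aside, assuming scikit-learn's `class_weight` utility is available), the same weights fall out of sklearn's `'balanced'` heuristic, `n_samples / (n_classes * n_samples_per_class)`:
###Code
# Added cross-check: sklearn's 'balanced' heuristic should reproduce the manual weights above.
from sklearn.utils.class_weight import compute_class_weight
balanced_weights = compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=raw_df['Class'].values)
print('Balanced weights from sklearn:', balanced_weights)
###Output
_____no_output_____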
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions.Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application. Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# Function legend() which is used to Place a legend on the axes
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Oversampling Oversample the minority classA related approach would be to resample the dataset by oversampling the minority class.
###Code
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPyYou can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
# np.arange() returns evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# np.random.choice() draws random samples (here, with replacement) from the array of candidate indices.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# numpy.concatenate() concatenates a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# numpy.random.shuffle() modifies a sequence in place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
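###Markdown
A quick check (added here) that the oversampled training set is now balanced -- the positive class was sampled up to the size of the negative class:
###Code
# Added check: both classes should now appear in equal numbers.
print('Resampled class counts:', np.bincount(resampled_labels))
###Output
_____no_output_____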
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
# With the help of tf.data.Dataset.from_tensor_slices() method, we can get the slices of an array in the form of objects
# by using tf.data.Dataset.from_tensor_slices() method.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
###Markdown
To use this dataset, you'll need the number of steps per epoch.The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
###Code
# `np.ceil()` function returns the ceil value of the input array elements
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
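###Markdown
As a quick check of that arithmetic (added note): with 284,807 total transactions and 492 positives there are 284,315 negatives, so `np.ceil(2.0 * 284315 / 2048)` comes out to 278.0 -- which matches the "Train for 278.0 steps" line in the training log below.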
###Markdown
Train on the oversampled dataNow try training the model with the resampled data set instead of using class weights to see how these methods compare.Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, it is shown in many different batches, each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training historyNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
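###Markdown
Because the resampled model was trained on roughly 50/50 data, the default 0.5 decision threshold buys its high recall at the cost of very low precision. The `plot_cm` helper used above takes an optional threshold `p` (0.5 by default), so one quick, optional way to explore that trade-off is to re-plot the confusion matrix at a stricter threshold; this cell is a sketch, not part of the original lab.
###Code
# Optional exploration: a stricter threshold trades some recall back for precision.
plot_cm(test_labels, test_predictions_resampled, p=0.9)
###Output
_____no_output_____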
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Advanced Logistic Regression in TensorFlow 2.0 Learning Objectives1. Load a CSV file using Pandas2. Create train, validation, and test sets3. Define and train a model using Keras (including setting class weights)4. Evaluate the model using various metrics (including precision and recall)5. Try common techniques for dealing with imbalanced data: Class weighting and Oversampling Introduction This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. PENDING LINK UPDATE: Each learning objective will correspond to a __TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook. Start by importing the necessary libraries for this lab.
###Code
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.1.0
###Markdown
In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
###Code
# Customize our Matplot lib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
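###Markdown
For reference, the same figure default could also be set through the `plt.rc` convenience routine mentioned above; this cell (a sketch, not part of the original lab) is equivalent to the `rcParams` assignment in the previous cell.
###Code
# Equivalent to mpl.rcParams['figure.figsize'] = (12, 10), via the plt.rc convenience routine.
plt.rc('figure', figsize=(12, 10))
###Output
_____no_output_____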
###Markdown
Data processing and exploration Download the Kaggle Credit Card Fraud data setPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
###Code
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
###Output
_____no_output_____
###Markdown
Now, let's view the statistics of the raw dataframe.
###Code
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
###Output
_____no_output_____
###Markdown
Examine the class label imbalanceLet's look at the dataset imbalance:
###Code
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
###Output
Examples:
Total: 284807
Positive: 492 (0.17% of total)
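###Markdown
Another way to read the imbalance (a quick check using the counts just computed) is the ratio of negative to positive examples, which is roughly 578 to 1.
###Code
# Roughly 578 legitimate transactions for every fraudulent one.
print('Negatives per positive: {:.0f}'.format(neg / pos))
###Output
_____no_output_____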
###Markdown
This shows the small fraction of positive samples. Clean, split and normalize the dataThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
###Code
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
###Output
_____no_output_____
###Markdown
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics; however, the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern given the lack of training data.
###Code
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# train_test_split() method split arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
###Output
_____no_output_____
###Markdown
Normalize the input features using the sklearn StandardScaler.This will set the mean to 0 and standard deviation to 1.Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
###Code
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clip (limit) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
###Output
Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)
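###Markdown
A quick sanity check (not part of the original lab) that the scaling behaved as described: the per-column mean of the training features should now be close to 0 and the standard deviation close to 1 (the +/-5 clipping shifts them very slightly).
###Code
# Means ~0 and standard deviations ~1 on the (clipped) training features.
print(train_features.mean(axis=0)[:5])
print(train_features.std(axis=0)[:5])
###Output
_____no_output_____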
###Markdown
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export (a minimal sketch follows the distribution plots below). Look at the data distribution Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:* Do these distributions make sense? * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.* Can you see the difference between the distributions? * Yes, the positive examples contain a much higher rate of extreme values.
###Code
# pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn’s jointplot displays a relationship between 2 variables (bivariate) as well as the marginal distribution of each on separate axes.
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
###Output
_____no_output_____
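###Markdown
As a concrete illustration of the caution above about preserving the preprocessing, here is a minimal sketch (not part of the original lab) of folding the fitted `StandardScaler` statistics and the +/-5 clip into the graph itself, so an exported model can normalize raw inputs on its own. `trained_clf` is a hypothetical stand-in for any trained Keras classifier over these 29 features.
###Code
# Sketch only: wrap a trained classifier so that (x - mean) / scale and the clip
# travel with the exported model. `trained_clf` is a hypothetical placeholder,
# not a model defined in this lab.
def attach_preprocessing(trained_clf):
    mean = scaler.mean_.astype('float32')
    scale = scaler.scale_.astype('float32')
    inputs = keras.layers.Input(shape=(train_features.shape[-1],))
    x = keras.layers.Lambda(
        lambda t: tf.clip_by_value((t - mean) / scale, -5.0, 5.0),
        name='standardize_and_clip')(inputs)
    outputs = trained_clf(x)
    return keras.Model(inputs, outputs)
###Output
_____no_output_____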
###Markdown
Define the model and metricsDefine a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
###Code
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
###Output
_____no_output_____
###Markdown
Understanding useful metricsNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.* **False** negatives and **false** positives are samples that were **incorrectly** classified* **True** negatives and **true** positives are samples that were **correctly** classified* **Accuracy** is the percentage of examples correctly classified> $\frac{\text{true samples}}{\text{total samples}}$* **Precision** is the percentage of **predicted** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false positives}}$* **Recall** is the percentage of **actual** positives that were correctly classified> $\frac{\text{true positives}}{\text{true positives + false negatives}}$* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample. Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time. Read more:* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) Baseline model Build the modelNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, some batches would likely have no fraudulent transactions to learn from. Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
###Code
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
###Output
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 16) 480
_________________________________________________________________
dropout_8 (Dropout) (None, 16) 0
_________________________________________________________________
dense_17 (Dense) (None, 1) 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________
###Markdown
Test run the model:
###Code
# use the model to do prediction with model.predict()
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence. With the default bias initialization the loss should be about `math.log(2) = 0.69314`
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 1.7441
###Markdown
The correct bias to set can be derived from:$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$$$ b_0 = -log_e(1/p_0 - 1) $$$$ b_0 = log_e(pos/neg)$$
###Code
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
###Output
_____no_output_____
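###Markdown
As a quick numeric check of the formula above (using the class counts computed earlier): the bias comes out around -6.36, and pushing it back through the sigmoid recovers the positive rate of roughly 0.0017.
###Code
# b0 = log(pos/neg) is roughly -6.36; sigmoid(b0) recovers pos/total, roughly 0.0017.
b0 = np.log(pos / neg)
print(b0)
print(1 / (1 + np.exp(-b0)))
###Output
_____no_output_____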
###Markdown
Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: `pos/total = 0.0018`
###Code
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
###Output
_____no_output_____
###Markdown
With this initialization the initial loss should be approximately:$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
###Code
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
###Output
Loss: 0.0275
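###Markdown
For comparison with the measured loss above, the analytic cross-entropy of a predictor that always outputs p_0 = pos/total can be computed directly; this is just a quick check of the expression given earlier.
###Code
# Cross-entropy of a constant predictor p0 = pos/total, roughly 0.013.
p0 = pos / total
print(-p0 * np.log(p0) - (1 - p0) * np.log(1 - p0))
###Output
_____no_output_____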
###Markdown
This initial loss is about 50 times less than it would have been with naive initialization. This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weightsTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
###Code
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
###Output
_____no_output_____
###Markdown
Confirm that the bias fix helpsBefore moving on, confirm quickly that the careful bias initialization actually helped. Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
###Code
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
###Output
_____no_output_____
###Markdown
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. Train the model
###Code
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
###Output
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331
Epoch 12/100
169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332
Epoch 00012: early stopping
###Markdown
Check training historyIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
###Code
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
# subplots() which acts as a utility wrapper and helps in creating common layouts of subplots
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
###Output
_____no_output_____
###Markdown
Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model. Evaluate metricsYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
###Code
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
###Output
_____no_output_____
###Markdown
Evaluate your model on the test dataset and display the results for the metrics you created above.
###Code
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
###Output
loss : 0.005941324691873794
tp : 55.0
fp : 12.0
tn : 56845.0
fn : 50.0
accuracy : 0.99891156
precision : 0.8208955
recall : 0.52380955
auc : 0.9390888
Legitimate Transactions Detected (True Negatives): 56845
Legitimate Transactions Incorrectly Detected (False Positives): 12
Fraudulent Transactions Missed (False Negatives): 50
Fraudulent Transactions Detected (True Positives): 55
Total Fraudulent Transactions: 105
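###Markdown
As a sanity check on the metric definitions given earlier, precision and recall can be recomputed directly from the confusion-matrix counts printed above (the exact numbers vary between runs because of the random split), and the earlier note about accuracy is easy to verify too: always predicting "legitimate" would already score about 99.8%.
###Code
# Recompute precision and recall from the counts printed above (run-dependent values),
# plus the accuracy of a classifier that always predicts "legitimate"
# (all actual negatives divided by the total number of test examples).
tp_, fp_, tn_, fn_ = 55, 12, 56845, 50
print('precision:', tp_ / (tp_ + fp_))
print('recall:   ', tp_ / (tp_ + fn_))
print('always-negative accuracy:', (tn_ + fp_) / (tp_ + fp_ + tn_ + fn_))
###Output
_____no_output_____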
###Markdown
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROCNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
###Code
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weightsThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
###Code
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
###Output
Weight for class 0: 0.50
Weight for class 1: 289.44
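###Markdown
A quick arithmetic check of the weights above: `weight_for_1` is total/(2*pos), about 289, and as the comment in the cell says, re-weighting leaves the effective number of examples unchanged.
###Code
# The weighted example count equals the original count (total), and weight_for_1 = total / (2 * pos).
print(neg * weight_for_0 + pos * weight_for_1)
print(total / (2.0 * pos))
###Output
_____no_output_____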
###Markdown
Train a model with class weightsNow try re-training and evaluating the model with class weights to see how that affects the predictions. Note: Using `class_weight` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
###Code
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 182276 samples, validate on 45569 samples
Epoch 1/100
182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492
Epoch 2/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605
Epoch 3/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669
Epoch 4/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709
Epoch 5/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725
Epoch 6/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728
Epoch 7/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739
Epoch 8/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769
Epoch 9/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750
Epoch 10/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761
Epoch 11/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768
Epoch 12/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771
Epoch 13/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772
Epoch 14/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 15/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781
Epoch 16/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 17/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 18/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 19/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 20/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 21/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797
Epoch 22/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777
Epoch 23/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779
Epoch 24/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 25/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 26/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 27/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786
Epoch 28/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 29/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784
Epoch 30/100
182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785
Epoch 31/100
178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.
182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788
Epoch 00031: early stopping
###Markdown
Check training history
###Code
plot_metrics(weighted_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
###Output
loss : 0.06950428275801711
tp : 94.0
fp : 905.0
tn : 55952.0
fn : 11.0
accuracy : 0.9839191
precision : 0.0940941
recall : 0.8952381
auc : 0.9844724
Legitimate Transactions Detected (True Negatives): 55952
Legitimate Transactions Incorrectly Detected (False Positives): 905
Fraudulent Transactions Missed (False Negatives): 11
Fraudulent Transactions Detected (True Positives): 94
Total Fraudulent Transactions: 105
###Markdown
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application. Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# Function legend() which is used to Place a legend on the axes
plt.legend(loc='lower right')
###Output
_____no_output_____
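###Markdown
To make the trade-off discussed above more concrete, the next cell is a small illustrative sketch (not part of the original tutorial) that computes precision, recall and an F-beta score directly from the weighted model's confusion-matrix counts printed earlier. A beta greater than 1 weights recall more heavily than precision, which matches the idea that missing a fraudulent transaction is more costly than flagging a legitimate one.
###Code
# Illustrative only: the counts are copied from the weighted model's confusion matrix above.
tp, fp, fn = 94.0, 905.0, 11.0
precision = tp / (tp + fp)
recall = tp / (tp + fn)
beta = 2.0  # beta > 1 favours recall over precision
f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
print('precision:', precision, 'recall:', recall, 'F2:', f_beta)
###Output
_____no_output_____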
###Markdown
Oversampling Oversample the minority class A related approach would be to resample the dataset by oversampling the minority class.
###Code
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
###Output
_____no_output_____
###Markdown
Using NumPy You can balance the dataset manually by choosing the right number of random indices from the positive examples:
###Code
# np.arange() return evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# choice() method, you can get the random samples of one dimensional array and return the random samples of numpy array.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# numpy.concatenate() function concatenate a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# numpy.random.shuffle() modify a sequence in-place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
###Output
_____no_output_____
###Markdown
Using `tf.data` If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
###Code
BUFFER_SIZE = 100000
def make_ds(features, labels):
# With the help of tf.data.Dataset.from_tensor_slices() method, we can get the slices of an array in the form of objects
# by using tf.data.Dataset.from_tensor_slices() method.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
###Output
_____no_output_____
###Markdown
Each dataset provides `(feature, label)` pairs:
###Code
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
###Output
Features:
[-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304
-5. 2.86676917 -4.9308611 -5. 3.58555137 -5.
1.51535494 -5. 0.01049775 -5. -5. -5.
2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978
-0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]
Label: 1
###Markdown
Merge the two together using `experimental.sample_from_datasets`:
###Code
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
###Output
0.48974609375
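###Markdown
As a quick sanity check (an illustrative addition, not part of the original tutorial), you can average the positive fraction over several resampled batches rather than a single one; with 50/50 sampling weights it should stay close to 0.5.
###Code
# Illustrative: average the positive fraction over a few batches from the resampled stream.
batch_means = [label.numpy().mean() for _, label in resampled_ds.take(10)]
print(np.mean(batch_means))
###Output
_____no_output_____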
###Markdown
To use this dataset, you'll need the number of steps per epoch. The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
###Code
# `np.ceil()` function returns the ceil value of the input array elements
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
###Output
_____no_output_____
###Markdown
Train on the oversampled data Now try training the model with the resampled data set instead of using class weights to see how these methods compare. Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
###Output
Train for 278.0 steps, validate for 23 steps
Epoch 1/100
278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799
Epoch 2/100
278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779
Epoch 3/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778
Epoch 4/100
278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783
Epoch 5/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762
Epoch 6/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748
Epoch 7/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742
Epoch 8/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 9/100
278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713
Epoch 10/100
278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717
Epoch 11/100
276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.
278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637
Epoch 00011: early stopping
###Markdown
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. Check training history Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
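###Markdown
The next cell is a small illustrative simulation (not part of the original tutorial) of the "smoother gradient signal" point made above: with the raw class ratio only a handful of positives land in each batch, while the 50/50 resampled stream puts roughly half positives in every batch. The raw positive rate used here is an assumed approximation of the fraud rate in this dataset.
###Code
# Illustrative simulation of positives per batch under the two sampling schemes.
rng = np.random.default_rng(0)
raw_pos_rate = 0.0017  # assumed approximate fraction of fraudulent examples
raw_counts = rng.binomial(BATCH_SIZE, raw_pos_rate, size=1000)
resampled_counts = rng.binomial(BATCH_SIZE, 0.5, size=1000)
print('raw sampling: mean positives per batch =', raw_counts.mean())
print('50/50 resampling: mean positives per batch =', resampled_counts.mean())
###Output
_____no_output_____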
###Markdown
Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
###Code
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
###Output
Train for 20 steps, validate for 23 steps
Epoch 1/1000
20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425
Epoch 2/1000
20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580
Epoch 3/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660
Epoch 4/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713
Epoch 5/1000
20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753
Epoch 6/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773
Epoch 7/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787
Epoch 8/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794
Epoch 9/1000
20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799
Epoch 10/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802
Epoch 11/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805
Epoch 12/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804
Epoch 13/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802
Epoch 14/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803
Epoch 15/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797
Epoch 16/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796
Epoch 17/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794
Epoch 18/1000
20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793
Epoch 19/1000
20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791
Epoch 20/1000
20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788
Epoch 21/1000
19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.
20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785
Epoch 00021: early stopping
###Markdown
Re-check training history
###Code
plot_metrics(resampled_history)
###Output
_____no_output_____
###Markdown
Evaluate metrics
###Code
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
###Output
loss : 0.3960801533448772
tp : 99.0
fp : 5892.0
tn : 50965.0
fn : 6.0
accuracy : 0.8964573
precision : 0.016524788
recall : 0.94285715
auc : 0.9804354
Legitimate Transactions Detected (True Negatives): 50965
Legitimate Transactions Incorrectly Detected (False Positives): 5892
Fraudulent Transactions Missed (False Negatives): 6
Fraudulent Transactions Detected (True Positives): 99
Total Fraudulent Transactions: 105
###Markdown
Plot the ROC
###Code
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
###Output
_____no_output_____ |
.ipynb_checkpoints/Train_and_finetune-checkpoint.ipynb | ###Markdown
Initial model evaluation
###Code
# Imports assumed by this checkpoint fragment (they are not shown in the cells above)
from time import time
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, fbeta_score
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, X_val, y, y_val = train_test_split(X_train, y_train, test_size=.1, random_state=42)
y.head()
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
start = time() # Get start time
    X_train_subset = X_train.iloc[:sample_size]
    y_train_subset = y_train.iloc[:sample_size]
learner.fit(X_train_subset, y_train_subset)
end = time() # Get end time
results['train_time'] = end - start
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end - start
    # TODO: Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# TODO: Compute accuracy on test set
results['acc_test'] = accuracy_score(y_test, predictions_test)
# TODO: Compute F-score on the the first 300 training samples
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta = .5)
# TODO: Compute F-score on the test set
results['f_test'] = fbeta_score(y_test, predictions_test, beta = .5)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
clf_A = AdaBoostClassifier(random_state=0)
clf_B = SVC(random_state=0)
clf_C = RandomForestClassifier(random_state=0)
clf_D = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=0)
# Sample sizes are fractions of the number of training records (rows), not features
samples_1 = int(.01 * X_train.shape[0])
samples_10 = int(.1 * X_train.shape[0])
samples_100 = X_train.shape[0]
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C, clf_D]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
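# Illustrative follow-up (not part of the original notebook): flatten the nested
# `results` dict into a DataFrame so the learners can be compared side by side.
import pandas as pd
summary = pd.DataFrame(
    {(name, i): run for name, runs in results.items() for i, run in runs.items()}
).T
print(summary[['train_time', 'pred_time', 'acc_test', 'f_test']])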
###Output
_____no_output_____ |
ensemble/00_prepro_unify_datasets_and_w2v.ipynb | ###Markdown
Unifying the datasets and creating a cache for w2v vectors In this work, we deal with 4 different datasets: PAN-CLEF2018 train (pre-competition), PAN-CLEF2018 eval (the actual competition), the Lyrics dataset and the socialaa dataset. Each of these datasets came in a slightly different configuration, demanding various pre-processing steps. This code is meant to unify them and also to create a filtered pre-trained word2vec model. Filtering the word2vec Pre-trained models are usually huge files, which can lead to a slow training process.
###Code
import os;
import re;
import pandas as pd;
import zipfile;
import json;
import gzip;  # used later by createEmbeddingCache to write the compressed cache file
from time import time
!pip install gensim=='3.8.0'
import gensim
from gensim.models import KeyedVectors
###Output
_____no_output_____
###Markdown
Unifying all datasets Converting the PAN-CLEF dataset into a row-style dataframe.
###Code
def readCollectionsOfProblemsZip(zip_file, project_name):
infocollection = 'collection-info.json';
problems = json.loads(zip_file.read(infocollection).decode('utf-8'));
candidates = []
for problem in problems:
problem_info = '/'.join([problem['problem-name'], 'problem-info.json'])
problem_info_obj = json.loads(zip_file.read(problem_info).decode('utf-8'));
for c in problem_info_obj['candidate-authors']:
candidate = {'problem':problem['problem-name'], 'language':problem['language'],'label': c['author-name']};
c_path = '/'.join([problem['problem-name'], c['author-name']]);
for f in zip_file.filelist :
if f.is_dir() or not f.filename.endswith('txt'):
continue;
if c_path in f.filename:
#print(f.filename)
candidates.append({
**candidate,
**{
'text':zip_file.read(f).decode(problem['encoding']),
'set':'known',
'filename':f.filename.split('/')[-1]
}
});
# reading the test set for each problem
u_path = '/'.join([problem['problem-name'], 'ground-truth.json']);
unknow_set_obj = json.loads(zip_file.read(u_path))['ground_truth'];
for f in unknow_set_obj:
u_path = '/'.join([problem['problem-name'], 'unknown',f['unknown-text']]);
#print(u_path, f)
candidate = {'problem':problem['problem-name'], 'language':problem['language'],'label': f['true-author']};
candidates.append({
**candidate,
**{
'text':zip_file.read(u_path).decode(problem['encoding']),
'set':'unknown',
'filename':f['unknown-text']
}
});
#for f in zf.filelist :
return candidates;
def readDataSet(project_name):
project_path = f'../../data/{project_name}.zip';
with zipfile.ZipFile(project_path) as zf:
df = pd.DataFrame(readCollectionsOfProblemsZip(zf, project_name));
return df;
PAN18_train = readDataSet('pan18-cross-domain-authorship-attribution-training-dataset-2017-12-02');
PAN18_test = readDataSet('pan18-cross-domain-authorship-attribution-test-dataset2-2018-04-20');
lyrics = readDataSet('Lyrics_PT_EN_2019');
lyrics.head(10)
def readSocial(path):
with zipfile.ZipFile(path) as zf:
temp = json.loads(zf.read('social_media_aa.json'))
social = [];
for p in temp:
problem = {'problem':p['problem-name'],'language':p['language']}
for c in p['candidates']:
social.append({
**problem,
**{'text':c[0],'label':c[1],'filename':c[2],'set':'known',}
})
for c in p['unknown']:
social.append({
**problem,
**{'text':c[0],'label':c[1],'filename':c[2],'set':'unknown',}
})
return pd.DataFrame(social)
social_AA = readSocial('../../data/social_media_aa.json.zip');
social_AA.head()
def addSetName(df, name):
df = df.copy();
df['dataset'] = name;
return df
allDS = pd.concat([
addSetName(PAN18_train,'pan18_train'),
addSetName(PAN18_test,'pan18_eval'),
addSetName(lyrics,'lyrics'),
addSetName(social_AA,'socialaa')
])
allDS
allDS.to_json('../../data/AllDS.json.zip', orient='records', compression='gzip')
allDS.groupby('language').count()
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
Caching embeddings Embedding files are huge and heavy. Reading them takes a lot of time. This code is meant to create a vocabulary with all possible occurrences in the present work and filter the original W2V into a small sample in order to speed up the train/test process.

|ID|Download link|Vector size|Window|Corpus|Vocabulary size|Algorithm|Lemmatization|
|---|---|---|---|---|---|---|---|
|43|http://vectors.nlpl.eu/repository/20/43.zip|100|10|French CoNLL17 corpus|2567698|Word2Vec Continuous Skipgram|False|
|52|http://vectors.nlpl.eu/repository/20/52.zip|100|10|Italian CoNLL17 corpus|2469122|Word2Vec Continuous Skipgram|False|
|62|http://vectors.nlpl.eu/repository/20/62.zip|100|10|Polish CoNLL17 corpus|4420598|Word2Vec Continuous Skipgram|False|
|63|http://vectors.nlpl.eu/repository/20/63.zip|100|10|Portuguese CoNLL17 corpus|2536452|Word2Vec Continuous Skipgram|False|
|68|http://vectors.nlpl.eu/repository/20/68.zip|100|10|Spanish CoNLL17 corpus|2656057|Word2Vec Continuous Skipgram|False|
###Code
def buildVocabulary(texts):
counter = CountVectorizer(analyzer='word', lowercase=False);
counter.fit(texts);
return counter.vocabulary_;
def loadEnModel():
return KeyedVectors.load_word2vec_format(
os.path.join('GoogleNews-vectors-negative300.bin.gz')
,binary=True);
def loadModel(filename):
with zipfile.ZipFile(filename, "r") as archive:
stream = archive.open("model.txt");
model = KeyedVectors.load_word2vec_format(stream, binary=False, unicode_errors='replace')
return model
def createEmbeddingCache(model, vocab, fname):
print("Filtering model",end=' ');t0 = time();
vocabFilter = {};
vocabFilter = {w:model[w] for w in vocab if w in model};
print("Done in %0.3fs" % (time() - t0))
#embeddingSize= len(model[vocabFilter[0]])
embeddingSize = model.vector_size;
#trying to free the memory
del model;
import gc;
gc.collect();
print({'embeddingSize':embeddingSize, 'vocabSize':len(vocab), 'vocabFound':len(vocabFilter),'per_found':len(vocabFilter)/len(vocab)})
print("Writing model",end=' ');t0 = time();
with gzip.open(os.path.join('../embedding_cache',fname), 'w') as f:
f.write(("%s %s\n"%(len(vocabFilter),embeddingSize)).encode('utf-8'))
for w in sorted(list(vocabFilter.keys())):
a = " ".join([str(f) for f in vocabFilter[w]]);
line = "%s %s\n" % (w, a)
f.write(line.encode('utf-8'))
print("Done in %0.3fs" % (time() - t0));
###Output
_____no_output_____
###Markdown
Caching English
###Code
print('Loading google w2v english - start');t0 = time();
model = loadEnModel();
print("Done in %0.3fs" % (time() - t0))
vocab = buildVocabulary(allDS.query('language == "en"')['text']);
createEmbeddingCache(model, vocab, f'w2v_en.txt.gz')
###Output
Loading google w2v english - start
Done in 155.712s
Filtering model Done in 3.823s
{'embeddingSize': 300, 'vocabSize': 60474, 'vocabFound': 48560, 'per_found': 0.8029897145880874}
Writing model Done in 36.743s
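###Markdown
As a quick check (an illustrative addition, not part of the original notebook), a cache written by createEmbeddingCache can be loaded back with gensim exactly like the original model, since it is stored in the standard word2vec text format and gzip-compressed files are handled transparently.
###Code
# Illustrative: reload the cached, filtered English embedding created above.
cached = KeyedVectors.load_word2vec_format(os.path.join('../embedding_cache', 'w2v_en.txt.gz'), binary=False)
print(cached.vector_size, len(cached.vocab))
###Output
_____no_output_____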
###Markdown
Caching the other languages
###Code
otherModels = {
'fr':'43',
'it':'52',
'pt':'63',
'pl':'62',
'sp':'68',
}
for lang, filenumber in otherModels.items():
print(f'\n\nLoading w2v {lang} - start');t0 = time();
model = loadModel(f'{filenumber}.zip');
print("Done in %0.3fs" % (time() - t0))
vocab = buildVocabulary(allDS.query(f'language == "{lang}"')['text']);
createEmbeddingCache(model, vocab, f'w2v_{lang}.txt.gz')
###Output
_____no_output_____ |
07_programacion_matematica/casos_codigo/ioperativ_clase22_warehouse_location_problem.ipynb | ###Markdown
----------------------------------- **Integer Programming with Python: Warehouse Location Case** **Universidad Tecnologica Nacional - Facultad Buenos Aires** **Industrial Engineering** **Operations Research** Author: Martin Palazzo + Caylie Cincera (https://www.youtube.com/watch?v=5I0mhX0973o) Course I4051
###Code
pip install pulp
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pulp
from pulp import LpProblem
from pulp import LpMinimize
from pulp import LpVariable
from pulp import LpBinary
from pulp import *
###Output
_____no_output_____
###Markdown
$$\begin{matrix}c_{ij} & = & \text{cost of shipping from warehouse } j \text{ to customer } i \\x_{ij} & = &\text{quantity shipped from warehouse } j \text{ to customer } i \\u_j & = &\text{fixed cost of opening warehouse } j \\w_j & = &\text{binary variable: warehouse } j \text{ is opened} \\d_i & = &\text{demand of customer } i \\q_j & = &\text{capacity of warehouse } j\end{matrix}$$ $$\begin{matrix}\underset{x,w}{\text{min }} & \sum_{i = 1}^{n}\sum_{j = 1}^{m} c_{ij} x_{ij} + \sum_{j = 1}^{m} u_j w_j \\ & \\\text{s.t. } & \sum_{j = 1}^{m} x_{ij} = d_i \\ \\& \sum_{i = 1}^{n} x_{ij} \leq q_j w_j \\& x_{ij} \leq d_i w_j \\& x_{ij} \geq 0 \\& w_{j} \in \{ 0,1\}\end{matrix}$$
###Code
# define the list of customers
clientes = [1,2,3,4,5]
# define the list of warehouses
warehouses = ['w1','w2','w3']
# demand of each customer
demanda = {1:80,
2:270,
3:250,
4:160,
5:180}
# fixed cost of opening each warehouse
wcost = {'w1':1000,'w2':1000,'w3':1000}
# maximum quantity each warehouse can ship (capacity)
max_q = {'w1':500,'w2':500,'w3':500}
# warehouse-to-customer transportation cost
transp_c = {'w1': {1 : 4, 2:5 , 3:6 , 4:8, 5:10},
'w2': {1 : 6, 2:4 , 3:3 , 4:5, 5:8},
'w3': {1 : 9, 2:7 , 3:4 , 4:3, 5:4}}
# define the problem in PuLP
opt = LpProblem("warehouseLocation", LpMinimize)
# define the decision variables x_ij (quantity shipped from warehouse j to customer i)
xij = LpVariable.dicts("Servicio", [(i,j) for i in clientes
for j in warehouses], 0)
# define the binary warehouse-opening variables w_j
Uj = LpVariable.dicts("UsarLocacion", warehouses,0,1,LpBinary)
# objective function
opt += lpSum(wcost[j]*Uj[j] for j in warehouses) + lpSum(transp_c[j][i]*xij[(i,j)] for j in warehouses for i in clientes)
# demand constraints: each customer's demand must be fully served
for i in clientes:
opt += lpSum(xij[(i,j)] for j in warehouses) == demanda[i]
# capacity and linking constraints: a warehouse can ship only if it is opened
for j in warehouses:
opt += lpSum(xij[(i,j)] for i in clientes) <= max_q[j]*Uj[j]
for i in clientes:
for j in warehouses:
opt += xij[(i,j)] <= demanda[i]*Uj[j]
# solve the model
opt.solve()
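# Illustrative addition (not in the original): report the solver status.
# LpStatus is available through pulp's wildcard import above.
print("Status:", LpStatus[opt.status])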
# print the warehouse-opening decision variables w_j
tol = 0.00001
for i in warehouses:
if Uj[i].varValue > tol:
print("Construir un warehouse en el sitio ",i)
# print the shipment decision variables x_ij
for q in opt.variables():
print(q.name,"=",q.varValue)
# print the objective function value at the optimum
print('The total cost of the operation is = ', value(opt.objective))
###Output
_____no_output_____ |
Db2 Console RESTful APIs Hands-on Lab.ipynb | ###Markdown
Using the Db2 Console RESTful Service Class Db2 Console Class for Cloud Pak for Data This Jupyter Notebook uses a reusable Python class library that encapsulates some best practices of how to use the Open APIs that are available for Db2 running in Cloud Pak for Data. Everything in the Db2 Console is available through an open RESTful Services API. The full set of APIs are documented as part of the Db2 Data Management Console user interface. Where to find this sample online You can find a copy of this notebook at https://github.com/Db2-DTE-POC/CPDDVHOL4 Let's get started by loading the db2console.ipynb class library notebook, which is also available on GIT. The commands below copy the reusable library from GIT onto the local Cloud Pak for Data filesystem and run the Python file to create the Db2 Console API Class. To check out the reusable code on GIT click the following link: https://github.com/Db2-DTE-POC/CPDDVHOL4/blob/main/Db2ConsoleAPIClassforCPD.ipynb
###Code
!wget -O Db2ConsoleAPIClassforCPD.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVHOL4/main/Db2ConsoleAPIClassforCPD.ipynb
%run Db2ConsoleAPIClassforCPD.ipynb
print('Db2ConsoleAPIClassforCPD.ipynb loaded')
###Output
--2022-04-19 13:19:45-- https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVHOL4/main/Db2ConsoleAPIClassforCPD.ipynb
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29423 (29K) [text/plain]
Saving to: ‘Db2ConsoleAPIClassforCPD.ipynb’
Db2ConsoleAPIClassf 100%[===================>] 28.73K --.-KB/s in 0.002s
2022-04-19 13:19:45 (17.3 MB/s) - ‘Db2ConsoleAPIClassforCPD.ipynb’ saved [29423/29423]
Db2ConsoleAPIClassforCPD.ipynb loaded
###Markdown
Db2 Data Management Console Connection The first step is to create an instance of the Db2Console class (a Python object). The next cell creates an object called **databaseAPI**. The rest of this lab calls functions that are part of that object. The **databaseAPI** object creation requires the URL of the Cloud Pak for Data Console as well as the name of the Data Management Console instance running on Cloud Pak for Data. To authenticate a connection to the console API you also need a valid Cloud Pak for Data userid and password as well as a default database instance to connect to.
###Code
# Set the service URL to connect from inside the ICPD Cluster
Console = 'https://cpd-cpd-instance.apps.demo.ibmdte.net:31192'
# Connect to the Db2 Data Management Console service
user = 'admin'
password = 'CP4DDataFabric'
# Set up the required connection
databaseAPI = Db2Console(Console, 'dmc-1635311028943779')
api = '/v1'
databaseAPI.authenticate(api, user, password, 'db2wh-1635951043918331')
database = Console
###Output
Token Retrieved
###Markdown
If the connection was successfully established, the new object contains a reusable token that is used to reconnect to the console API service for each function call. You don't need to ever use the token in your code, but if you want to see what a secure token looks like run the next cell.
###Code
databaseAPI.getBearerToken()
###Output
_____no_output_____
###Markdown
Confirm the connection To confirm that your connection is working you can list the Console connection profiles. Each profile represents a connection to one of the available Db2 Warehouse, Db2 OLTP or Data Virtualization databases in Cloud Pak for Data.
###Code
databaseAPI.getConnectionProfiles()
###Output
_____no_output_____
###Markdown
Catalog Functions Now that you are connected to a specific database, in this example 'ONTIME', you can call functions that let you access catalog information. You can get a list of the schemas in the Ontime database. The cell below retrieves all the rows in a dataframe and displays the first 5.
###Code
databaseAPI.getSchemas().head(5)
###Output
_____no_output_____
###Markdown
Search and Count Tables and Views You can also use capabilities that are built into the console. For example you can find out how many tables include the text "AIRCRAFT" or search all the views (both user and catalog views) that include the text "TABLES". By default the functions below only search user tables. Adding "true" to the function call also searches the system tables. Try running the cell below. Then try changing "true" to "false" and see how the results differ.
###Code
display(databaseAPI.getSearchTableList("AIRCRAFT"))
display(databaseAPI.getSearchViewList("TABLE", "true"))
###Output
_____no_output_____
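###Markdown
As suggested above, the same view search can be re-run with "false" so that only user views are searched (a small illustrative cell; it uses the same getSearchViewList function shown earlier).
###Code
# Same search as above, but excluding the system catalog views
display(databaseAPI.getSearchViewList("TABLE", "false"))
###Output
_____no_output_____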
###Markdown
Tables in a Schema The next function call returns the first five tables contained in the "ONTIME" schema.
###Code
databaseAPI.getTablesInSchema("ONTIME").head(5)
###Output
_____no_output_____
###Markdown
Fuzzy object search The next function returns a list of either the tables or views that match the search text. You can specify the number of rows in the result set (in this example 5) and specify whether you want to search user objects or user and system objects (in this example true means searching both).
###Code
display(databaseAPI.searchObjects('view',"TABLE",5,'true'))
display(databaseAPI.searchObjects('table',"AIRLINE",5,'true'))
###Output
_____no_output_____
###Markdown
Running Scripts and Workloads The Db2Console class can also call the SQL Editor service to run Db2 scripts. This isn't limited to single SQL statements. Scripts that include multiple statements are also supported. To make it easy to run the same script against different databases, the function call requires the connection profile name, the userid and password, and the SQL script text. The next cell runs three SQL statements. The database connection and authentication are included in the call. It returns a JSON string that includes details on each statement, its runtime, column types, the limit of returned rows, the full row count in the result set, and the actual results up to the row limit.
###Code
sql = \
'''
SELECT TABSCHEMA, TABNAME, STATUS FROM SYSCAT.TABLES;
SELECT VIEWSCHEMA, VIEWNAME, VALID FROM SYSCAT.VIEWS;
SELECT TABSCHEMA, TABNAME, COLNAME, TYPENAME, LENGTH FROM SYSCAT.COLUMNS
'''
user = 'admin'
password = 'CP4DDataFabric'
profile = 'dv-1635944153872816'
display(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
###Markdown
To make it easier to see the results, the displayResults function parses the JSON into a readable format. Formatting Results
###Code
databaseAPI.displayResults(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
###Markdown
The number of cells returned is limited to 10 by default. You can add an additional parameter to the runScript command to return a much larger result set, which you can then manipulate in Python. The returnRows function converts the JSON result into a dataframe. It requires the JSON that is returned by runScript and the index of the SQL result you want to work with. Returning Results as Dataframes
###Code
json = databaseAPI.runScript(profile, user, password, sql, 10000)
df = databaseAPI.returnRows(json,0)
display(df.head(5))
display(df.tail(5))
df = databaseAPI.returnRows(json,1)
display(df.head(5))
display(df.tail(5))
df = databaseAPI.returnRows(json,2)
display(df.head(5))
display(df.tail(5))
###Output
_____no_output_____
###Markdown
Running Workloads and Measuring Results One of the most powerful functions built into the Db2Console class is **runWorkload**. It lets you run multiple scripts against multiple databases in a loop. This is particularly useful for demonstrating Db2 monitoring or for measuring the performance of SQL against different databases. In the next example, two scripts are run repeatedly against all the databases currently cataloged by the Db2 Console. The runtimes are collected along the way and returned in a dataframe.
###Code
profileList = ['db2oltp-1636379315142134','db2oltp-1635953643145137','db2wh-1635951043918331']
sql1 = \
'''
SELECT TABSCHEMA, TABNAME, STATUS FROM SYSCAT.TABLES;
SELECT VIEWSCHEMA, VIEWNAME, VALID FROM SYSCAT.VIEWS;
'''
sql2 = \
'''
SELECT TABSCHEMA, TABNAME, COLNAME, TYPENAME, LENGTH FROM SYSCAT.COLUMNS
'''
user = 'admin'
password = 'CP4DDataFabric'
scriptList = [sql1, sql2]
profileReps = 2
scriptReps = 2
pause = 0.25
df = databaseAPI.runWorkload(profileList, scriptList, user, password, profileReps, scriptReps,pause)
display(df)
###Output
_____no_output_____
###Markdown
Now we can use the dataframe to look at the results statistically. First we can see the average runtime for each statement across the databases.
###Code
print('Mean runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtMean = df.groupby(['statement']).mean()
print(stmtMean)
###Output
_____no_output_____
###Markdown
We can also display the total runtime for each statement across databases.
###Code
print('Total runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtSum = df.groupby(['statement']).sum()
print(stmtSum)
###Output
_____no_output_____
###Markdown
We can even graph the total run time for all the statements and compare database performance. Since there are more rows in the Db2 Warehouse database tables, the statements may take longer to return a result.
###Code
print('Total runtime per database in ms')
pd.set_option('display.max_colwidth', 100)
profileSum = df.groupby(['profile']).sum()
profileSum.plot(kind='bar')
plt.show()
###Output
_____no_output_____
###Markdown
Of course you can also analyze the slowest database by using some simple dataframe functions. The slowest database has the maximum total elapsed runtime.
###Code
print('Slowest Database')
slowestProfile = profileSum['runtime_ms'].idxmax()
print(slowestProfile)
###Output
_____no_output_____
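###Markdown
The same idea works in reverse (an illustrative addition): idxmin identifies the database with the smallest total elapsed runtime.
###Code
print('Fastest Database')
fastestProfile = profileSum['runtime_ms'].idxmin()
print(fastestProfile)
###Output
_____no_output_____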
###Markdown
Additional Options Just like using the SQL Editor you can also specify whether to stop the script if it encounters an error or to continue. You can also specify the separator character between individual statements. Here is the full function call with defaults and options: runScript(profile, user, password, sqlText, limit=10, separator=';', stopOnError=False): ADMIN_CMD Commands and Calling Stored Procedures The SQL Editor can also be used to execute stored procedure calls. In this example, the procedure call updates statistics on the STOCKS.CUSTOMER table in the STOCKS OLTP database.
###Code
sql = \
'''
CALL SYSPROC.ADMIN_CMD ('RUNSTATS ON TABLE STOCKS.CUSTOMER ON KEY COLUMNS and INDEXES ALL');
'''
user = 'admin'
password = 'CP4DDataFabric'
profile = 'db2oltp-1635953643145137'
display(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
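###Markdown
The next cell is a small illustrative sketch of the optional runScript parameters described in the Additional Options section above: it raises the row limit, uses @ as the statement separator, and stops on the first error. The parameter names follow the signature shown earlier.
###Code
# Illustrative: use the optional runScript parameters (limit, separator, stopOnError)
sql = \
'''
SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES@
SELECT VIEWSCHEMA, VIEWNAME FROM SYSCAT.VIEWS
'''
databaseAPI.displayResults(
    databaseAPI.runScript(profile, user, password, sql, limit=100, separator='@', stopOnError=True))
###Output
_____no_output_____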
###Markdown
Current Metrics Functions Now that you can run a workload to exercise Db2, you can measure what is going on. The following function calls let you see what applications are connected to the "Ontime" database, see any statements that are currently in-flight and see the frequently used statements stored in the Db2 package cache. The includeSystem parameter defines whether applications or statements generated by Db2 itself or the Db2 Console are included in the results. Let's include all statements.
###Code
includeSystem = "true"
###Output
_____no_output_____
###Markdown
The next cell lists the applications that are currently connected to the database.
###Code
databaseAPI.getCurrentApplicationsConnections(includeSystem)
###Output
_____no_output_____
###Markdown
This next cell lists any statements currently running
###Code
databaseAPI.getInflightCurrentList(includeSystem)
###Output
_____no_output_____
###Markdown
Finally this example returns the list of every statement currently in the Db2 Package Cache. This gives a good representation of statements that are frequently used.
###Code
databaseAPI.getCurrentPackageCacheList(includeSystem).head(5)
###Output
_____no_output_____
###Markdown
Timeseries Monitoring Functions One of the key capabilities of the Db2 Console is that it collects historical monitoring information as timeseries data. Each of the examples below has a parallel page in the Db2 Console. The next set of functions returns monitoring data based on a start and end time. The console and Db2 use EPOCH time, which is the number of milliseconds since January 1st 1970. The cell below sets startTime and endTime. endTime is the current time. startTime is set to 12 hours earlier.
###Code
import time
from datetime import date
oneHour = 3600000
endTime = int(time.time())*1000
startTime = endTime-(oneHour*12)
###Output
_____no_output_____
###Markdown
Time Based Metrics - Summary Functions The following functions return a total summary of the number of user statements that have run over the last 12 hours as well as the average response time in ms over that same period.
###Code
databaseAPI.getStatementsCount(startTime, endTime)
databaseAPI.getAverageResponseTime(startTime, endTime)
###Output
_____no_output_____
###Markdown
Time Based Metrics - Interval Measurement Functions The following functions return a measurement for each monitoring interval over the last 12 hours. The examples below return average response time and total rows read during each monitoring interval. The last 5 intervals are displayed.
###Code
databaseAPI.getResponseTime(startTime, endTime).tail(5)
databaseAPI.getRowsRead(startTime, endTime).tail(5)
###Output
_____no_output_____
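###Markdown
Because these functions return ordinary dataframes, the interval measurements can also be plotted directly (an illustrative addition; the exact column names depend on what the console API returns).
###Code
# Illustrative: plot the rows-read measurement for each monitoring interval
rowsRead = databaseAPI.getRowsRead(startTime, endTime)
rowsRead.plot(figsize=(10, 4))
plt.show()
###Output
_____no_output_____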
###Markdown
Time Based Metrics - Object Functions The following functions return monitoring data over the last 12 hours with a summary row for each object. The latest 5 entries are displayed. This first call returns metrics for tables used in the last 12 hours.
###Code
databaseAPI.getTablesMetrics(startTime, endTime, includeSystem).tail(5)
###Output
_____no_output_____
###Markdown
The next statement returns details of individual statements that ran over the last twelve hours.
###Code
databaseAPI.getIndividualStatementExecution(startTime, endTime).tail(5)
###Output
_____no_output_____
###Markdown
Finally this statement returns a history of the statements that were found in the package cache over the last 12 hours.
###Code
databaseAPI.getPackageCacheStatement(startTime, endTime, includeSystem).tail(5)
###Output
_____no_output_____
###Markdown
Using the Db2 Console RESTful Service Class Db2 Console Class for Cloud Pak for DataThis Jupyter Notebook uses a reusable Python class library that encapsulates come best practices of how to use the Open APIs that are available for Db2 running in Cloud Pak for Data. Everything in the Db2 Console is available through an open RESTful Services API. The full set of APIs are documented as part of the Db2 Data Management Console user interface. Where to find this sample onlineYou can find a copy of this notebook at https://github.com/Db2-DTE-POC/CPDDVHOL4 Let's get started by loading the db2console.ipynb class library notebook, which is also available on GIT. The commands below copy the reusable library from GIT onto the local Cloud Pak for Data filesystem and runs the python file to create the Db2 Console API Class. To check out the reusable code on GIT click the follwing link: https://github.com/Db2-DTE-POC/CPDDVHOL4/blob/main/Db2ConsoleAPIClassforCPD.ipynb
###Code
!wget -O Db2ConsoleAPIClassforCPD.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVHOL4/main/Db2ConsoleAPIClassforCPD.ipynb
%run Db2ConsoleAPIClassforCPD.ipynb
print('Db2ConsoleAPIClassforCPD.ipynb loaded')
###Output
_____no_output_____
###Markdown
Db2 Data Management Console ConnectionThe first step is to create an instance of the Db2Console class (a Python object). The next cell creates an object called **databaseAPI**. The rest of this lab calls functions that are part of that object.The **databaseAPI** object creation requires the URL of the Cloud Pak for Data Console as well as the name of the Data Management Console instance running on Cloud Pak for Data. To authenticate a connection to the console API you also need a valid Cloud Pak for Data userid and password as well as a default database instance to connect to.
###Code
# Set the service URL to connect from inside the ICPD Cluster
Console = 'https://cpd-cpd-instance.apps.demo.ibmdte.net:31192'
# Connect to the Db2 Data Management Console service
user = 'admin'
password = 'CP4DDataFabric'
# Set up the required connection
databaseAPI = Db2Console(Console, 'dmc-1635311028943779')
api = '/v1'
databaseAPI.authenticate(api, user, password, 'db2wh-1635951043918331')
database = Console
###Output
_____no_output_____
###Markdown
If the connection was successfully established, the new object contains a reusable token that is used to reconnect to the console API service for each function call. You don't need to ever use the token in your code, but if you want to see what a secure token looks like run the next cell.
###Code
databaseAPI.getBearerToken()
###Output
_____no_output_____
###Markdown
Confirm the connectionTo confirm that your connection is working you can list the Console connection profiles. Each profile represents a connection to one of the available Db2 Warehouse, Db2 OLTP or Data Virtualization databases in Cloud Pak for Data.
###Code
databaseAPI.getConnectionProfiles()
###Output
_____no_output_____
###Markdown
Catalog FunctionsNow that you are connected to a specific database, in this example 'ONTIME', you can call functions that let you access catalog information. You can get a list of the schemas in the Ontime database. The cell below retrieves all the rows in a dataframe and displays the first 5.
###Code
databaseAPI.getSchemas().head(5)
###Output
_____no_output_____
###Markdown
Search and Count Tables and ViewsYou can also use capabilities that are built into the console. For example you can find out how many tables include the text "AIRCRAFT" or seach all the views (both user and catalog views) that include the text "TABLES". By default the functions below only search user tables. Adding "true" to the function call also searches the system tables. Try running the cell below. Then try changing "true" to "false" and see the different result.
###Code
display(databaseAPI.getSearchTableList("AIRCRAFT"))
display(databaseAPI.getSearchViewList("TABLE", "true"))
###Output
_____no_output_____
###Markdown
Tables in a SchemaThe next function all returns the first five tables contained in the "ONTIME" schema
###Code
databaseAPI.getTablesInSchema("ONTIME").head(5)
###Output
_____no_output_____
###Markdown
Fuzy object searchThe next function returns a list of either the tables or views that match search text. You can specify the number of rows in the result set (in this example 5) and specify whether you want to search user object or user and system objects (in this example true means searching both).
###Code
display(databaseAPI.searchObjects('view',"TABLE",5,'true'))
display(databaseAPI.searchObjects('table',"AIRLINE",5,'true'))
###Output
_____no_output_____
###Markdown
Running Scripts and WorkloadsThe Db2Console class can also call the SQL Editor service to run Db2 scripts. This isn't limited to single SQL statments. Scripts that include multiple statements are also supported. To make it easy to run the same script against different databases, the fucntion call requires the connection profile name, the userid and password and the sql script text. The next cell runs three SQL statements. The database connection and authentication is included in the call. It returns a JSON string that include details on each statement, its runtime, column types, the limit of returned rows, the full row count in the result set, and the actual results to the row limit.
###Code
sql = \
'''
SELECT TABSCHEMA, TABNAME, STATUS FROM SYSCAT.TABLES;
SELECT VIEWSCHEMA, VIEWNAME, VALID FROM SYSCAT.VIEWS;
SELECT TABSCHEMA, TABNAME, COLNAME, TYPENAME, LENGTH FROM SYSCAT.COLUMNS
'''
user = 'admin'
password = 'CP4DDataFabric'
profile = 'dv-1635944153872816'
display(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
###Markdown
To make it easier to see the results, the displayResults function parses the JSON into a readable format. Formatting Results
###Code
databaseAPI.displayResults(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
###Markdown
The number of cells returned is limited to 10 by default. You can add an additional parameter to the runScript command to return a much larger result set, which you can then manipulate in Python. The returnRows function converts the JSON result into a dataframe. It requires the json that is returned by runScipt and the index of the SQL result you want to work with. Returning Results as Dataframes
###Code
json = databaseAPI.runScript(profile, user, password, sql, 10000)
df = databaseAPI.returnRows(json,0)
display(df.head(5))
display(df.tail(5))
df = databaseAPI.returnRows(json,1)
display(df.head(5))
display(df.tail(5))
df = databaseAPI.returnRows(json,2)
display(df.head(5))
display(df.tail(5))
###Output
_____no_output_____
###Markdown
Running Workloads and Measuring Results One of the most powerful functions built into the Db2Console class is **runWorkload**. It lets you run multiple scripts against multiple databases in a loop. This is particularly useful for demonstrating Db2 monitoring or for measuring the performance of SQL against different databases.In the next example, two scripts are run repeatedly against all the databases currently cataloged by the Db2 Console.The runtimes are collected along the way and returned in a dataframe.
###Code
profileList = ['db2oltp-1636379315142134','db2oltp-1635953643145137','db2wh-1635951043918331']
sql1 = \
'''
SELECT TABSCHEMA, TABNAME, STATUS FROM SYSCAT.TABLES;
SELECT VIEWSCHEMA, VIEWNAME, VALID FROM SYSCAT.VIEWS;
'''
sql2 = \
'''
SELECT TABSCHEMA, TABNAME, COLNAME, TYPENAME, LENGTH FROM SYSCAT.COLUMNS
'''
user = 'admin'
password = 'CP4DDataFabric'
scriptList = [sql1, sql2]
profileReps = 2
scriptReps = 2
pause = 0.25
df = databaseAPI.runWorkload(profileList, scriptList, user, password, profileReps, scriptReps,pause)
display(df)
###Output
_____no_output_____
###Markdown
Now we can use the results in the dataframe to look at the results statistically. First we can see the average runtime for each statement across the databases.
###Code
print('Mean runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtMean = df.groupby(['statement']).mean()
print(stmtMean)
###Output
_____no_output_____
###Markdown
We can also display the total runtime for each statement across databases.
###Code
print('Total runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtSum = df.groupby(['statement']).sum()
print(stmtSum)
###Output
_____no_output_____
###Markdown
We can even graph the total run time for all the statements can compare database performance. Since there are more rows in the Db2 Warehouse database tables the statements may take longer to return a result.
###Code
print('Mean runtime in ms')
pd.set_option('display.max_colwidth', 100)
profileSum = df.groupby(['profile']).sum()
profileSum.plot(kind='bar')
plt.show()
###Output
_____no_output_____
###Markdown
Of course you can also analyze the slowest database by using some simple dataframe functions. The slowest database has the maximum total elapsed runtime.
###Code
print('Slowest Database')
slowestProfile = profileSum['runtime_ms'].idxmax()
print(slowestProfile)
###Output
_____no_output_____
###Markdown
Additional OptionsJust like using the SQL Editor you can also specify whether to stop the script if it encounters an error or to continue. You can also specify the separator character between individual statements. Here is the full function call with defaults and options: runScript(profile, user, password, sqlText, limit=10, separator=';', stopOnError=False). A short usage sketch follows the next cell. ADMIN_CMD Commands and Calling Stored ProceduresThe SQL Editor can also be used to execute stored procedure calls. In this example, the procedure call updates statistics on the STOCKS.CUSTOMER table in the STOCKS OLTP database.
###Code
sql = \
'''
CALL SYSPROC.ADMIN_CMD ('RUNSTATS ON TABLE STOCKS.CUSTOMER ON KEY COLUMNS and INDEXES ALL');
'''
user = 'admin'
password = 'CP4DDataFabric'
profile = 'db2oltp-1635953643145137'
display(databaseAPI.runScript(profile, user, password, sql))
###Output
_____no_output_____
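As a quick illustration of the optional runScript parameters described above, here is a short sketch; the SQL statements and parameter values are only examples, and the call simply follows the signature quoted earlier.

```python
# Illustrative only: exercise the optional runScript parameters.
# Use '@' as the statement separator, stop at the first error, and
# return a larger result set (up to 100) than the default limit of 10.
sql = '''
SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES@
SELECT VIEWSCHEMA, VIEWNAME FROM SYSCAT.VIEWS@
'''
result = databaseAPI.runScript(profile, user, password, sql,
                               limit=100, separator='@', stopOnError=True)
databaseAPI.displayResults(result)
```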
###Markdown
Current Metrics FunctionsNow that you can run a workload to exercise Db2, you can measure what is going on. The following function calls let you see what applications are connected to the "Ontime" database, see any statements that are currently in-flight, and see the frequently used statements stored in the Db2 package cache. The includeSystem parameter defines whether applications or statements generated by Db2 itself or the Db2 Console are included in the results. Let's include all statements.
###Code
includeSystem = "true"
###Output
_____no_output_____
###Markdown
The next cell lists the applications that are currently connected to the database
###Code
databaseAPI.getCurrentApplicationsConnections(includeSystem)
###Output
_____no_output_____
###Markdown
This next cell lists any statements currently running
###Code
databaseAPI.getInflightCurrentList(includeSystem)
###Output
_____no_output_____
###Markdown
Finally this example returns the list of every statement currently in the Db2 Package Cache. This gives a good representation of statements that are frequently used.
###Code
databaseAPI.getCurrentPackageCacheList(includeSystem).head(5)
###Output
_____no_output_____
###Markdown
Timeseries Monitoring FunctionsOne of the key capabilities of the Db2 Console is that it collects historical monitoring information as timeseries data. Each of the examples below has a parallel page in the Db2 Console. The next set of functions returns monitoring data based on a start and end time. The console and Db2 use EPOCH time, which is the number of milliseconds since January 1st, 1970. The cell below sets startTime and endTime: endTime is set to the current time and startTime to 12 hours earlier.
###Code
import time
from datetime import date
oneHour = 3600000
endTime = int(time.time())*1000
startTime = endTime-(oneHour*12)
###Output
_____no_output_____
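If you want to double-check the window, the epoch-millisecond boundaries can be converted back to readable datetimes with the standard library (purely illustrative):

```python
# Illustrative: convert the epoch-millisecond boundaries back to readable datetimes
from datetime import datetime
print('start:', datetime.fromtimestamp(startTime / 1000))
print('end:  ', datetime.fromtimestamp(endTime / 1000))
```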
###Markdown
Time Based Metrics - Summary FunctionsThe following functions return a total summary of the number of user statements that have run over the last 12 hours as well as the average response time in ms over that same period.
###Code
databaseAPI.getStatementsCount(startTime, endTime)
databaseAPI.getAverageResponseTime(startTime, endTime)
###Output
_____no_output_____
###Markdown
Time Based Metrics - Interval Measurement FunctionsThe following functions return a measurement for each monitoring interval over the last 12 hours. The examples below return average response time and total rows read during each monitoring interval. The last 5 intervals are displayed
###Code
databaseAPI.getResponseTime(startTime, endTime).tail(5)
databaseAPI.getRowsRead(startTime, endTime).tail(5)
###Output
_____no_output_____
###Markdown
Time Based Metrics - Object FunctionsThe following functions return monitoring data over the last 12 hours with a summary row for each object. The latest 5 entries are displayed. This first call returns metrics for tables used in the last 12 hours.
###Code
databaseAPI.getTablesMetrics(startTime, endTime, includeSystem).tail(5)
###Output
_____no_output_____
###Markdown
The next statement returns details of individual statements that ran over the last twelve hours.
###Code
databaseAPI.getIndividualStatementExecution(startTime, endTime).tail(5)
###Output
_____no_output_____
###Markdown
Finally this statement returns a history of the statements that were found in the package cache over the last 12 hours.
###Code
databaseAPI.getPackageCacheStatement(startTime, endTime, includeSystem).tail(5)
###Output
_____no_output_____ |
code/.ipynb_checkpoints/Project-1-checkpoint.ipynb | ###Markdown
Rover Project Test NotebookThis notebook contains the functions from the lesson and provides the scaffolding you need to test out your mapping methods. The steps you need to complete in this notebook for the project are the following:* First just run each of the cells in the notebook, examine the code and the results of each.* Run the simulator in "Training Mode" and record some data. Note: the simulator may crash if you try to record a large (longer than a few minutes) dataset, but you don't need a ton of data, just some example images to work with. * Change the data directory path (2 cells below) to be the directory where you saved data* Test out the functions provided on your data* Write new functions (or modify existing ones) to report and map out detections of obstacles and rock samples (yellow rocks)* Populate the `process_image()` function with the appropriate steps/functions to go from a raw image to a worldmap.* Run the cell that calls `process_image()` using `moviepy` functions to create video output* Once you have mapping working, move on to modifying `perception.py` and `decision.py` to allow your rover to navigate and map in autonomous mode!**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".****Run the next cell to get code highlighting in the markdown cells.**
###Code
%%HTML
<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window (note it may show up behind your browser)
# Make some of the relevant imports
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
import imageio
imageio.plugins.ffmpeg.download()
###Output
_____no_output_____
###Markdown
Quick Look at the DataThere's some example data provided in the `test_dataset` folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator. Next, read in and display a random image from the `test_dataset` folder
###Code
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Calibration DataRead in and display example grid and rock sample calibration images. You'll use the grid for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
###Code
# In the simulator you can toggle on a grid on the ground for calibration
# You can also toggle on the rock samples with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.imshow(grid_img)
plt.subplot(122)
plt.imshow(rock_img)
###Output
_____no_output_____
###Markdown
Perspective TransformDefine the perspective transform function from the lesson and test it on an image.
###Code
# Define a function to perform a perspective transform
# I've used the example grid image above to choose source points for the
# grid cell in front of the rover (each grid cell is 1 square meter in the sim)
# Define a function to perform a perspective transform
def perspect_transform(img, src, dst):
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))# keep same size as input image
mask = cv2.warpPerspective(np.ones_like(img[:,:,0]), M, (img.shape[1], img.shape[0])) # warp an all-ones image so the mask marks the camera's field of view
return warped,mask
# Define calibration box in source (actual) and destination (desired) coordinates
# These source and destination points are defined to warp the image
# to a grid where each 10x10 pixel square represents 1 square meter
# The destination box will be 2*dst_size on each side
dst_size = 5
# Set a bottom offset to account for the fact that the bottom of the image
# is not the position of the rover but a bit in front of it
# this is just a rough guess, feel free to change it!
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
warped,mask = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
#scipy.misc.imsave('../output/warped_example.jpg', warped)
###Output
_____no_output_____
###Markdown
Color ThresholdingDefine the color thresholding function from the lesson and apply it to the warped image**TODO:** Ultimately, you want your map to not just include navigable terrain but also obstacles and the positions of the rock samples you're searching for. Modify this function or write a new function that returns the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that you can map these areas into world coordinates as well. **Hints and Suggestions:** * For obstacles you can just invert your color selection that you used to detect ground pixels, i.e., if you've decided that everything above the threshold is navigable terrain, then everything below the threshold must be an obstacle!* For rocks, think about imposing a lower and upper boundary in your color selection to be more specific about choosing colors. You can investigate the colors of the rocks (the RGB pixel values) in an interactive matplotlib window to get a feel for the appropriate threshold range (keep in mind you may want different ranges for each of R, G and B!). Feel free to get creative and even bring in functions from other libraries. Here's an example of [color selection](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html) using OpenCV. * **Beware However:** if you start manipulating images with OpenCV, keep in mind that it defaults to `BGR` instead of `RGB` color space when reading/writing images, so things can get confusing. An illustrative sketch of obstacle and rock thresholds follows the next cell.
###Code
# Identify pixels above the threshold
# A threshold of RGB > 160 does a nice job of identifying ground pixels only; a slightly stricter default of 180 is used here
def color_thresh(img, rgb_thresh=(180, 180, 180)):
# Create an array of zeros same xy size as img, but single channel
color_select = np.zeros_like(img[:,:,0])
# Require that each pixel be above all three threshold values in RGB
# above_thresh will now contain a boolean array with "True"
# where threshold was met
above_thresh = (img[:,:,0] > rgb_thresh[0]) \
& (img[:,:,1] > rgb_thresh[1]) \
& (img[:,:,2] > rgb_thresh[2])
# Index the array of zeros with the boolean array and set to 1
color_select[above_thresh] = 1
# Return the binary image
return color_select
plt.imshow(warped)
threshed = color_thresh(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
###Output
_____no_output_____
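Following the hints above, here is one illustrative sketch (not the required solution) of how obstacle and rock selection could look. The thresholds are guesses to tune by inspecting the calibration images, and a similar `find_rocks` function appears later in this notebook.

```python
# One possible approach (illustrative thresholds, tune as needed):
# obstacles are simply everything the navigable-terrain threshold rejected,
# and rocks are yellow, i.e. strong red and green channels but weak blue.

def obstacle_thresh(navigable_binary):
    # invert the navigable-terrain selection
    # (in process_image the inverted map is also multiplied by the field-of-view mask)
    return 1 - navigable_binary

def rock_thresh(img, rgb_thresh=(110, 110, 50)):
    rock_pix = (img[:,:,0] > rgb_thresh[0]) \
             & (img[:,:,1] > rgb_thresh[1]) \
             & (img[:,:,2] < rgb_thresh[2])
    rock_select = np.zeros_like(img[:,:,0])
    rock_select[rock_pix] = 1
    return rock_select
```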
###Markdown
Coordinate TransformationsDefine the functions used to do coordinate transforms and apply them to an image.
###Code
# Define a function to convert from image coords to rover coords
def rover_coords(binary_img):
# Identify nonzero pixels
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to the rover position being at the
# center bottom of the image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2 ).astype(np.float)
return x_pixel, y_pixel
# Define a function to convert to radial coords in rover space
def to_polar_coords(x_pixel, y_pixel):
# Convert (x_pixel, y_pixel) to (distance, angle)
# in polar coordinates in rover space
# Calculate distance to each pixel
dist = np.sqrt(x_pixel**2 + y_pixel**2)
# Calculate angle away from vertical for each pixel
angles = np.arctan2(y_pixel, x_pixel)
return dist, angles
# Define a function to map rover space pixels to world space
def rotate_pix(xpix, ypix, yaw):
# Convert yaw to radians
yaw_rad = yaw * np.pi / 180
xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
# Return the result
return xpix_rotated, ypix_rotated
def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
# Apply a scaling and a translation
xpix_translated = (xpix_rot / scale) + xpos
ypix_translated = (ypix_rot / scale) + ypos
# Return the result
return xpix_translated, ypix_translated
# Define a function to apply rotation and translation (and clipping)
# Once you define the two functions above this function should work
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
# Apply rotation
xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
# Apply translation
xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
# Perform rotation, translation and clipping all at once
x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
# Return the result
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped, mask = perspect_transform(image, source, destination)
threshed = color_thresh(warped)
# Calculate pixel values in rover-centric coords and distance/angle to all pixels
xpix, ypix = rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
def find_rocks(img,levels=(110,110,50)):
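# rocks are yellow: strong red and green channels but a weak blue channel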
rockpix = ((img[:,:,0]>levels[0])&(img[:,:,1]>levels[1])&(img[:,:,2]<levels[2]))
rock_image = np.zeros_like(img[:,:,0])
rock_image[rockpix] = 1
return rock_image
rock_map = find_rocks(rock_img)
fig = plt.figure(figsize = (12,3))
plt.subplot(121)
plt.imshow(rock_img)
plt.subplot(122)
plt.imshow(rock_map,cmap='gray')
###Output
_____no_output_____
###Markdown
Read in saved data and ground truth map of the worldThe next cell is all set up to read your saved data into a `pandas` dataframe. Here you'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain. After that, we'll define a class to store telemetry data and pathnames to images. When you instantiate this class (`data = Databucket()`) you'll have a global variable called `data` that you can refer to for telemetry and map data within the `process_image()` function in the following cell.
###Code
# Import pandas and read in csv file as a dataframe
import pandas as pd
# Change the path below to your data directory
# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator
# change the '.' to ','
df = pd.read_csv('../test_dataset/robot_log.csv', delimiter=';', decimal='.')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)
# Creating a class to be the data container
# Will read in saved data from csv file and populate this object
# Worldmap is instantiated as 200 x 200 grids corresponding
# to a 200m x 200m space (same size as the ground truth map: 200 x 200 pixels)
# This encompasses the full range of output position values in x and y from the sim
class Databucket():
def __init__(self):
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
self.count = 0 # This will be a running index
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a Databucket().. this will be a global variable/object
# that you can refer to in the process_image() function below
data = Databucket()
###Output
_____no_output_____
###Markdown
Write a function to process stored imagesModify the `process_image()` function below by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this `process_image()` function in conjunction with the `moviepy` video processing package to create a video from the images you saved taking data in the simulator. In short, you will be passing individual images into `process_image()` and building up an image called `output_image` that will be stored as one frame of video. You can make a mosaic of the various steps of your analysis process and add text as you like (example provided below). To start with, you can simply run the next three cells to see what happens, but then go ahead and modify them such that the output video demonstrates your mapping process. Feel free to get creative!
###Code
# Define a function to pass stored images to
# reading rover position and yaw angle from csv file
# This function will be used by moviepy to create an output video
def process_image(img):
# Example of how to use the Databucket() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
image = np.copy(img)
# TODO:
# 1) Define source and destination points for perspective transform
dst_size = 5
bottom_offset = 6
source = np.float32([[14, 140], [301 ,140],[200, 96], [118, 96]])
destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
])
# 2) Apply perspective transform
warped, mask = perspect_transform(image, source, destination)
# 3) Apply color threshold to identify navigable terrain/obstacles/rock samples
threshold = color_thresh(warped)
obs_map = np.absolute(np.float32(threshold)-1) * mask
# 4) Convert thresholded image pixel values to rover-centric coords
xpix, ypix = rover_coords(threshold)
obs_xpix, obs_ypix = rover_coords(obs_map)
# 5) Convert rover-centric pixel values to world coords
x_pix_world, y_pix_world = pix_to_world(xpix, ypix, data.xpos[data.count], data.ypos[data.count], data.yaw[data.count], 200,10)
obs_x_pix_world, obs_y_pix_world = pix_to_world(obs_xpix, obs_ypix, data.xpos[data.count], data.ypos[data.count], data.yaw[data.count], 200,10)
# 6) Update worldmap (to be displayed on right side of screen)
# Example: data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
# data.worldmap[rock_y_world, rock_x_world, 1] += 1
# data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
data.worldmap[y_pix_world, x_pix_world, 2] = 255
data.worldmap[obs_y_pix_world, obs_x_pix_world, 0] = 255
navpix = data.worldmap[:,:,2]>0
data.worldmap[navpix,0] =0
rock_map = find_rocks(warped,levels=(110,110,50))
if rock_map.any():
rock_x,rock_y = rover_coords(rock_map)
rock_x_world, rock_y_world = pix_to_world(rock_x,rock_y,data.xpos[data.count], data.ypos[data.count], data.yaw[data.count], 200,10)
data.worldmap[rock_y_world,rock_x_world,:] = 255
# 7) Make a mosaic image, below is some example code
# First create a blank image (can be whatever shape you like)
output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
# Next you can populate regions of the image with various output
# Here I'm putting the original image in the upper left hand corner
output_image[0:img.shape[0], 0:img.shape[1]] = img
# Let's create more images to add to the mosaic, first a warped image
warped,mask = perspect_transform(img, source, destination)
# Add the warped image in the upper right hand corner
output_image[0:img.shape[0], img.shape[1]:] = warped
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)
# Then putting some text over the image
cv2.putText(output_image,"Populate this image with your analyses to make a video!", (20, 20),
cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
if data.count < len(data.images) - 1:
data.count += 1 # Keep track of the index in the Databucket()
return output_image
###Output
_____no_output_____
###Markdown
Make a video from processed image dataUse the [moviepy](https://zulko.github.io/moviepy/) library to process images and create a video.
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
data = Databucket() # Re-initialize data in case you're running this cell multiple times
clip = ImageSequenceClip(data.images, fps=60) # Note: output video will be sped up because
# recording rate in simulator is fps=25
new_clip = clip.fl_image(process_image) #NOTE: this function expects color images!!
%time new_clip.write_videofile(output, audio=False)
###Output
[MoviePy] >>>> Building video ../output/test_mapping.mp4
[MoviePy] Writing video ../output/test_mapping.mp4
###Markdown
This next cell should function as an inline video playerIf this fails to render the video, try running the following cell (alternative video rendering method). You can also simply have a look at the saved mp4 in your `/output` folder
###Code
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
###Output
_____no_output_____
###Markdown
Below is an alternative way to create a video in case the above cell did not work.
###Code
import io
import base64
video = io.open(output, 'r+b').read()
encoded_video = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded_video.decode('ascii')))
###Output
_____no_output_____ |
apache-spark/python/k-means-clustering/Seed Clustering.ipynb | ###Markdown
Explore Data
###Code
data = spark.read.csv('../data/seeds_dataset.csv', inferSchema=True, header=True)
data.printSchema()
data.head(1)
data.count()
# we know there are 3 different kinds of wheat, so we use K = 3
from pyspark.ml.clustering import KMeans
###Output
_____no_output_____
###Markdown
Create Feature Set
###Code
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=data.columns,outputCol='features')
with_features = assembler.transform(data).select('features')
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from pyspark.ml.feature import StandardScaler
scaler = StandardScaler(inputCol='features', outputCol='scaled_features')
scalar_model = scaler.fit(with_features)
scaled_data = scalar_model.transform(with_features)
scaled_data.select('scaled_features').head(1)
###Output
_____no_output_____
###Markdown
Train KMeans Model
###Code
kmeans = KMeans(featuresCol='scaled_features', k=3)
model = kmeans.fit(scaled_data)
###Output
_____no_output_____
###Markdown
Interpret Cluster Results
###Code
model.computeCost(scaled_data)
model.clusterCenters()
results = model.transform(scaled_data)
results.select('prediction').show()
###Output
+----------+
|prediction|
+----------+
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 1|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 0|
| 2|
+----------+
only showing top 20 rows
|
notebooks/demo_benchmark.ipynb | ###Markdown
Benchmark data
###Code
cml.load_dataset(
"s2s-ai-challenge-test-output-benchmark",
parameter="t2m",
).to_xarray()["t2m"]
cml.load_dataset(
"s2s-ai-challenge-test-output-benchmark",
parameter="tp",
).to_xarray()
cml.load_dataset("s2s-ai-challenge-test-output-benchmark", parameter=["t2m", "tp"]).to_xarray()
###Output
_____no_output_____ |
notebooks/3110 - Full tank dataset - Cut metrics.ipynb | ###Markdown
Plot the histogram of the two observables 1. Sum of charges in an event 2. Number of PMTs hit 1. Sum of charges in an event
###Code
train_batch_size = 1024
dset=WCH5Dataset(path, 0.1, 0.1, reduced_dataset_size=300000)
train_loader = DataLoader(dset, batch_size=train_batch_size, shuffle=False,
num_workers=4, sampler=SubsetRandomSampler(dset.train_indices))
event_charges = []
event_timings = []
labels = []
for data in iter(train_loader):
event_charges.append(data[0][:,:,:,:19].numpy())
event_timings.append(data[0][:,:,:,19:].numpy())
labels.append(data[1].numpy())
print(len(event_charges), event_charges[0].shape, len(event_timings), event_timings[0].shape)
print(len(labels), labels[0].shape)
event_charges = np.concatenate(event_charges, axis=0)
event_timings = np.concatenate(event_timings, axis=0)
labels = np.concatenate(labels, axis=0)
print(event_charges.shape, event_timings.shape, labels.shape)
###Output
(240000, 40, 40, 19) (240000, 40, 40, 19) (240000,)
###Markdown
Distribution of the labels
###Code
print(Counter(labels))
sum_event_charges = np.sum(event_charges.reshape(event_charges.shape[0], -1), axis=1)
sum_event_timings = np.sum(event_timings.reshape(event_timings.shape[0], -1), axis=1)
label_dict = {0:["gamma","red"], 1:["e","blue"], 2:["mu","green"]}
print(sum_event_charges.shape, sum_event_timings.shape)
###Output
(240000,) (240000,)
###Markdown
Plot the per event sum charge distribution
###Code
event_charge_dict = {}
for label in label_dict.keys():
event_charge_dict[label] = sum_event_charges[labels == label].flatten()
fig, axes = plt.subplots(3, 1, figsize=(32,18), sharex=True)
for label in label_dict.keys():
axes[label].hist(event_charge_dict[label], bins=200, density=False, label=label_dict[label][0], alpha=0.8,
color=label_dict[label][1])
axes[label].legend(prop={"size":30})
axes[label].set_xlabel("Total event charge", fontsize=30)
axes[label].set_ylabel("Frequency", fontsize=30)
axes[label].tick_params(axis="both", labelsize=30)
axes[label].set_yscale("log")
axes[label].grid(True, which="both", axis="both")
###Output
_____no_output_____
###Markdown
2. Number of PMTs hit in an event PMT charge distribution in a single event
###Code
event_charge_dict = {}
for label in label_dict.keys():
label_events = event_charges[labels == label]
label_events = label_events.reshape(label_events.shape[0], -1)
event_charge_dict[label] = label_events.flatten() # assumed completion (original line was truncated): flatten to per-PMT charges for this class
fig, axes = plt.subplots(3, 1, figsize=(32,18), sharex=True)
for label in label_dict.keys():
axes[label].hist(event_charge_dict[label], bins=200, density=False, label=label_dict[label][0], alpha=0.8,
color=label_dict[label][1])
axes[label].legend(prop={"size":30})
axes[label].set_xlabel("Total event charge", fontsize=30)
axes[label].set_ylabel("Frequency", fontsize=30)
axes[label].tick_params(axis="both", labelsize=30)
axes[label].set_yscale("log")
axes[label].grid(True, which="both", axis="both")
###Output
_____no_output_____
###Markdown
So we will treat any non-zero charge PMT as a hit
###Code
event_charge_dict = {}
for label in label_dict.keys():
label_events = event_charges[labels == label]
label_events = label_events.reshape(label_events.shape[0], -1)
label_events = [event[event > 0] for event in label_events]
label_events = [event.shape[0] for event in label_events]
event_charge_dict[label] = label_events
fig, axes = plt.subplots(3, 1, figsize=(32,18), sharex=True)
for label in label_dict.keys():
axes[label].hist(event_charge_dict[label], bins=200, density=False, label=label_dict[label][0], alpha=0.8,
color=label_dict[label][1], range=(0, 200))
axes[label].legend(prop={"size":30})
axes[label].set_xlabel("Number of PMT hits", fontsize=30)
axes[label].set_ylabel("Frequency", fontsize=30)
axes[label].tick_params(axis="both", labelsize=30)
axes[label].set_yscale("log")
axes[label].grid(True, which="both", axis="both")
axes[label].set_ylim(bottom=1)
###Output
_____no_output_____
###Markdown
Example showing what is happening in the snippet above
###Code
events = event_charges[0:2]
events = events.reshape(events.shape[0], -1)
print(events.shape)
events = [event[event > 0] for event in events]
print(len(events))
print(events[0].shape, events[1].shape)
events = [event.shape[0] for event in events]
print(events)
###Output
[823, 215]
|
tutorials/nlp/Token_Classification-BioMegatron.ipynb | ###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
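As a quick, purely illustrative sanity check, each sentence in the text file should have exactly one IOB label per whitespace-separated token in the labels file:

```python
# Illustrative sanity check: one IOB label per whitespace-separated token
with open(f'{NER_DATA_DIR}/text_train.txt') as f_text, \
     open(f'{NER_DATA_DIR}/labels_train.txt') as f_labels:
    for i, (sent, labs) in enumerate(zip(f_text, f_labels)):
        assert len(sent.split()) == len(labs.split()), f'token/label mismatch on line {i}'
print('text_train.txt and labels_train.txt are aligned')
```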
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Convert the Megatron-LM Weights to Nemo fileIf you prefer to use the Huggingface BERT models, please skip this section and refer to `Setting up a NeMo Experiment` section to load a model from `nemo_nlp.modules.get_pretrained_lm_models_list()`NeMo Megatron BERT can [load from a pretrained model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html?highlight=nemo%20filerestore) using `.nemo` file. We can convert the Megatron-LM checkpoint to the `.nemo` file. Let's first download the pretrained model weights and vocabulary file.
###Code
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP
import pathlib
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
checkpoint_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['checkpoint']
vocab_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['vocab']
checkpoint_filename = pathlib.Path(checkpoint_url).name
vocab_filename = pathlib.Path(vocab_url).name
if not pathlib.Path(checkpoint_filename).exists():
print('downloading from checkpoint url', checkpoint_url)
!wget $checkpoint_url
if not pathlib.Path(vocab_filename).exists():
print('downloading from vocab url', vocab_url)
!wget $vocab_url
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
# Prepare the model parameters
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
MODEL_CONFIG = "megatron_bert_config.yaml"
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
config.model.num_layers = 24
config.model.hidden_size = 1024
config.model.ffn_hidden_size = 4096
config.model.num_attention_heads = 16
config.model.tokenizer.vocab_file = vocab_filename
config.model.tokenizer.type = 'BertWordPieceCase'
config.model.tensor_model_parallel_size = 1
config.model.data.data_prefix = ''
config.model.max_position_embeddings = 512
config.model.data.seq_length = 512
config.cfg = {}
config.cfg.cfg = config.model
with open('hparams.yaml', 'w') as f:
f.write(OmegaConf.to_yaml(config.cfg))
import os
PWD = os.getcwd()
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py')
!python -m torch.distributed.run --nproc_per_node=1 megatron_lm_ckpt_to_nemo.py --checkpoint_folder=$PWD --checkpoint_name=$checkpoint_filename --hparams_file=$PWD/hparams.yaml --nemo_file_path=$PWD/biomegatron.nemo --model_type=bert --tensor_model_parallel_size=1
###Output
_____no_output_____
###Markdown
Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'
config.trainer.devices = 1
config.trainer.accelerator = accelerator
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.strategy = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
os.makedirs(WORK_DIR, exist_ok=True)
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
To load the pretrained BERT LM model, we can either load it from the converted `.nemo` file as shown above or load it from a list of included model names. We can get the list of names by following command ```python complete list of supported BERT-like modelsprint(nemo_nlp.modules.get_pretrained_lm_models_list())```We can change the `model.language_mode` config to use it```python add the specified above model parameters to the configconfig.model.language_model.pretrained_model_name = MODEL_NAME```In this notebook, we will use the converted `.nemo` file as our LM model, which is BioMegatron, [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) pre-trained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) biomedical text corpus.
###Code
# add the specified above model parameters to the config
# config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
config.model.language_model.nemo_file = 'biomegatron.nemo'
config.model.language_model.pretrained_model_name = 'megatron-bert-cased'
config.model.tokenizer.vocab_file='vocab.txt'
config.model.tokenizer.tokenizer_model = 'BertWordPieceCase'
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to the way we did it earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.half().evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
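As an aside before downloading: these HTML-style annotations are easy to parse. The short sketch below is plain Python and illustrative only (the category label in the snippet is made up for the example); it extracts tagged mentions from a string in the format shown above:
```python
# Minimal sketch (illustrative only): extract annotated mentions from a snippet
# that uses the <category="...">...</category> tag format shown above.
import re

snippet = ('Identification of APC2 , a homologue of the '
           '<category="Modifier">adenomatous polyposis coli tumour</category> suppressor .')

for label, mention in re.findall(r'<category="([^"]+)">(.*?)</category>', snippet):
    print(label, '->', mention)
```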
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:! head -1 $DATA_DIR/NCBI_corpus_testing.txt We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = 'DATA_DIR/NER'
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv DATA_DIR/NER/devel.tsv DATA_DIR/NER/dev.tsv
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=DATA_DIR/NER/train.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/dev.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = 'ddp'
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
9288106 Clustering of missense mutations in the <category="Modifier">ataxia-telangiectasia</category> gene in a <category="SpecificDisease">sporadic T-cell leukaemia</category>. <category="SpecificDisease">Ataxia-telangiectasia</category> ( <category="SpecificDisease">A-T</category> ) is a <category="DiseaseClass">recessive multi-system disorder</category> caused by mutations in the ATM gene at 11q22-q23 ( ref . 3 ) . The risk of <category="DiseaseClass">cancer</category> , especially <category="DiseaseClass">lymphoid neoplasias</category> , is substantially elevated in <category="Modifier">A-T</category> patients and has long been associated with chromosomal instability . By analysing <category="Modifier">tumour</category> DNA from patients with <category="SpecificDisease">sporadic T-cell prolymphocytic leukaemia</category> ( <category="SpecificDisease">T-PLL</category> ) , a rare <category="DiseaseClass">clonal malignancy</category> with similarities to a <category="SpecificDisease">mature T-cell leukaemia</category> seen in <category="SpecificDisease">A-T</category> , we demonstrate a high frequency of ATM mutations in <category="SpecificDisease">T-PLL</category> . In marked contrast to the ATM mutation pattern in <category="SpecificDisease">A-T</category> , the most frequent nucleotide changes in this <category="DiseaseClass">leukaemia</category> were missense mutations . These clustered in the region corresponding to the kinase domain , which is highly conserved in ATM-related proteins in mouse , yeast and Drosophila . The resulting amino-acid substitutions are predicted to interfere with ATP binding or substrate recognition . Two of seventeen mutated <category="SpecificDisease">T-PLL</category> samples had a previously reported <category="Modifier">A-T</category> allele . In contrast , no mutations were detected in the p53 gene , suggesting that this <category="Modifier">tumour</category> suppressor is not frequently altered in this <category="DiseaseClass">leukaemia</category> . Occasional missense mutations in ATM were also found in <category="Modifier">tumour</category> DNA from patients with <category="SpecificDisease">B-cell non-Hodgkins lymphomas</category> ( <category="SpecificDisease">B-NHL</category> ) and a <category="Modifier">B-NHL</category> cell line . The evidence of a significant proportion of loss-of-function mutations and a complete absence of the normal copy of ATM in the majority of mutated <category="DiseaseClass">tumours</category> establishes somatic inactivation of this gene in the pathogenesis of <category="SpecificDisease">sporadic T-PLL</category> and suggests that ATM acts as a <category="Modifier">tumour</category> suppressor . As constitutional DNA was not available , a putative hereditary predisposition to <category="SpecificDisease">T-PLL</category> will require further investigation . .
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
total 1.5M
-rw-r--r-- 1 root root 196K Apr 8 00:56 devel.tsv
-rw-r--r-- 1 root root 201K Apr 8 00:56 test.tsv
-rw-r--r-- 1 root root 1.1M Apr 8 00:56 train.tsv
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
[NeMo I 2022-04-08 00:57:03 import_from_iob_format:119] Processing DATA_DIR/NER/train.tsv
[NeMo I 2022-04-08 00:57:03 import_from_iob_format:124] Processing of the DATA_DIR/NER/train.tsv is complete
[NeMo I 2022-04-08 00:57:06 import_from_iob_format:119] Processing DATA_DIR/NER/dev.tsv
[NeMo I 2022-04-08 00:57:06 import_from_iob_format:124] Processing of the DATA_DIR/NER/dev.tsv is complete
[NeMo I 2022-04-08 00:57:08 import_from_iob_format:119] Processing DATA_DIR/NER/test.tsv
[NeMo I 2022-04-08 00:57:08 import_from_iob_format:124] Processing of the DATA_DIR/NER/test.tsv is complete
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
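After that, a quick way to get a feel for the label distribution is to count the tags. Here is a minimal sketch (assuming the files generated by the conversion step above):
```python
# Minimal sketch: count how often each IOB tag occurs in the training labels.
from collections import Counter

with open(f"{NER_DATA_DIR}/labels_train.txt") as f:
    tag_counts = Counter(tag for line in f for tag in line.split())

print(tag_counts)  # expect mostly 'O', with far fewer 'B-Disease' / 'I-Disease'
```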
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
O O O O O O O O B-Disease I-Disease I-Disease I-Disease O O
O B-Disease I-Disease I-Disease I-Disease I-Disease I-Disease I-Disease O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
O O O O O O O O O
O B-Disease I-Disease O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
O O O O O O O O O O O O O
O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
O O O O O O O O O O O O O O O O O O O O O O O O O O B-Disease I-Disease O O
O O O O O O O
O O
O O O O O O O O O O O B-Disease O
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
MODEL_CONFIG = "token_classification_config.yaml"
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
# in this tutorial, train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'
config.trainer.devices = 1
config.trainer.accelerator = accelerator
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.strategy = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
os.makedirs(WORK_DIR, exist_ok=True)
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
To load the pretrained BERT LM model, we can get the list of available model names with the following command
###Code
from nemo.collections.nlp.models.language_modeling.megatron_bert_model import MegatronBertModel
print([model.pretrained_model_name for model in MegatronBertModel.list_available_models()])
###Output
_____no_output_____
###Markdown
We can change the `model.language_model` config to use it```python add the specified above model parameters to the configconfig.model.language_model.pretrained_model_name = MODEL_NAME```In this notebook, we will use 'biomegatron345m_biovocab_30k_cased', which is BioMegatron, a [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) model pre-trained on the [PubMed](https://pubmed.ncbi.nlm.nih.gov/) biomedical text corpus.
###Code
# add the model parameters specified above to the config
# config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
config.model.language_model.lm_checkpoint = None
config.model.language_model.pretrained_model_name = 'biomegatron345m_biovocab_30k_cased'
config.model.tokenizer.tokenizer_name = None
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
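Since the predictions are per-token IOB tags, a small, hypothetical post-processing helper (not a NeMo API) like the one below can merge token/tag pairs back into entity spans:
```python
# Hypothetical helper (not part of NeMo): merge per-token IOB tags into entity spans.
def iob_to_spans(tokens, tags):
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

tokens = "The risk of cancer , especially lymphoid neoplasias , is substantially elevated .".split()
tags = "O O O B-Disease O O B-Disease I-Disease O O O O O".split()
print(iob_to_spans(tokens, tags))  # [('cancer', 'Disease'), ('lymphoid neoplasias', 'Disease')]
```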
###Code
model_ner.half().evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:! head -1 $DATA_DIR/NCBI_corpus_testing.txt We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = 'DATA_DIR/NER'
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv DATA_DIR/NER/devel.tsv DATA_DIR/NER/dev.tsv
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=DATA_DIR/NER/train.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/dev.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
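As a quick sanity check before setting it, OmegaConf can report whether a mandatory `???` value is still unset; a minimal sketch, relying only on standard OmegaConf behaviour (`is_missing` and the `MissingMandatoryValue` error):

```python
from omegaconf import OmegaConf
from omegaconf.errors import MissingMandatoryValue

# '???' in the YAML marks a mandatory value that has not been filled in yet.
print(OmegaConf.is_missing(config.model.dataset, "data_dir"))  # True until we set it below

try:
    _ = config.model.dataset.data_dir  # reading an unset mandatory value raises an error
except MissingMandatoryValue:
    print("model.dataset.data_dir must be set before training")
```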
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.distributed_backend = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 5
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions in the same way we did earlier.
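Besides the file-based evaluation used below, the token classification model also supports ad-hoc queries; the following is only a sketch, assuming the `add_predictions` helper shown in NeMo's general token classification tutorial is available on this model class:

```python
# Sketch of query-based inference; `add_predictions` is assumed to be available
# on TokenClassificationModel, as in NeMo's token classification tutorial.
queries = [
    "Ataxia - telangiectasia is an autosomal recessive disorder .",
]
results = model_ner.add_predictions(queries)
for query, result in zip(queries, results):
    print(f"Query : {query}")
    print(f"Result: {result}")
```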
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
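Before configuring the model, it can also be useful to check how the IOB labels are distributed in the converted training data; a small pure-Python sketch:

```python
from collections import Counter

# Count IOB tags in the converted training labels file to get a feel for class balance.
label_counts = Counter()
with open(f"{NER_DATA_DIR}/labels_train.txt") as labels_file:
    for line in labels_file:
        label_counts.update(line.split())

print(label_counts)  # expect 'O' to dominate, with far fewer 'B' and 'I' tags
```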
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions in the same way we did earlier.
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:! head -1 $DATA_DIR/NCBI_corpus_testing.txt We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = os.path.join(DATA_DIR, 'NER')
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 5
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions in the same way we did earlier.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:! head -1 $DATA_DIR/NCBI_corpus_testing.txt We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = os.path.join(DATA_DIR, 'NER')
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
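To double-check the alignment on the converted files themselves, you can zip the first sentence of the text file with the first line of the labels file; a small pure-Python sketch:

```python
import os

# Illustrative: print the tokens of the first training sentence next to their IOB labels.
with open(os.path.join(NER_DATA_DIR, "text_train.txt")) as text_file, \
     open(os.path.join(NER_DATA_DIR, "labels_train.txt")) as labels_file:
    words = text_file.readline().split()
    labels = labels_file.readline().split()

for word, label in zip(words, labels):
    print(f"{word:20} {label}")
```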
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions in the same way we did earlier.
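After training, the fine-tuned model can also be saved to a single `.nemo` archive and restored later; a minimal sketch using the `save_to`/`restore_from` methods NeMo models inherit from `ModelPT` (the file name here is arbitrary):

```python
# Persist the fine-tuned model as a single .nemo archive and restore it later,
# e.g. for inference in a separate session.
model_ner.save_to("ner_biomegatron.nemo")
restored_ner = nemo_nlp.models.TokenClassificationModel.restore_from("ner_biomegatron.nemo")
```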
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Convert the Megatron-LM Weights to Nemo fileIf you prefer to use the Huggingface BERT models, please skip this section and refer to `Setting up a NeMo Experiment` section to load a model from `nemo_nlp.modules.get_pretrained_lm_models_list()`NeMo Megatron BERT can [load from a pretrained model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html?highlight=nemo%20filerestore) using `.nemo` file. We can convert the Megatron-LM checkpoint to the `.nemo` file. Let's first download the pretrained model weights and vocabulary file.
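If you are unsure which Megatron checkpoints your NeMo installation knows about, the keys of `MEGATRON_CONFIG_MAP` (imported in the next cell) list them; a quick sketch:

```python
# Peek at the Megatron checkpoint names this NeMo installation is aware of.
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP
print(sorted(MEGATRON_CONFIG_MAP.keys()))
```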
###Code
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP
import pathlib
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
checkpoint_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['checkpoint']
vocab_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['vocab']
checkpoint_filename = pathlib.Path(checkpoint_url).name
vocab_filename = pathlib.Path(vocab_url).name
if not pathlib.Path(checkpoint_filename).exists():
print('downloading from checkpoint url', checkpoint_url)
!wget $checkpoint_url
if not pathlib.Path(vocab_filename).exists():
print('downloading from vocab url', vocab_url)
!wget $vocab_url
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
# Prepare the model parameters
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
MODEL_CONFIG = "megatron_bert_config.yaml"
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
config.model.num_layers = 24
config.model.hidden_size = 1024
config.model.ffn_hidden_size = 4096
config.model.num_attention_heads = 16
config.model.tokenizer.vocab_file = vocab_filename
config.model.tokenizer.type = 'BertWordPieceCase'
config.model.tensor_model_parallel_size = 1
config.model.data.data_prefix = ''
config.model.max_position_embeddings = 512
config.model.data.seq_length = 512
config.cfg = {}
config.cfg.cfg = config.model
with open('hparams.yaml', 'w') as f:
f.write(OmegaConf.to_yaml(config.cfg))
import os
PWD = os.getcwd()
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py')
!python -m torch.distributed.run --nproc_per_node=1 megatron_lm_ckpt_to_nemo.py --checkpoint_folder=$PWD --checkpoint_name=$checkpoint_filename --hparams_file=$PWD/hparams.yaml --nemo_file_path=$PWD/biomegatron.nemo --model_type=bert --tensor_model_parallel_size=1
###Output
_____no_output_____
###Markdown
Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields must be specified by the user. Let's now add the data directory path to the config.
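After running the next cell, the mandatory field should no longer be missing. A quick check like the sketch below (an assumption on OmegaConf's API for missing values, not part of the original tutorial) can confirm it:

```python
# Sketch: verify that the mandatory data_dir field is no longer missing ('???').
# `config` is the OmegaConf object loaded in the Model configuration cell above.
from omegaconf import OmegaConf

if OmegaConf.is_missing(config.model.dataset, "data_dir"):
    print("model.dataset.data_dir is still '???' - set it before training")
else:
    print("data_dir is set to:", config.model.dataset.data_dir)
```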
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
os.makedirs(WORK_DIR, exist_ok=True)
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
To load the pretrained BERT LM model, we can either load it from the converted `.nemo` file as shown above or load it from a list of included model names. We can get the list of names with the following command ```python complete list of supported BERT-like modelsprint(nemo_nlp.modules.get_pretrained_lm_models_list())```We can change the `model.language_model` config to use it```python add the specified above model parameters to the configconfig.model.language_model.pretrained_model_name = MODEL_NAME```In this notebook, we will use the converted `.nemo` file as our LM model, which is BioMegatron, a [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) pre-trained on the [PubMed](https://pubmed.ncbi.nlm.nih.gov/) biomedical text corpus.
###Code
# add the specified above model parameters to the config
# config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
config.model.language_model.nemo_file = 'biomegatron.nemo'
config.model.language_model.pretrained_model_name = 'megatron-bert-cased'
config.model.tokenizer.vocab_file='vocab.txt'
config.model.tokenizer.tokenizer_model = 'BertWordPieceCase'
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to what we did earlier.
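Besides file-based evaluation, the fine-tuned model can also tag free-form sentences. The sketch below is an assumption based on the `add_predictions` helper exposed by `TokenClassificationModel` in recent NeMo releases; the exact method name and signature may differ in your version.

```python
# Sketch only: tag a few raw queries with the fine-tuned NER model.
# `model_ner` is the TokenClassificationModel trained above; `add_predictions`
# is assumed to be available in your NeMo version.
queries = [
    "Clustering of missense mutations in the ataxia-telangiectasia gene in a sporadic T-cell leukaemia.",
    "Myotonic dystrophy is associated with an unstable CTG repeat.",
]
for tagged in model_ner.add_predictions(queries):
    print(tagged)
```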
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.half().evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell: `! head -1 $DATA_DIR/NCBI_corpus_testing.txt` We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = 'DATA_DIR/NER'
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
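The conversion script writes one sentence per line to `text_*.txt` and the matching space-separated tags to `labels_*.txt`. The toy sketch below is not the actual script, and the IOB tags are shown as plain B/I/O for illustration (the real corpus may use typed tags such as B-Disease); it only shows the shape of that transformation.

```python
# Toy sketch (not the NeMo conversion script): turn one IOB-formatted CoNLL block
# into the two parallel lines NeMo expects.
conll_block = """Identification\tO
of\tO
APC2\tO
,\tO
a\tO
homologue\tO
of\tO
the\tO
adenomatous\tB
polyposis\tI
coli\tI
tumour\tI
suppressor\tO
.\tO"""

words, tags = [], []
for row in conll_block.splitlines():
    word, tag = row.split("\t")
    words.append(word)
    tags.append(tag)

print(" ".join(words))  # one line of text_train.txt
print(" ".join(tags))   # the matching line of labels_train.txt
```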
###Code
! mv DATA_DIR/NER/devel.tsv DATA_DIR/NER/dev.tsv
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=DATA_DIR/NER/train.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/dev.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields must be specified by the user. Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = 'DDP'
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to what we did earlier.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell: `! head -1 $DATA_DIR/NCBI_corpus_testing.txt` We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = 'DATA_DIR/NER'
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv DATA_DIR/NER/devel.tsv DATA_DIR/NER/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=DATA_DIR/NER/train.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/dev.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
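The next cells download this config and override a couple of values by attribute access. As an aside, the same overrides could also be expressed as an OmegaConf dot-list and merged into the config, which mirrors how they would be passed on a command line. The sketch below assumes the config file has already been downloaded to the path used in the next cell.

```python
# Sketch: apply the batch-size overrides via an OmegaConf dot-list merge instead
# of attribute assignment. Path assumed to match the download in the next cell.
from omegaconf import OmegaConf

cfg = OmegaConf.load("WORK_DIR/configs/token_classification_config.yaml")
cfg = OmegaConf.merge(cfg, OmegaConf.from_dotlist([
    "model.train_ds.batch_size=8",
    "model.validation_ds.batch_size=8",
]))
print(cfg.model.train_ds.batch_size)  # -> 8
```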
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields must be specified by the user. Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = 'DDP'
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to what we did earlier.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Convert the Megatron-LM Weights to Nemo fileIf you prefer to use the Huggingface BERT models, please skip this section and refer to `Setting up a NeMo Experiment` setction to load a model from `nemo_nlp.modules.get_pretrained_lm_models_list()`NeMo Megatron BERT can [load from a pretrained model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html?highlight=nemo%20filerestore) using `.nemo` file. We can convert the Megatron-LM checkpoint to the `.nemo` file. Let's first download the pretrained model weights and vocabulary file.
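As an aside, `MEGATRON_CONFIG_MAP` (imported in the next cell) also describes other Megatron-LM checkpoints that could be converted the same way. The sketch below only assumes the dictionary layout used in the next cell, i.e. entries that carry 'checkpoint' and 'vocab' URLs.

```python
# Sketch: list the Megatron-LM checkpoints NeMo knows about and their download URLs.
# Assumes each entry is a dict with 'checkpoint'/'vocab' keys, as used below.
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP

for name, info in MEGATRON_CONFIG_MAP.items():
    if isinstance(info, dict) and info.get("checkpoint"):
        print(f"{name}: {info['checkpoint']}")
```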
###Code
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP
import pathlib
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
checkpoint_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['checkpoint']
vocab_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['vocab']
checkpoint_filename = pathlib.Path(checkpoint_url).name
vocab_filename = pathlib.Path(vocab_url).name
if not pathlib.Path(checkpoint_filename).exists():
print('downloading from checkpoint url', checkpoint_url)
!wget $checkpoint_url
if not pathlib.Path(vocab_filename).exists():
print('downloading from vocab url', vocab_url)
!wget $vocab_url
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
# Prepare the model parameters
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
MODEL_CONFIG = "megatron_bert_config.yaml"
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
config.model.num_layers = 24
config.model.hidden_size = 1024
config.model.ffn_hidden_size = 4096
config.model.num_attention_heads = 16
config.model.tokenizer.vocab_file = vocab_filename
config.model.tokenizer.type = 'BertWordPieceCase'
config.model.tensor_model_parallel_size = 1
config.model.data.data_prefix = ''
config.model.max_position_embeddings = 512
config.model.data.seq_length = 512
config.cfg = {}
config.cfg.cfg = config.model
with open('hparams.yaml', 'w') as f:
f.write(OmegaConf.to_yaml(config.cfg))
import os
PWD = os.getcwd()
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py')
!python -m torch.distributed.run --nproc_per_node=1 megatron_lm_ckpt_to_nemo.py --checkpoint_folder=$PWD --checkpoint_name=$checkpoint_filename --hparams_file=$PWD/hparams.yaml --nemo_file_path=$PWD/biomegatron.nemo --model_type=bert --tensor_model_parallel_size=1
###Output
_____no_output_____
###Markdown
Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields must be specified by the user. Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
os.makedirs(WORK_DIR, exist_ok=True)
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
To load the pretrained BERT LM model, we can either load it from the converted `.nemo` file as shown above or load it from a list of included model names. We can get the list of names with the following command ```python complete list of supported BERT-like modelsprint(nemo_nlp.modules.get_pretrained_lm_models_list())```We can change the `model.language_model` config to use it```python add the specified above model parameters to the configconfig.model.language_model.pretrained_model_name = MODEL_NAME```In this notebook, we will use the converted `.nemo` file as our LM model, which is BioMegatron, a [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) pre-trained on the [PubMed](https://pubmed.ncbi.nlm.nih.gov/) biomedical text corpus.
###Code
# add the specified above model parameters to the config
# config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
config.model.language_model.nemo_file = 'biomegatron.nemo'
config.model.language_model.pretrained_model_name = 'megatron-bert-cased'
config.model.tokenizer.vocab_file='vocab.txt'
config.model.tokenizer.tokenizer_model = 'BertWordPieceCase'
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions similar to what we did earlier.
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.half().evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
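After running them, an optional sanity check (not part of the original notebook) is to confirm that every sentence in the text file has exactly one IOB tag per whitespace-separated token in the labels file:
```python
# Sanity-check sketch: each line of labels_train.txt should contain exactly as many
# IOB tags as there are whitespace-separated tokens on the matching line of text_train.txt.
import os

ner_dir = os.path.join(DATA_DIR, "NER")  # DATA_DIR as defined earlier in this notebook

with open(os.path.join(ner_dir, "text_train.txt")) as f_text, \
     open(os.path.join(ner_dir, "labels_train.txt")) as f_labels:
    for i, (sent, tags) in enumerate(zip(f_text, f_labels)):
        n_tokens, n_tags = len(sent.split()), len(tags.split())
        assert n_tokens == n_tags, f"line {i}: {n_tokens} tokens vs {n_tags} tags"

print("text and label files are aligned")
```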
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Convert the Megatron-LM Weights to Nemo fileIf you prefer to use the Huggingface BERT models, please skip this section and refer to `Setting up a NeMo Experiment` section to load a model from `nemo_nlp.modules.get_pretrained_lm_models_list()`NeMo Megatron BERT can [load from a pretrained model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html?highlight=nemo%20filerestore) using `.nemo` file. We can convert the Megatron-LM checkpoint to the `.nemo` file. Let's first download the pretrained model weights and vocabulary file.
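Before moving on to the checkpoint conversion in the next cell, here is a small illustration (not from the original tutorial) of how the IOB tags line up with the tokens of the first sentence; it simply zips tokens with tags and groups B/I runs back into the tagged mention:
```python
# Illustration only: pair tokens with their IOB tags and recover the tagged mention.
tokens = ("Identification of APC2 , a homologue of the adenomatous "
          "polyposis coli tumour suppressor .").split()
tags = "O O O O O O O O B I I I O O".split()
assert len(tokens) == len(tags)

entities, current = [], []
for token, tag in zip(tokens, tags):
    if tag == "B":                # a new entity starts here
        current = [token]
        entities.append(current)
    elif tag == "I" and current:  # continuation of the current entity
        current.append(token)
    else:                         # outside any entity
        current = []

print([" ".join(entity) for entity in entities])
# -> ['adenomatous polyposis coli tumour']
```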
###Code
from nemo.collections.nlp.modules.common.megatron.megatron_utils import MEGATRON_CONFIG_MAP
import pathlib
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
checkpoint_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['checkpoint']
vocab_url = MEGATRON_CONFIG_MAP[PRETRAINED_BERT_MODEL]['vocab']
checkpoint_filename = pathlib.Path(checkpoint_url).name
vocab_filename = pathlib.Path(vocab_url).name
if not pathlib.Path(checkpoint_filename).exists():
print('downloading from checkpoint url', checkpoint_url)
!wget $checkpoint_url
if not pathlib.Path(vocab_filename).exists():
print('downloading from vocab url', vocab_url)
!wget $vocab_url
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
# Prepare the model parameters
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
MODEL_CONFIG = "megatron_bert_config.yaml"
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
config.model.num_layers = 24
config.model.hidden_size = 1024
config.model.ffn_hidden_size = 4096
config.model.num_attention_heads = 16
config.model.tokenizer.vocab_file = vocab_filename
config.model.tokenizer.type = 'BertWordPieceCase'
config.model.tensor_model_parallel_size = 1
config.model.data.data_prefix = ''
config.model.max_position_embeddings = 512
config.model.data.seq_length = 512
config.cfg = {}
config.cfg.cfg = config.model
with open('hparams.yaml', 'w') as f:
f.write(OmegaConf.to_yaml(config.cfg))
import os
PWD = os.getcwd()
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/megatron_lm_ckpt_to_nemo.py')
!python -m torch.distributed.run --nproc_per_node=1 megatron_lm_ckpt_to_nemo.py --checkpoint_folder=$PWD --checkpoint_name=$checkpoint_filename --hparams_file=$PWD/hparams.yaml --nemo_file_path=$PWD/biomegatron.nemo --model_type=bert --tensor_model_parallel_size=1
###Output
_____no_output_____
###Markdown
Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
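The cell above edits the loaded config by assigning attributes directly; an equivalent pattern (shown here only as an optional sketch) is to merge a dot-list of overrides, which is convenient when the overrides come from a script argument or the command line:
```python
# Optional sketch: the same batch-size overrides expressed as an OmegaConf dot-list merge.
from omegaconf import OmegaConf

overrides = OmegaConf.from_dotlist([
    "model.train_ds.batch_size=8",
    "model.validation_ds.batch_size=8",
])
config = OmegaConf.merge(config, overrides)
print(OmegaConf.to_yaml(config.model.train_ds))
```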
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
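As an optional pre-flight check (not in the original notebook), you can also confirm that the files the default config expects are actually present under that directory; the file names below are the defaults produced earlier by `import_from_iob_format.py`:
```python
# Pre-flight sketch: verify the files the default dataset config expects are in place.
import os

data_dir = os.path.join(DATA_DIR, "NER")
expected = ["text_train.txt", "labels_train.txt", "text_dev.txt", "labels_dev.txt"]

missing = [name for name in expected if not os.path.isfile(os.path.join(data_dir, name))]
if missing:
    raise FileNotFoundError(f"missing files in {data_dir}: {missing}")
print("all expected NER files found in", data_dir)
```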
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
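If you only want to smoke-test the pipeline end to end, one optional tweak (not required by the tutorial) is to cap the number of training epochs before the Trainer below is built:
```python
# Optional: cap training length for a quick smoke test; remove this override
# (or restore the value from the config file) for a full fine-tuning run.
config.trainer.max_epochs = 1
print("trainer.max_epochs =", config.trainer.max_epochs)
```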
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
os.makedirs(WORK_DIR, exist_ok=True)
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
To load the pretrained BERT LM model, we can either load it from the converted `.nemo` file as shown above or load it from a list of included model names. We can get the list of names with the following command ```python complete list of supported BERT-like modelsprint(nemo_nlp.modules.get_pretrained_lm_models_list())```We can change the `model.language_model` config to use it```python add the specified above model parameters to the configconfig.model.language_model.pretrained_model_name = MODEL_NAME```In this notebook, we will use the converted `.nemo` file as our LM model, which is BioMegatron, a [Megatron-LM BERT](https://arxiv.org/abs/1909.08053) pre-trained on the [PubMed](https://pubmed.ncbi.nlm.nih.gov/) biomedical text corpus.
###Code
# add the specified above model parameters to the config
# config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
config.model.language_model.nemo_file = 'biomegatron.nemo'
config.model.language_model.pretrained_model_name = 'megatron-bert-cased'
config.model.tokenizer.vocab_file='vocab.txt'
config.model.tokenizer.tokenizer_model = 'BertWordPieceCase'
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can run inference and generate predictions, similar to the way we did it earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.half().evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
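Beyond file-based evaluation, the fine-tuned model can also be queried on raw sentences. The sketch below assumes the `add_predictions` helper exposed by NeMo's `TokenClassificationModel`; if your NeMo version does not provide it, fall back to `evaluate_from_file` as above.
```python
# Sketch: tag a couple of raw sentences with the fine-tuned model.
# `add_predictions` is assumed to exist on TokenClassificationModel in this NeMo version.
queries = [
    "Women with a family history of breast cancer are at higher risk .",
    "Ataxia - telangiectasia is a rare autosomal recessive disorder .",
]
results = model_ner.add_predictions(queries, batch_size=2)
for query, result in zip(queries, results):
    print(query)
    print(result)
```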
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*.Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .```In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer```For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
# If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:
! head -1 $DATA_DIR/NCBI_corpus_testing.txt
###Output
_____no_output_____
###Markdown
We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
NER_DATA_DIR = f'{DATA_DIR}/NER'
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', NER_DATA_DIR)
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', NER_DATA_DIR)
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv $NER_DATA_DIR/devel.tsv $NER_DATA_DIR/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/train.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/dev.tsv
! python import_from_iob_format.py --data_file=$NER_DATA_DIR/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
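If your files do not use the default names, the dataset sections also let you point at them explicitly. The keys below follow the layout of `token_classification_config.yaml`; verify them against the config printed earlier in this notebook before relying on this optional sketch.
```python
# Optional sketch: besides data_dir, the train/validation sections can name their
# text and label files explicitly (shown here with the default file names).
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
config.model.train_ds.text_file = 'text_train.txt'
config.model.train_ds.labels_file = 'labels_train.txt'
config.model.validation_ds.text_file = 'text_dev.txt'
config.model.validation_ds.labels_file = 'labels_dev.txt'
```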
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can run inference and generate predictions, similar to the way we did it earlier
###Code
# let's first create a subset of our dev data
! head -n 100 $NER_DATA_DIR/text_dev.txt > $NER_DATA_DIR/sample_text_dev.txt
! head -n 100 $NER_DATA_DIR/labels_dev.txt > $NER_DATA_DIR/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(NER_DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(NER_DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=False,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____
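When you need the predictions as structured spans rather than a confusion matrix, a small helper like the one below (an illustration only, independent of NeMo) groups B-/I- tags back into (start, end, label) token spans; it also accepts plain B/I tags as used in the example above.
```python
# Illustration: convert a token-level IOB tag sequence into (start, end, label) spans,
# where start/end are token indices and end is exclusive.
def iob_to_spans(tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        prefix, _, name = tag.partition('-')
        name = name or 'Entity'           # plain "B"/"I" tags get a generic label
        if prefix == 'B':
            if start is not None:
                spans.append((start, i, label))
            start, label = i, name
        elif prefix == 'I' and start is not None and name == label:
            continue                      # entity keeps growing
        else:
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:
        spans.append((start, len(tags), label))
    return spans

print(iob_to_spans(['O', 'B-Disease', 'I-Disease', 'O', 'B-Disease']))
# -> [(1, 3, 'Disease'), (4, 5, 'Disease')]
```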
###Markdown
In this tutorial, we are going to describe how to finetune BioMegatron - a [BERT](https://arxiv.org/abs/1810.04805)-like [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf) model pre-trained on large biomedical text corpus ([PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and full-text commercial use collection) - on the [NCBI Disease Dataset](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/) for Named Entity Recognition.The model size of Megatron-LM can be larger than BERT, up to multi-billion parameters, compared to 345 million parameters of BERT-large.There are some alternatives of BioMegatron, most notably [BioBERT](https://arxiv.org/abs/1901.08746). Compared to BioBERT BioMegatron is larger by model size and pre-trained on larger text corpus.A more general tutorial of using BERT-based models, including Megatron-LM, for downstream natural language processing tasks can be found [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/01_Pretrained_Language_Models_for_Downstream_Tasks.ipynb). Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For instance, **given sentences from medical abstracts, what diseases are mentioned?**In this case, our data input is sentences from the abstracts, and our labels are the precise locations of the named disease entities. Take a look at the information provided for the dataset.For more details and general examples on Named Entity Recognition, please refer to the [Token Classification and Named Entity Recognition tutorial notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb). DatasetThe [NCBI-disease corpus](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) is a set of 793 PubMed abstracts, annotated by 14 annotators. The annotations take the form of HTML-style tags inserted into the abstract text using the clearly defined rules. The annotations identify named diseases, and can be used to fine-tune a language model to identify disease mentions in future abstracts, *whether those diseases were part of the original training set or not*. Here's an example of what an annotated abstract from the corpus looks like:```html10021369 Identification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor . The adenomatous polyposis coli ( APC ) tumour-suppressor protein controls the Wnt signalling pathway by forming a complex with glycogen synthase kinase 3beta ( GSK-3beta ) , axin / conductin and betacatenin . Complex formation induces the rapid degradation of betacatenin . In colon carcinoma cells , loss of APC leads to the accumulation of betacatenin in the nucleus , where it binds to and activates the Tcf-4 transcription factor ( reviewed in [ 1 ] [ 2 ] ) . Here , we report the identification and genomic structure of APC homologues . Mammalian APC2 , which closely resembles APC in overall domain structure , was functionally analyzed and shown to contain two SAMP domains , both of which are required for binding to conductin . Like APC , APC2 regulates the formation of active betacatenin-Tcf complexes , as demonstrated using transient transcriptional activation assays in APC - / - colon carcinoma cells . Human APC2 maps to chromosome 19p13 . 3 . 
APC and APC2 may therefore have comparable functions in development and cancer .``` In this example, we see the following tags within the abstract:```htmladenomatous polyposis coli tumouradenomatous polyposis coli ( APC ) tumourcolon carcinomacolon carcinomacancer``` For our purposes, we will consider any identified category (such as "Modifier", "Specific Disease", and a few others) to generally be a "disease".Let's download the dataset.
###Code
DATA_DIR = "DATA_DIR"
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(os.path.join(DATA_DIR, 'NER'), exist_ok=True)
print('Downloading NCBI data...')
wget.download('https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip', DATA_DIR)
! unzip -o {DATA_DIR}/NCBI_corpus.zip -d {DATA_DIR}
###Output
_____no_output_____
###Markdown
If you want to see more examples, you can explore the text of the corpus using the file browser to the left, or open files directly, for example typing a command like the following in a code-cell:! head -1 $DATA_DIR/NCBI_corpus_testing.txt We have two datasets derived from this corpus: a text classification dataset and a named entity recognition (NER) dataset. The text classification dataset labels the abstracts among three broad disease groupings. We'll use this simple split to demonstrate the NLP text classification task. The NER dataset labels individual words as diseases. This dataset will be used for the NLP NER task. Pre-process datasetA pre-processed NCBI-disease dataset for NER can be found [here](https://github.com/spyysalo/ncbi-disease/tree/master/conll) or [here](https://github.com/dmis-lab/biobertdatasets).We download the files under {DATA_DIR/NER} directory.
###Code
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/train.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/devel.tsv', os.path.join(DATA_DIR, 'NER'))
wget.download('https://raw.githubusercontent.com/spyysalo/ncbi-disease/master/conll/test.tsv', os.path.join(DATA_DIR, 'NER'))
NER_DATA_DIR = 'DATA_DIR/NER'
!ls -lh $NER_DATA_DIR
###Output
_____no_output_____
###Markdown
Convert these to a format that is compatible with [NeMo Token Classification module](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py), using the [conversion script](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py).
###Code
! mv DATA_DIR/NER/devel.tsv DATA_DIR/NER/dev.tsv
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/data/import_from_iob_format.py')
! python import_from_iob_format.py --data_file=DATA_DIR/NER/train.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/dev.tsv
! python import_from_iob_format.py --data_file=DATA_DIR/NER/test.tsv
###Output
_____no_output_____
###Markdown
The NER task requires two files: the text sentences, and the labels. Run the next two cells to see a sample of the two files.
###Code
!head $NER_DATA_DIR/text_train.txt
!head $NER_DATA_DIR/labels_train.txt
###Output
_____no_output_____
###Markdown
IOB TaggingWe can see that the abstract has been broken into sentences. Each sentence is then further parsed into words with labels that correspond to the original HTML-style tags in the corpus. The sentences and labels in the NER dataset map to each other with _inside, outside, beginning (IOB)_ tagging. Anything separated by white space is a word, including punctuation. For the first sentence we have the following mapping:```textIdentification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor .O O O O O O O O B I I I O O ```Recall the original corpus tags:```htmlIdentification of APC2, a homologue of the adenomatous polyposis coli tumour suppressor .```The beginning word of the tagged text, "adenomatous", is now IOB-tagged with a B (beginning) tag, the other parts of the disease, "polyposis coli tumour" tagged with I (inside) tags, and everything else tagged as O (outside). Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
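Before moving on to training, it can help to sanity-check the converted label files. Below is a minimal sketch (not part of the original tutorial) that counts IOB tag frequencies, assuming the `labels_train.txt` file produced by the conversion step above:
```python
# Count IOB tag frequencies in the converted training labels
# (path assumes the DATA_DIR/NER layout used earlier in this tutorial)
from collections import Counter

with open('DATA_DIR/NER/labels_train.txt') as f:
    tag_counts = Counter(tag for line in f for tag in line.split())
print(tag_counts)  # expect mostly 'O' tags, with smaller counts of 'B' and 'I'
```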
###Code
WORK_DIR = "WORK_DIR"
os.makedirs(WORK_DIR, exist_ok=True)
MODEL_CONFIG = "token_classification_config.yaml"
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
# Note: these are small batch-sizes - increase as appropriate to available GPU capacity
config.model.train_ds.batch_size=8
config.model.validation_ds.batch_size=8
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
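Once the path is set in the next cell, a quick optional check is to print just the dataset section of the config, using the same OmegaConf API as above, to confirm `data_dir` is no longer `???`:
```python
# Print only the dataset portion of the config to verify data_dir has been filled in
print(OmegaConf.to_yaml(config.model.dataset))
```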
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = os.path.join(DATA_DIR, 'NER')
# if you want to decrease the size of your datasets, uncomment the lines below:
# NUM_SAMPLES = 1000
# config.model.train_ds.num_samples = NUM_SAMPLES
# config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# for PyTorch Native AMP set precision=16
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# remove distributed training flags
config.trainer.accelerator = 'DDP'
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# complete list of supported BERT-like models
print(nemo_nlp.modules.get_pretrained_lm_models_list())
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "biomegatron-bert-345m-cased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_ner = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.If you're not using Colab, refer to [https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks](https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks) if you're facing issues with running the cell below.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_ner)
###Output
_____no_output_____
###Markdown
InferenceTo see how the model performs, we can generate predictions in the same way we did earlier
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/NER/text_dev.txt > {DATA_DIR}/NER/sample_text_dev.txt
! head -n 100 {DATA_DIR}/NER/labels_dev.txt > {DATA_DIR}/NER/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_ner.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'NER', 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'NER', 'sample_labels_dev.txt'),
output_dir=exp_dir,
add_confusion_matrix=True,
normalize_confusion_matrix=True,
batch_size=1
)
# Please check matplotlib version if encountering any error plotting confusion matrix:
# https://stackoverflow.com/questions/63212347/importerror-cannot-import-name-png-from-matplotlib
###Output
_____no_output_____ |
notebooks/Example 6 - Target Volatility Builder.ipynb | ###Markdown
* Please set the `DB_URI` environment variable to point to the database
###Code
%matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PyFin.api import *
from alphamind.api import *
from alphamind.strategy.strategy import Strategy, RunningSetting
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
1. Single Day Analysis-----------------------
###Code
ref_date = '2018-01-08'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('custom', ['zz800'])
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EPS', codes, 906, industry='sw', risk_model='day')
all_styles = risk_styles + industry_styles + ['COUNTRY']
risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
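# The next line assembles the full security covariance matrix from the factor model:
# exposures @ factor covariance @ exposures.T plus diagonal idiosyncratic variance
# (the /10000 factors rescale the risk model units)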
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
sec_cov_df = pd.DataFrame(sec_cov, index=codes, columns=codes)
sec_cov_df.iloc[:5, :5]
###Output
_____no_output_____
###Markdown
Portfolio Construction* using the `EPS` factor as the alpha factor;* short selling is forbidden;* the volatility target for the active weight is set at a 2.5% annualized level.
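For reference, the problem handed to `target_vol_builder` in the next cell can be sketched as the following optimisation (notation added here for clarity: $r$ = `er`, $b$ = `bm`, $\Sigma$ = `sec_cov`, $\sigma^*$ = `target_vol`):
$$\max_{w}\; w^\top r \quad \text{s.t.}\quad \mathbf{1}^\top w = \mathbf{1}^\top b,\qquad 0 \le w_i \le b_i + 0.01,\qquad \sqrt{(w-b)^\top \Sigma\,(w-b)} \le \sigma^*.$$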
###Code
er = factor['EPS'].values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025
status, p_er, p_weight = \
target_vol_builder(er, sec_cov, bm, lbound, ubound, cons_mat, risk_targets, target_vol, target_vol)
# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
###Output
_____no_output_____
###Markdown
2. Portfolio Construction: 2016 ~ 2018-------------------------------
###Code
"""
Back test parameter settings
"""
start_date = '2016-01-01'
end_date = '2018-02-08'
freq = '10b'
neutralized_risk = industry_styles
industry_name = 'sw_adj'
industry_level = 1
risk_model = 'short'
batch = 0
horizon = map_freq(freq)
universe = Universe("custom", ['zz800'])
data_source = os.environ['DB_URI']
benchmark_code = 906
target_vol = 0.05
weights_bandwidth = 0.02
"""
Factor Model
"""
alpha_factors = {'f01': CSRank(LAST('EPS'))}
weights = dict(f01=1.)
alpha_model = ConstLinearModel(features=alpha_factors, weights=weights)
data_meta = DataMeta(freq=freq,
universe=universe,
batch=batch,
neutralized_risk=neutralized_risk,
risk_model='short',
pre_process=[winsorize_normal, standardize],
post_process=[standardize],
warm_start=0,
data_source=data_source)
"""
Constraints settings
"""
constraint_risk = ['SIZE', 'SIZENL', 'BETA'] + industry_names
total_risk_names = constraint_risk + ['benchmark', 'total']
b_type = []
l_val = []
u_val = []
previous_pos = pd.DataFrame()
rets = []
turn_overs = []
leverags = []
for name in total_risk_names:
if name == 'benchmark':
b_type.append(BoundaryType.RELATIVE)
l_val.append(0.8)
u_val.append(1.0)
else:
b_type.append(BoundaryType.ABSOLUTE)
l_val.append(0.0)
u_val.append(0.0)
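# Bounds built above: the 'benchmark' exposure is kept between 80% and 100% of the benchmark
# (relative bound), while the style factors use absolute bounds of zero, i.e. no active exposure.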
bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)
"""
Running Settings
"""
running_setting = RunningSetting(universe,
start_date,
end_date,
freq,
benchmark=benchmark_code,
weights_bandwidth=weights_bandwidth,
rebalance_method='tv',
bounds=bounds,
target_vol=target_vol)
"""
Strategy run
"""
strategy = Strategy(alpha_model, data_meta, running_setting)
ret_df, positions = strategy.run()
ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),
title='Fixed freq rebalanced with target vol \
at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),
secondary_y='turn_over')
###Output
_____no_output_____
###Markdown
* Please set the `DB_URI` environment variable to point to the database
###Code
%matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PyFin.api import *
from alphamind.api import *
from alphamind.strategy.strategy import Strategy, RunningSetting
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
1. Single Day Analysis-----------------------
###Code
ref_date = '2018-01-08'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('zz800')
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EPS', codes, 906, industry='sw', risk_model='day')
all_styles = risk_styles + industry_styles + ['COUNTRY']
risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
###Output
_____no_output_____
###Markdown
Portfolio Construction* using the `EPS` factor as the alpha factor;* short selling is forbidden;* the volatility target for the active weight is set at a 2.5% annualized level.
###Code
er = factor['EPS'].values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025
risk_model = dict(cov=None, factor_cov=risk_cov/10000, factor_loading=risk_exposure, idsync=special_risk ** 2 / 10000.)
status, p_er, p_weight = \
target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000
# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
###Output
total weight is 1.0000
portfolio activate weight forecasting vol is 0.0250
portfolio er: 2.2232 comparing with benchmark er: 1.2359
###Markdown
2. Portfolio Construction: 2016 ~ 2018-------------------------------
###Code
"""
Back test parameter settings
"""
start_date = '2016-01-01'
end_date = '2018-02-08'
freq = '10b'
neutralized_risk = industry_styles
industry_name = 'sw_adj'
industry_level = 1
risk_model = 'short'
batch = 0
horizon = map_freq(freq)
universe = Universe('zz800')
data_source = os.environ['DB_URI']
benchmark_code = 906
target_vol = 0.05
weights_bandwidth = 0.02
"""
Factor Model
"""
alpha_factors = {'f01': CSRank(LAST('EPS'))}
weights = dict(f01=1.)
alpha_model = ConstLinearModel(features=alpha_factors, weights=weights)
data_meta = DataMeta(freq=freq,
universe=universe,
batch=batch,
neutralized_risk=neutralized_risk,
risk_model='short',
pre_process=[winsorize_normal, standardize],
post_process=[standardize],
warm_start=0,
data_source=data_source)
"""
Constraints settings
"""
constraint_risk = ['SIZE', 'SIZENL', 'BETA']
total_risk_names = constraint_risk + ['benchmark', 'total']
b_type = []
l_val = []
u_val = []
previous_pos = pd.DataFrame()
rets = []
turn_overs = []
leverags = []
for name in total_risk_names:
if name == 'benchmark':
b_type.append(BoundaryType.RELATIVE)
l_val.append(0.8)
u_val.append(1.0)
else:
b_type.append(BoundaryType.ABSOLUTE)
l_val.append(0.0)
u_val.append(0.0)
bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)
"""
Running Settings
"""
running_setting = RunningSetting(weights_bandwidth=weights_bandwidth,
rebalance_method='tv',
bounds=bounds,
target_vol=target_vol)
"""
Strategy run
"""
strategy = Strategy(alpha_model,
data_meta,
universe=universe,
start_date=start_date,
end_date=end_date,
freq=freq,
benchmark=benchmark_code)
strategy.prepare_backtest_data()
ret_df, positions = strategy.run(running_setting)
ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),
title='Fixed freq rebalanced with target vol \
at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),
secondary_y='turn_over')
###Output
_____no_output_____
###Markdown
* Please set the `DB_URI` environment variable to point to the database
###Code
%matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PyFin.api import *
from alphamind.api import *
from alphamind.strategy.strategy import Strategy, RunningSetting
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
1. Single Day Analysis-----------------------
###Code
ref_date = '2020-01-02'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('hs300')
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EMA5D', codes, 300, industry='sw', risk_model='short')
all_styles = risk_styles + industry_styles + ['COUNTRY']
risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
###Output
_____no_output_____
###Markdown
Portfolio Construction* using the `EMA5D` factor as the alpha factor;* short selling is forbidden;* the volatility target for the active weight is set at a 2.5% annualized level.
###Code
er = factor['EMA5D'].fillna(factor["EMA5D"].median()).values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025
risk_model = dict(cov=None, factor_cov=risk_cov/10000, factor_loading=risk_exposure, idsync=special_risk ** 2 / 10000.)
status, p_er, p_weight = \
target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000
# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
###Output
total weight is 1.0007
portfolio activate weight forecasting vol is 0.0250
portfolio er: 124.8849 comparing with benchmark er: 81.8985
###Markdown
2. Portfolio Construction: 2020-------------------------------
###Code
"""
Back test parameter settings
"""
start_date = '2020-01-01'
end_date = '2020-02-21'
freq = '10b'
neutralized_risk = industry_styles
industry_name = 'sw'
industry_level = 1
risk_model = 'short'
batch = 0
horizon = map_freq(freq)
universe = Universe('hs300')
data_source = os.environ['DB_URI']
benchmark_code = 300
target_vol = 0.05
weights_bandwidth = 0.02
"""
Factor Model
"""
alpha_factors = {'f01': CSRank(LAST('EMA5D'))}
weights = dict(f01=1.)
alpha_model = ConstLinearModel(features=alpha_factors, weights=weights)
data_meta = DataMeta(freq=freq,
universe=universe,
batch=batch,
neutralized_risk=neutralized_risk,
risk_model='short',
pre_process=[winsorize_normal, standardize],
post_process=[standardize],
warm_start=0,
data_source=data_source)
"""
Constraints settings
"""
constraint_risk = ['SIZE', 'SIZENL', 'BETA']
total_risk_names = constraint_risk + ['benchmark', 'total']
b_type = []
l_val = []
u_val = []
previous_pos = pd.DataFrame()
rets = []
turn_overs = []
leverags = []
for name in total_risk_names:
if name == 'benchmark':
b_type.append(BoundaryType.RELATIVE)
l_val.append(0.8)
u_val.append(1.0)
else:
b_type.append(BoundaryType.ABSOLUTE)
l_val.append(0.0)
u_val.append(0.0)
bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)
"""
Running Settings
"""
running_setting = RunningSetting(weights_bandwidth=weights_bandwidth,
rebalance_method='tv',
bounds=bounds,
target_vol=target_vol)
"""
Strategy run
"""
strategy = Strategy(alpha_model,
data_meta,
universe=universe,
start_date=start_date,
end_date=end_date,
freq=freq,
benchmark=benchmark_code)
strategy.prepare_backtest_data()
ret_df, positions = strategy.run(running_setting)
ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),
title='Fixed freq rebalanced with target vol \
at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),
secondary_y='turn_over')
###Output
_____no_output_____
cochlear_implant/speech_enhancement_inference.ipynb | ###Markdown
Copyright 2021 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.
###Code
import os
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample
import tensorflow.compat.v1 as tf
from tensorflow.io import gfile
from colabtools import sound
# @title Helper class for separation model inference.
class SeparationModel(object):
"""Tensorflow audio separation model."""
def __init__(self,
checkpoint_path,
metagraph_path):
self.graph = tf.Graph()
self.sess = tf.Session(graph=self.graph)
with self.graph.as_default():
new_saver = tf.train.import_meta_graph(metagraph_path)
new_saver.restore(self.sess, checkpoint_path)
self.input_placeholder = self.graph.get_tensor_by_name(
'input_audio/receiver_audio:0')
self.output_tensor = self.graph.get_tensor_by_name('denoised_waveforms:0')
def separate(self, mixture_waveform):
"""Separates a mixture waveform into sources.
Args:
mixture_waveform: numpy.ndarray of shape (num_mics, num_samples).
Returns:
      numpy.ndarray of separated waveforms of shape (num_sources, num_samples).
"""
mixture_waveform_input = np.expand_dims(mixture_waveform, 0)
feed_dict = {self.input_placeholder: mixture_waveform_input}
separated_waveforms = self.sess.run(self.output_tensor, feed_dict=feed_dict)
return separated_waveforms[0]
###Output
_____no_output_____
###Markdown
Manually download the pre-trained speech enhancement model files using [gsutil](https://cloud.google.com/storage/docs/gsutil) with:`gsutil cp -r gs://gresearch/cochlear_implant/speech_enhancement_model .`
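Before loading, you can optionally confirm the files were copied; a minimal check, assuming the copy above landed in the same directory used as `MODEL_PATH` below:
```python
# List the model directory to confirm the checkpoint and inference.meta files are present
from tensorflow.io import gfile
print(gfile.listdir('/path/to/model'))
```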
###Code
# @title Load speech enhancement model.
MODEL_PATH = '/path/to/model'
checkpoint = os.path.join(MODEL_PATH, 'checkpoint')
metagraph = os.path.join(MODEL_PATH, 'inference.meta')
model = SeparationModel(checkpoint, metagraph)
# @title Get some wav paths.
PATH_AUDIO = '/path/to/audio'
PATH_ENHANCED = PATH_AUDIO + '_enhanced'
audio_clip_matcher = '*.wav' #@param
wavs = gfile.Glob(os.path.join(PATH_AUDIO, audio_clip_matcher))
#@title File read/write functions.
def write_wav(filename, waveform, sample_rate=16000):
"""Write a audio waveform (float numpy array) as .wav file."""
gfile.MakeDirs(os.path.dirname(filename))
with gfile.GFile(filename, 'w') as fh:
wavfile.write(
fh, sample_rate,
np.round(np.clip(waveform * 2**15, -32768, 32767)).astype(np.int16))
def read_wav(wav_path, sample_rate=16000, channel=None):
"""Read a wav file as numpy array.
Args:
wav_path: String, path to .wav file.
sample_rate: Int, sample rate for audio to be converted to.
channel: Int, option to select a particular channel for stereo audio.
Returns:
Audio as float numpy array.
"""
with gfile.Open(wav_path, 'rb') as f:
sr_read, x = wavfile.read(f)
x = x.astype(np.float32) / (2**15)
if sr_read != sample_rate:
x = resample(x, int(round((float(sample_rate) / sr_read) * len(x))))
if x.ndim > 1 and channel is not None:
return x[:, channel]
return x
# @title Enhance some wavs and play audio.
for wav in wavs:
print('wav path:', wav)
print('Input')
  sr = 16000
  receiver_audio = read_wav(wav, sample_rate=sr)
sound.PlaySound(receiver_audio, sr)
  enhanced = model.separate(receiver_audio[np.newaxis])
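  # model.separate returns an array of shape (num_sources, num_samples);
  # source 0 is used as the speech estimate and source 1 as the noise estimate below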
output_path = os.path.join(PATH_ENHANCED, os.path.basename(wav))
gfile.MakeDirs(os.path.dirname(output_path))
write_wav(output_path, enhanced[0], sr)
print('Speech estimate')
sound.PlaySound(enhanced[0], sr)
print('Noise estimate')
sound.PlaySound(enhanced[1], sr)
###Output
_____no_output_____
###Markdown
Copyright 2021 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.
###Code
import os
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample
import tensorflow.compat.v1 as tf
from tensorflow.io import gfile
from IPython.display import Audio, display
from ipywidgets import widgets
# @title Mount Google Drive
from google.colab import drive
ROOT_DIR = '/content/gdrive'
drive.mount(ROOT_DIR, force_remount=True)
# @title Helper function for playing audio.
def PlaySound(samples, sample_rate=16000):
out = widgets.Output()
with out:
display(Audio(samples, rate=sample_rate))
display(out)
# @title Helper class for separation model inference.
class SeparationModel(object):
"""Tensorflow audio separation model."""
def __init__(self,
checkpoint_path,
metagraph_path):
self.graph = tf.Graph()
self.sess = tf.Session(graph=self.graph)
with self.graph.as_default():
new_saver = tf.train.import_meta_graph(metagraph_path)
new_saver.restore(self.sess, checkpoint_path)
self.input_placeholder = self.graph.get_tensor_by_name(
'input_audio/receiver_audio:0')
self.output_tensor = self.graph.get_tensor_by_name('denoised_waveforms:0')
def separate(self, mixture_waveform):
"""Separates a mixture waveform into sources.
Args:
mixture_waveform: numpy.ndarray of shape (num_mics, num_samples).
Returns:
      numpy.ndarray of separated waveforms of shape (num_sources, num_samples).
"""
mixture_waveform_input = np.expand_dims(mixture_waveform, 0)
feed_dict = {self.input_placeholder: mixture_waveform_input}
separated_waveforms = self.sess.run(self.output_tensor, feed_dict=feed_dict)
return separated_waveforms[0]
# @title Download the pre-trained speech enhancement model files.
%%shell
gsutil cp -r gs://gresearch/cochlear_implant/speech_enhancement_model /tmp/
# @title Load speech enhancement model.
MODEL_PATH = '/tmp/speech_enhancement_model'
checkpoint = os.path.join(MODEL_PATH, 'checkpoint')
metagraph = os.path.join(MODEL_PATH, 'inference.meta')
model = SeparationModel(checkpoint, metagraph)
# @title Get some wav paths.
PATH_AUDIO = 'gdrive/My Drive/cihack_audio' # E.g. /content/gdrive/My Drive/cihack_audio
PATH_ENHANCED = PATH_AUDIO + '_enhanced'
audio_clip_matcher = '*.wav' #@param
wavs = gfile.glob(os.path.join(PATH_AUDIO, audio_clip_matcher))
#@title File read/write functions.
def write_wav(filename, waveform, sample_rate=16000):
"""Write a audio waveform (float numpy array) as .wav file."""
wavfile.write(
filename, sample_rate,
np.round(np.clip(waveform * 2**15, -32768, 32767)).astype(np.int16))
def read_wav(wav_path, sample_rate=16000, channel=None):
"""Read a wav file as numpy array.
Args:
wav_path: String, path to .wav file.
sample_rate: Int, sample rate for audio to be converted to.
channel: Int, option to select a particular channel for stereo audio.
Returns:
Audio as float numpy array.
"""
sr_read, x = wavfile.read(wav_path)
x = x.astype(np.float32) / (2**15)
if sr_read != sample_rate:
x = resample(x, int(round((float(sample_rate) / sr_read) * len(x))))
if x.ndim > 1 and channel is not None:
return x[:, channel]
return x
# @title Enhance some wavs and play audio.
for wav in wavs:
print('wav path:', wav)
print('Input')
receiver_audio = read_wav(wav)
PlaySound(receiver_audio, 16000)
enhanced = model.separate(receiver_audio[np.newaxis])
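  # enhanced has shape (num_sources, num_samples): source 0 = speech estimate, source 1 = noise estimate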
output_path = os.path.join(PATH_ENHANCED, os.path.basename(wav))
gfile.makedirs(os.path.dirname(output_path))
write_wav(output_path, enhanced.T, 16000)
print('Speech estimate')
PlaySound(enhanced[0], 16000)
print('Noise estimate')
PlaySound(enhanced[1], 16000)
###Output
_____no_output_____ |
ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb | ###Markdown
Germany: LK Rhein-Pfalz-Kreis (Rheinland-Pfalz)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Rhein-Pfalz-Kreis", weeks=5);
overview(country="Germany", subregion="LK Rhein-Pfalz-Kreis");
compare_plot(country="Germany", subregion="LK Rhein-Pfalz-Kreis", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Rhein-Pfalz-Kreis")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Rhein-Pfalz-Kreis (Rheinland-Pfalz)* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Rhein-Pfalz-Kreis");
# load the data
cases, deaths, region_label = germany_get_region(landkreis="LK Rhein-Pfalz-Kreis")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Rhein-Pfalz-Kreis (Rheinland-Pfalz)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Rhein-Pfalz-Kreis", weeks=5);
overview(country="Germany", subregion="LK Rhein-Pfalz-Kreis");
compare_plot(country="Germany", subregion="LK Rhein-Pfalz-Kreis", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Rhein-Pfalz-Kreis")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Rhein-Pfalz-Kreis")
print(f'Population of country="Germany", subregion="LK Rhein-Pfalz-Kreis": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-LK-Rhein-Pfalz-Kreis.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
nbs/00_web_scraping.ipynb | ###Markdown
web scraping> API details.
###Code
#hide
from nbdev.showdoc import *
import sys
sys.path.append("..")
%load_ext autoreload
%autoreload 2
#export
# Web scraping
import sys
from time import sleep
from selenium import webdriver
import datetime
#Save data
import re
import pandas as pd
#Resources
import random
from fastcore.script import * # @Callparser
from swfd.resources import getInfo
###Output
_____no_output_____
###Markdown
getHtmlSfu(): Function that logs in to the ESA site through the Selenium driver and retrieves the HTML code containing the data
###Code
#export
def getHtmlSfu(currentdate,originaldate):
source_code=""
try:
op = webdriver.ChromeOptions()
op.add_argument('headless')
driver = webdriver.Chrome(options=op)
currentday = datetime.datetime.now()
driver.get("https://esc.cbk.waw.pl/products/api.php?parameter=f10_7&start_date="+str(originaldate.year)+"%2F"+str(originaldate.month)+"%2F"+str(originaldate.day)+"+00%3A00&end_date="+str(currentdate.year)+"%2F"+str(currentdate.month)+"%2F"+str(currentdate.day)+"+00%3A00&output_type=html")
sleep(random.uniform(8,10))
user = driver.find_element_by_name("callback_0")
password = driver.find_element_by_name("callback_1")
button = driver.find_element_by_name("callback_2")
credentials=getCredentials()
if(credentials[0] and credentials[1]):
user.send_keys(credentials[0])
password.send_keys(credentials[1])
button.click()
sleep(random.uniform(15,10))
elem = driver.find_element_by_xpath("//*")
source_code = elem.get_attribute("outerHTML")
else:
print("user or password not found in txt")
except:
e = sys.exc_info()[0]
print("Exception: ", e)
print("Make sure you have the internet and the selenium driver is in the sfw.")
finally:
driver.quit()
return source_code
###Output
_____no_output_____
###Markdown
getCredentials(): Function that obtains the user and password previously written in the settings.txt file for the ESA login
###Code
#export
def getCredentials():
credentials=[]
user=getInfo("user")
password=getInfo("password")
credentials.append(user)
credentials.append(password)
return credentials
#hide
#If the settings txt does not exist, this returns [[], []]
getCredentials()
###Output
_____no_output_____
###Markdown
getListDataSfu(html): Function that parses the HTML and extracts the data (date and sfu value)
###Code
#export
def getListDataSfu(html):
datalist=[]
oldsfu=re.findall('<tbody>(.*?)<td>1996-02-14',str(html),re.DOTALL)
dailysfu=re.findall('<tr>(.*?)</tr>',str(oldsfu),re.DOTALL)
for day in dailysfu:
if(re.findall("(.*?)00:00:00</td>",day)):
datedata=re.findall("<td>(.*?) 00:00:00</td>",day)
sfudata=re.findall("</td><td>(.*?)<",day)
datalist.append([datedata[0],sfudata[0]])
newsfu=re.findall('<td>1996-02-13 00:00:00</td><td>68</td></tr>(.*?)</tbody></table></div></body></html>',str(html),re.DOTALL)
dailysfu=re.findall('<tr>(.*?)</tr>',str(newsfu),re.DOTALL)
for day in dailysfu:
if(re.findall("(.*?)20:00:00</td>",day)):
datedata=re.findall("<td>(.*?) 20:00:00</td>",day)
sfudata=re.findall("</td><td>(.*?)<",day)
datalist.append([datedata[0],sfudata[0]])
return datalist[::-1]
###Output
_____no_output_____
###Markdown
fixDates(): Function that adds the missing dates to the data, from "yesterday" back to the last day obtained
###Code
#export
def fixDates(datosSfu):
alldatasfu=[]
    #Current day; the first record will always be today minus 1 day
csvdaybefore=datetime.datetime.combine(datetime.datetime.today(), datetime.time.min)
for record in datosSfu:
realdaybefore=csvdaybefore-datetime.timedelta(days=1)
csvdaybefore = datetime.datetime.strptime(record[0], '%Y-%m-%d')
        #If the date in the CSV record does not match the expected previous day, data for at least one day is missing
if not(realdaybefore == csvdaybefore):
while(not(realdaybefore == csvdaybefore)):
alldatasfu.append([realdaybefore.strftime('%Y-%m-%d'),"-1"])
realdaybefore=realdaybefore-datetime.timedelta(days=1)
alldatasfu.append(record)
return alldatasfu
#hide
#This test was written on 2021-05-23 (fixDates fills in dates relative to the current day)
fixDates([ ['2021-05-21',"15"] , ['2021-05-09',"34"], ['2021-05-07',"23" ]])
###Output
_____no_output_____
###Markdown
fixSfuValues(): Function that fixes the empty values (-1) by replacing them with the mean of the two closest non-empty values
###Code
#export
def fixSfuValues(datossfu):
meanvalue=-1
if datossfu[0][1]=='-1':
cont=1
while datossfu[0][1] == '-1' and cont<len(datossfu):
if datossfu[cont][1] != '-1':
datossfu[0][1]=datossfu[cont][1]
cont=cont+1
for i in range (1, len(datossfu)-1):
if(datossfu[i][1]=='-1'):
cont=i
while datossfu[i][1] == '-1' and cont<len(datossfu):
if datossfu[cont][1] != '-1':
datossfu[i][1]=str((float(datossfu[i-1][1])+float(datossfu[cont][1]))/2)
cont=cont+1
if(datossfu[len(datossfu)-1][1]=='-1'):
datossfu[len(datossfu)-1][1]=datossfu[len(datossfu)-2][1]
return datossfu
#hide
testfixvalues=fixSfuValues([["date",'-1'],["date",'5'],["date",'-1'],["date",'-1'],["date",'4'],["date",'-1']])
print(testfixvalues)
###Output
[['date', '5'], ['date', '5'], ['date', '4.5'], ['date', '4.25'], ['date', '4'], ['date', '4']]
###Markdown
fixValues(): Function that calls the functions above to create the missing dates with their approximated values
###Code
#export
def fixValues(datossfu):
fixdata=fixDates(datossfu)
fixdata=fixSfuValues(fixdata)
return fixdata
#hide
ex=fixValues([ ['2021-05-20',"-1"] , ['2021-05-18',"75"], ['2021-05-14',"55" ],['2021-05-11',"50" ],['2021-05-8',"130" ]])
ex
###Output
_____no_output_____
###Markdown
sfuScv(datalist): Function that saves the data (date and sfu value) to a csv file
###Code
#export
def sfuScv(datalist):
path=getInfo("csvdirectory")+"sfuData.csv"
df = pd.DataFrame(datalist, columns= ['Date', 'Sfu'])
df.to_csv(path, header = False, index = False)
#hide
sfuScv([ ['2021-05-11',"-1"] , ['2021-05-09',"75"], ['2021-05-08',"55" ],['2021-05-07',"50" ],['2021-05-06',"130" ]])
path=getInfo("csvdirectory")+"sfuData.csv"
df = pd.read_csv(path,header=None)
df
###Output
_____no_output_____
###Markdown
webScraping(): General function that retrieves the values from the ESA, processes the data and saves it to a csv for later use
###Code
#export
def webScraping():
datosSfu=[]
currentdate = datetime.datetime.now()
originaldate=datetime.datetime(1949, 1, 1)
sourcecode=getHtmlSfu(currentdate,originaldate)
pagelogin=re.findall("<title>OpenAM - (.*?)</title>",sourcecode)
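    # The ESA endpoint sits behind an OpenAM login; if the returned page title still matches
    # the login page, authentication did not succeed (handled below)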
if(sourcecode!="" and not(pagelogin) ):
datosSfu=getListDataSfu(sourcecode)
datosSfu=fixValues(datosSfu)
sfuScv(datosSfu)
print("Sfu updated")
elif(pagelogin):
if(pagelogin[0]=="Login"):
print("Please check that your username and password are entered correctly. ")
###Output
_____no_output_____
###Markdown
main(): Command-line entry point (via `@call_parse`) that runs the web scraping process
###Code
#export
@call_parse
def main():
webScraping()
###Output
_____no_output_____ |
python-data-structures/leetocde/3-sum.ipynb | ###Markdown
3SumGiven an integer array nums, return all the triplets [nums[i], nums[j], nums[k]] such that i != j, i != k, and j != k, and nums[i] + nums[j] + nums[k] == 0.Notice that the solution set must not contain duplicate triplets. Example 1:- Input: nums = [-1,0,1,2,-1,-4]- Output: [[-1,-1,2],[-1,0,1]] Example 2:- Input: nums = []- Output: [] Example 3:- Input: nums = [0]- Output: [] Solution Intuition 1. The very naive approach is to loop through all possible triplets, sum them, and if the result is 0, save the triplet. In this case the time complexity is O(n^3). 2. Another approach is to load all numbers into a hashmap. Then we only need to iterate over all possible pairs and look up the number -(nums[i] + nums[j]) in the hashmap, giving O(n^2) time complexity. 3. Another approach, implemented below, is to sort the array first and then, for each element, scan the rest of the array with two pointers (one from each end) to find pairs that complete a zero-sum triplet; this also runs in O(n^2) after the O(n log n) sort. Implementation
###Code
def three_sum(nums: list) -> list:
nums.sort()
ans = []
for i in range(len(nums)):
j, k = i+1, len(nums)-1
while j < k:
if nums[i] + nums[j] + nums[k] > 0:
k -= 1
elif nums[i] + nums[j] + nums[k] < 0:
j += 1
else:
l = [nums[i],nums[j],nums[k]]
if l not in ans:
ans.append(l)
j, k = j + 1, k - 1
return ans
assert len(three_sum([-1,0,1,2,-1,-4])) == 2
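# The intuition above also mentions a hashmap-based O(n^2) approach that is not
# implemented in this notebook. Below is a minimal sketch of that idea (my own
# addition, not part of the original solution): for every pair (i, j), look up
# the complement -(nums[i] + nums[j]) in a value -> last-index map and
# deduplicate triplets through a set of sorted tuples.
def three_sum_hashmap(nums: list) -> list:
    last_index = {v: i for i, v in enumerate(nums)}  # value -> last position it occurs at
    found = set()
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            complement = -(nums[i] + nums[j])
            k = last_index.get(complement)
            if k is not None and k > j:  # guarantees three distinct indices
                found.add(tuple(sorted((nums[i], nums[j], complement))))
    return [list(t) for t in found]

assert len(three_sum_hashmap([-1, 0, 1, 2, -1, -4])) == 2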
###Output
_____no_output_____ |
StockPricePrediction_fh21/StockPricePrediction_v5_TExpSmoothing.ipynb | ###Markdown
Objective* 20190822: * Given prices for the last N days, we do prediction for the next H days (i.e. days N+1 to N+H), where H is the forecast horizon * We use triple exponential smoothing to predict
###Code
%matplotlib inline
import math
import matplotlib
import multiprocessing
import numpy as np
import pandas as pd
import pickle
import time
from collections import defaultdict
from datetime import date, datetime, timedelta
from joblib import Parallel, delayed
from matplotlib import pyplot as plt
from pylab import rcParams
#### Input params ##################
stk_path = "./data/VTI_20130102_20181231.csv"
H = 21
train_size = 252*3 # Use 3 years of data as train set. Note there are about 252 trading days in a year
val_size = 252 # Use 1 year of data as validation set
# alpha - smoothing coeff
alphaMax = 0.999
alphaMin = 0.01
alphaStep = 0.01
# beta - trend coeff
betaMax = 0.999
betaMin = 0.01
betaStep = 0.01
# gamma - seasonality coeff
gammaMax = 0.99
gammaMin = 0.1
gammaStep = 0.1
L = 252 # seasonality period
# for plot display
daysBackward = 30
daysForward = 60
i_list = range(1008, 1008+84*5+42+1, 42) # we want to do a forecast on these days
fontsize = 14
ticklabelsize = 14
####################################
train_val_size = train_size + val_size # Size of train+validation set
print("No. of days in train+validation set = " + str(train_val_size))
print("We will start forecasting on day %d" % (train_val_size+1))
###Output
We will start forecasting on day 1009
###Markdown
Common functions
###Code
def get_mape(y_true, y_pred):
"""
Compute mean absolute percentage error (MAPE)
"""
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def get_mae(a, b):
"""
Compute mean absolute error, MAE = mean(|a_t - b_t|). a and b can be lists.
Returns a scalar.
"""
return np.mean(abs(np.array(a)-np.array(b)))
def get_rmse(a, b):
"""
Compute RMSE. a and b can be lists.
Returns a scalar.
"""
return math.sqrt(np.mean((np.array(a)-np.array(b))**2))
def initial_trend(series, L):
"""
Initial trend, b_1 = ( (y_L+1 - y_1)/L + (y_L+2 - y_2)/L + ... + (y_L+L - y_L)/L ) / L
"""
sum = 0.0
for i in range(L):
sum += float(series[i+L] - series[i]) / L
return sum / L
def initial_seasonal_components(series, L):
"""
Initial seasonal index,
I_t = ( y_t-A_1 + y_{t+L}-A_2 + ... + y_{t+(P-1)L}-A_P ) / P,
t = 1, 2, ..., L
Here P is the number of seasons we have in the series.
For example, for sales data, we have 2018 Q1, 2018 Q2, 2018 Q3, 2018 Q4 data. These 4 points represent 1 season.
A_1 is the mean of the values in the first season, and so on.
Returns the seasonal components of length L in a list
"""
seasonals = []
season_averages = []
n_seasons = int(len(series)/L)
# compute season averages
for j in range(n_seasons):
season_averages.append(sum(series[L*j:L*j+L])/float(L))
# compute initial values
for i in range(L):
sum_of_vals_over_avg = 0.0
for j in range(n_seasons):
sum_of_vals_over_avg += series[L*j+i]-season_averages[j]
seasonals.append(sum_of_vals_over_avg/n_seasons)
return seasonals
def triple_exponential_smoothing(series, L, H, alpha=0.3, beta=0.3, gamma=0.3, return_all=False):
"""
Overall smoothing: S_t = alpha*(y_t - I_{t-L}) + (1-alpha)(S_{t-1} + b_{t-1}})
Trend smoothing: b_t = beta*(S_t - S_{t-1}) + (1-beta)*b_{t-1}
Seasonal smoothing: I_t = gamma*(y_t - S_t) + (1-gamma)*I_{t-L}
Forecast: F_{t+m} = S_t + m*b_t + I_{t-L+m}, m >= 1
Note here m has to be < len(series)
result[len(series)] is the estimate of series[len(series)]
The length of result is len(series) + H, where H >= 1
"""
result = [0, series[0]]
smooth = series[0]
trend = initial_trend(series, L)
seasonals = initial_seasonal_components(series, L)
seasonals.append(seasonals[0]) # To make the seasonals elements align with series elements
for n in range(1, len(series)+H-1):
if n >= len(series): # we are forecasting
m = n - len(series) + 2
result.append(smooth + m*trend + seasonals[n+1])
else:
val = series[n]
last_smooth, smooth = smooth, alpha*(val-seasonals[n]) + (1-alpha)*(smooth+trend)
trend = beta * (smooth-last_smooth) + (1-beta)*trend
seasonals.append(gamma*(val-smooth) + (1-gamma)*seasonals[n])
result.append(smooth + trend + seasonals[n+1])
# e.g. result[2] uses series[1], seasonals[1], and seasonals[2]
# ie. result[2] is the estimate of series[2]
# e.g. result[len(series)] uses series[len(series)-1], seasonals[len(series)-1], and seasonals[len(series)]
# ie. result[len(series)] is the estimate of series[len(series)]
if return_all == True:
return result, seasonals
else:
return result[len(series):len(series)+H], seasonals
def get_error_metrics(series, train_size, L, H, alpha, beta, gamma):
"""
Given a series consisting of both train+validation, do predictions of forecast horizon H on the validation set,
at H/2 intervals.
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
L : period
H : forecast horizon
alpha : smoothing coeff
beta : trend coeff
gamma : seasonality coeff
Outputs
mean of rmse, mean of mape, mean of mae
"""
# Predict using single exponential smoothing, and compute error metrics also
rmse = [] # root mean square error
mape = [] # mean absolute percentage error
mae = [] # mean absolute error
preds_dict = {}
for i in range(train_size, len(series)-H+1, int(H/2)):
preds_list, seasonals = triple_exponential_smoothing(series[i-train_size:i], L, H, alpha, beta, gamma)
rmse.append(get_rmse(series[i:i+H], preds_list))
mape.append(get_mape(series[i:i+H], preds_list))
mae.append(get_mae(series[i:i+H], preds_list))
preds_dict[i] = preds_list
return np.mean(rmse), np.mean(mape), np.mean(mae), preds_dict
def hyperparam_tune_alpha_beta_gamma(series, train_size, L, H):
"""
Given a series, tune hyperparameter alpha, fit and predict
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
L : period
H : forecast horizon
Outputs
optimum hyperparameters, error metrics dataframe
"""
err_dict = defaultdict(list)
alpha = alphaMin
while alpha <= alphaMax:
beta = betaMin
while beta <= betaMax:
gamma = gammaMin
while gamma <= gammaMax:
rmse_mean, mape_mean, mae_mean, _ = get_error_metrics(series, train_size, L, H, alpha, beta, gamma)
# Append alpha and beta
err_dict['alpha'].append(alpha)
err_dict['beta'].append(beta)
err_dict['gamma'].append(gamma)
# Compute error metrics
err_dict['rmse'].append(rmse_mean)
err_dict['mape'].append(mape_mean)
err_dict['mae'].append(mae_mean)
# Increase gamma by one step
gamma = gamma + gammaStep
# Increase beta by one step
beta = beta + betaStep
# Increase alpha by one step
alpha = alpha + alphaStep
# Convert to dataframe
err_df = pd.DataFrame(err_dict)
# Get min RMSE
rmse_min = err_df['rmse'].min()
return err_df[err_df['rmse'] == rmse_min]['alpha'].values[0], \
err_df[err_df['rmse'] == rmse_min]['beta'].values[0], \
err_df[err_df['rmse'] == rmse_min]['gamma'].values[0], \
err_df
def get_error_metrics2(series, train_size, L, H, alpha, beta, gamma):
"""
Given a series consisting of both train+validation, do predictions of forecast horizon H on the validation set,
at H/2 intervals.
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
L : period
H : forecast horizon
alpha : smoothing coeff
beta : trend coeff
gamma : seasonality coeff
Outputs
mean of rmse, mean of mape, mean of mae
"""
# Predict using single exponential smoothing, and compute error metrics also
rmse = [] # root mean square error
mape = [] # mean absolute percentage error
mae = [] # mean absolute error
preds_dict = {}
for i in range(train_size, len(series)-H+1, int(H/2)):
preds_list, seasonals = triple_exponential_smoothing(series[i-train_size:i], L, H, alpha, beta, gamma)
rmse.append(get_rmse(series[i:i+H], preds_list))
mape.append(get_mape(series[i:i+H], preds_list))
mae.append(get_mae(series[i:i+H], preds_list))
preds_dict[i] = preds_list
return np.mean(rmse), np.mean(mape), np.mean(mae), preds_dict, alpha, beta, gamma
def hyperparam_tune_alpha_beta_gamma_parallelized(series, train_size, L, H):
"""
This is a parallelized implementation of hyperparam_tune_alpha_beta_gamma.
Given a series, tune hyperparameter alpha, fit and predict
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
L : period
H : forecast horizon
Outputs
optimum hyperparameters, error metrics dataframe
"""
num_cores = multiprocessing.cpu_count()
inputs = []
alpha = alphaMin
while alpha <= alphaMax:
beta = betaMin
while beta <= betaMax:
gamma = gammaMin
while gamma <= gammaMax:
inputs.append((alpha, beta, gamma))
gamma = gamma + gammaStep
beta = beta + betaStep
alpha = alpha + alphaStep
results = Parallel(n_jobs=num_cores)(delayed(get_error_metrics2)(series, train_size, L, H, item[0], item[1], item[2]) for item in inputs)
# results has format [(rmse_mean1, mape_mean1, mae_mean1, preds_dict1, alpha1, beta1, gamma1), (rmse_mean2, mape_mean2, mae_mean2, preds_dict2, alpha2, beta2, gamma2), ...]
err_dict = defaultdict(list)
for item in results:
# Append error metrics
err_dict['rmse'].append(item[0])
err_dict['mape'].append(item[1])
err_dict['mae'].append(item[2])
# Append alpha and beta
err_dict['alpha'].append(item[4])
err_dict['beta'].append(item[5])
err_dict['gamma'].append(item[6])
# Convert to dataframe
err_df = pd.DataFrame(err_dict)
# Get min RMSE
rmse_min = err_df['rmse'].min()
alpha_opt = err_df[err_df['rmse'] == rmse_min]['alpha'].values[0]
beta_opt = err_df[err_df['rmse'] == rmse_min]['beta'].values[0]
gamma_opt = err_df[err_df['rmse'] == rmse_min]['gamma'].values[0]
print("alpha_opt = " + str(alpha_opt) +
", beta_opt = " + str(beta_opt) +
", gamma_opt = " + str(gamma_opt) +
", rmse_min = " + str(rmse_min))
return alpha_opt, beta_opt, gamma_opt, err_df
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv(stk_path, sep = ",")
# Convert Date column to datetime
df.loc[:, 'Date'] = pd.to_datetime(df['Date'],format='%Y-%m-%d')
# Change all column headings to be lower case, and remove spacing
df.columns = [str(x).lower().replace(' ', '_') for x in df.columns]
# Sort by datetime
df.sort_values(by='date', inplace=True, ascending=True)
df.head(10)
df['date'].min(), df['date'].max()
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='adj_close', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("USD")
###Output
_____no_output_____
###Markdown
Predict for a specific H (forecast horizon) and a specific date, plot original series too
###Code
i = 1008 # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Predict
preds_list, seasonals = triple_exponential_smoothing(df['adj_close'][i-train_val_size:i].values, L, H, 0.3, 0.55, 0.6, True)
print("For forecast horizon %d, predicting on day %d, date %s, the RMSE is %f" % (H, i, df['date'][i], get_rmse(df[i:i+H]['adj_close'], preds_list[train_val_size:train_val_size+H])))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAPE is %f" % (H, i, df['date'][i], get_mape(df[i:i+H]['adj_close'], preds_list[train_val_size:train_val_size+H])))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAE is %f" % (H, i, df['date'][i], get_mae(df[i:i+H]['adj_close'], preds_list[train_val_size:train_val_size+H])))
# Plot the predictions
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='adj_close', style='bx-', grid=True)
# Plot the predictions
ax.plot(df['date'][i-train_val_size:i+H], preds_list, marker='s', color='r')
ax.plot(df['date'][i:i+H], preds_list[train_val_size:train_val_size+H], marker='x')
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.legend(['adj_close', 'predictions'])
ax.set_ylim([min(min(preds_list[1:]), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].min()),
max(max(preds_list[1:]), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].max())])
ax.set_xlim([df['date'][i-train_val_size+1], df['date'][i]+timedelta(days=H)])
# ax.set_xlim(['2016-06-01', '2017-02-01'])
# ax.set_ylim([60, 85])
# Plot the seasonals
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
plt.plot(seasonals)
plt.grid()
# plt.xlim([0, 252])
# Plot the seasonals with actual values on dual axes
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('date')
ax1.set_ylabel('USD', color=color)
ax1.plot(df['date'][:len(seasonals)], df['adj_close'][:len(seasonals)], color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('seasonal components', color=color) # we already handled the x-label with ax1
ax2.plot(df['date'][:len(seasonals)], seasonals, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
# ax1.set_xlim(['2016-01-01', '2016-12-31'])
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Predict for a specific H (forecast horizon) and a specific date
###Code
i = 1008 # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Predict
preds_list, seasonals = triple_exponential_smoothing(df['adj_close'][i-train_val_size:i].values, L, H, 0.3, 0.55, 0.6)
print("For forecast horizon %d, predicting on day %d, date %s, the RMSE is %f" % (H, i, df['date'][i], get_rmse(df[i:i+H]['adj_close'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAPE is %f" % (H, i, df['date'][i], get_mape(df[i:i+H]['adj_close'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAE is %f" % (H, i, df['date'][i], get_mae(df[i:i+H]['adj_close'], preds_list)))
# Plot the predictions
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='adj_close', style='bx-', grid=True)
# Plot the predictions
ax.plot(df['date'][i:i+H], preds_list, marker='x')
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.legend(['adj_close', 'predictions'])
ax.set_ylim([min(min(preds_list), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].min()),
max(max(preds_list), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].max())])
ax.set_xlim([df['date'][i]-timedelta(days=daysBackward), df['date'][i]+timedelta(days=daysForward)])
###Output
_____no_output_____
###Markdown
Predict for a specific H (forecast horizon) and a specific date, with hyperparameter tuning - alpha, beta, gamma
###Code
i = 1008 # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# # Get optimum hyperparams
# tic = time.time()
# alpha_opt, beta_opt, gamma_opt, err_df = hyperparam_tune_alpha_beta_gamma(df['adj_close'][i-train_val_size:i].values, train_size, L, H)
# toc = time.time()
# print("Time taken = {0:.2f} mins".format((toc-tic)/60.0))
# print("alpha_opt = " + str(alpha_opt))
# print("beta_opt = " + str(beta_opt))
# print("gamma_opt = " + str(gamma_opt))
# print("rmse opt = " + str(err_df[(err_df['alpha']==alpha_opt) & (err_df['beta']==beta_opt)]['rmse'].values[0]))
# err_df
# Get optimum hyperparams - parallelized
tic = time.time()
alpha_opt, beta_opt, gamma_opt, err_df = hyperparam_tune_alpha_beta_gamma_parallelized(df['adj_close'][i-train_val_size:i].values, train_size, L, H)
toc = time.time()
print("Time taken = {0:.2f} mins".format((toc-tic)/60.0))
print("alpha_opt = " + str(alpha_opt))
print("beta_opt = " + str(beta_opt))
print("gamma_opt = " + str(gamma_opt))
print("rmse opt = " + str(err_df[(err_df['alpha']==alpha_opt) & (err_df['beta']==beta_opt)]['rmse'].values[0]))
err_df
# Predict
preds_list, seasonals = triple_exponential_smoothing(df['adj_close'][i-train_val_size:i].values, L, H, alpha_opt, beta_opt, gamma_opt)
print("For forecast horizon %d, predicting on day %d, date %s, the RMSE is %f" % (H, i, df['date'][i], get_rmse(df[i:i+H]['adj_close'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAPE is %f" % (H, i, df['date'][i], get_mape(df[i:i+H]['adj_close'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the mean MAE is %f" % (H, i, df['date'][i], get_mae(df[i:i+H]['adj_close'], preds_list)))
# Plot the predictions
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='adj_close', style='bx-', grid=True)
# Plot the predictions
ax.plot(df['date'][i:i+H], preds_list, marker='x')
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.legend(['adj_close', 'predictions'])
ax.set_ylim([min(min(preds_list), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].min()),
max(max(preds_list), df[(df['date']>=df['date'][i]-timedelta(days=daysBackward)) & (df['date']<=df['date'][i]+timedelta(days=daysForward))]['adj_close'].max())])
ax.set_xlim([df['date'][i]-timedelta(days=daysBackward), df['date'][i]+timedelta(days=daysForward)])
###Output
_____no_output_____
###Markdown
Predict for a specific H (forecast horizon), and various dates, using model trained in previous step
###Code
print("alpha_opt = " + str(alpha_opt))
print("beta_opt = " + str(beta_opt))
print("gamma_opt = " + str(gamma_opt))
# Predict and compute error metrics also
preds_dict = {}
results_final_no_tune = defaultdict(list)
for i in i_list:
print("Predicting on day %d, date %s" % (i, df.iloc[i]['date']))
preds_list, seasonals = triple_exponential_smoothing(df['adj_close'][i-train_val_size:i].values, L, H, alpha_opt, beta_opt, gamma_opt)
# Collect the predictions
preds_dict[i] = preds_list
# Compute error metrics
results_final_no_tune['rmse'].append(get_rmse(df[i:i+H]['adj_close'], preds_list))
results_final_no_tune['mape'].append(get_mape(df[i:i+H]['adj_close'], preds_list))
results_final_no_tune['mae'].append(get_mae(df[i:i+H]['adj_close'], preds_list))
results_final_no_tune['day'].append(df.iloc[i]['date'])
results_final_no_tune = pd.DataFrame(results_final_no_tune)
print("Altogether we made %d forecasts, each of length %d days" % (len(results_final_no_tune), H))
print("For forecast horizon %d, the mean RMSE is %f" % (H, results_final_no_tune['rmse'].mean()))
print("For forecast horizon %d, the mean MAPE is %f" % (H, results_final_no_tune['mape'].mean()))
print("For forecast horizon %d, the mean MAE is %f" % (H, results_final_no_tune['mae'].mean()))
# Put results into pickle
pickle.dump(results_final_no_tune['rmse'].mean(),
open("./out/rmse_no_tune_" + str(train_size) + "_" + str(val_size) + ".pickle", "wb"))
results_final_no_tune['alpha_opt'] = [alpha_opt]*len(i_list)
results_final_no_tune['beta_opt'] = [beta_opt]*len(i_list)
results_final_no_tune['gamma_opt'] = [gamma_opt]*len(i_list)
results_final_no_tune
# Plot the predictions, and zoom in
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='adj_close', style='b-', grid=True)
# Plot the predictions
for key in preds_dict:
ax.plot(df['date'][key:key+H], preds_dict[key])
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.legend(['adj_close', 'predictions'])
firstDay = df['date'][min(i_list)]-timedelta(days=daysBackward)
lastDay = df['date'][max(i_list)]+timedelta(days=daysForward)
ax.set_ylim([min(min(min(preds_dict.values())),
df[(df['date']>=firstDay) & (df['date']<=lastDay)]['adj_close'].min()
),
max(max(max(preds_dict.values())),
df[(df['date']>=firstDay) & (df['date']<=lastDay)]['adj_close'].max()
)
])
ax.set_xlim([firstDay, lastDay])
###Output
_____no_output_____
###Markdown
Predict for a specific H (forecast horizon), and various dates, tuning model for every prediction
###Code
# Predict and compute error metrics also
preds_dict = {}
results_final = defaultdict(list)
tic = time.time()
for i in i_list:
tic1 = time.time()
print("Predicting on day %d, date %s" % (i, df.iloc[i]['date']))
# Get optimum hyperparams
alpha_opt, beta_opt, gamma_opt, err_df = hyperparam_tune_alpha_beta_gamma_parallelized(df['adj_close'][i-train_val_size:i].values, train_size, L, H)
preds_list, seasonals = triple_exponential_smoothing(df['adj_close'][i-train_val_size:i].values, L, H, alpha_opt, beta_opt, gamma_opt)
# Collect the predictions
preds_dict[i] = preds_list
# Compute error metrics
results_final['rmse'].append(get_rmse(df[i:i+H]['adj_close'], preds_list))
results_final['mape'].append(get_mape(df[i:i+H]['adj_close'], preds_list))
results_final['mae'].append(get_mae(df[i:i+H]['adj_close'], preds_list))
results_final['alpha_opt'].append(alpha_opt)
results_final['beta_opt'].append(beta_opt)
results_final['gamma_opt'].append(gamma_opt)
results_final['day'].append(df.iloc[i]['date'])
toc1 = time.time()
print("Time taken = " + str((toc1-tic1)/60.0) + " mins")
toc = time.time()
print("Total time taken = " + str((toc-tic)/60.0) + " mins")
results_final = pd.DataFrame(results_final)
print("Altogether we made %d forecasts, each of length %d days" % (len(results_final), H))
results_final
print("For forecast horizon %d, the mean RMSE is %f" % (H, results_final['rmse'].mean()))
print("For forecast horizon %d, the mean MAPE is %f" % (H, results_final['mape'].mean()))
print("For forecast horizon %d, the mean MAE is %f" % (H, results_final['mae'].mean()))
# Put results into pickle
pickle.dump(results_final['rmse'].mean(),
open("./out/rmse_w_tune_" + str(train_size) + "_" + str(val_size) + ".pickle", "wb")
)
# Put results into pickle
pickle.dump(results_final,
open("./out/results_final_" + str(train_size) + "_" + str(val_size) + ".pickle", "wb")
)
# Plot the predictions, and zoom in
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='adj_close', style='b-', grid=True)
# Plot the predictions
for key in preds_dict:
ax.plot(df['date'][key:key+H], preds_dict[key])
ax.set_xlabel("date")
ax.set_ylabel("USD")
firstDay = df['date'][min(i_list)]-timedelta(days=daysBackward)
lastDay = df['date'][max(i_list)]+timedelta(days=daysForward)
ax.set_ylim([min(min(min(preds_dict.values())),
df[(df['date']>=firstDay) & (df['date']<=lastDay)]['adj_close'].min()
),
max(max(max(preds_dict.values())),
df[(df['date']>=firstDay) & (df['date']<=lastDay)]['adj_close'].max()
)
])
ax.set_xlim([firstDay, lastDay])
ax.legend(['adj_close', 'forecasts'])
# Save figure:
fig = ax.get_figure()
fig.savefig("./out/forecasts_" + str(train_size) + "_" + str(val_size) + ".png", bbox_inches='tight')
# Plot scatter plot of actual values vs. predictions
for key in preds_dict:
plt.plot(df['adj_close'][key:key+H], preds_dict[key], 'x')
plt.plot(range(int(min(min(preds_dict.values()))), int(max(max(preds_dict.values()))), 1),
range(int(min(min(preds_dict.values()))), int(max(max(preds_dict.values()))), 1), 'b-')
plt.xlabel('adj_close')
plt.ylabel('forecasts')
plt.grid()
# Save figure:
plt.savefig("./out/forecasts_vs_actuals_" + str(train_size) + "_" + str(val_size) + ".png", bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Findings
###Code
rmse_no_tune = pickle.load(open( "./out/rmse_no_tune_" + str(756) + "_" + str(21) + ".pickle", "rb"))
rmse_no_tune
rmse_w_tune = pickle.load(open( "./out/rmse_w_tune_" + str(756) + "_" + str(21) + ".pickle", "rb"))
rmse_w_tune
results_final = pickle.load(open( "./out/results_final_" + str(756) + "_" + str(21) + ".pickle", "rb"))
results_final
train_size
val_size
# Collect the results from various experiments
# # alpha - smoothing coeff
# alphaMax = 0.99
# alphaMin = 0.01
# alphaStep = 0.01
# # beta - trend coeff
# betaMax = 0.99
# betaMin = 0.01
# betaStep = 0.01
# # gamma - seasonality coeff
# gammaMax = 0.9
# gammaMin = 0.1
# gammaStep = 0.1
# L = 252 # seasonality period
results_dict = pd.DataFrame({'train_size': [756, 756, 756, 756],
'val_size': [21, 63, 126, 252],
'before_tune_rmse': [11.4343, 10.7199, 6.2241, 6.3890],
'after_tune_rmse': [4.5537, 5.0689, 4.5988, 3.9139]})
results_dict
# Compare results across all exponential smoothing methods
all_results = pd.DataFrame({'Method': ['Last value', 'Single exponential smoothing', 'Double exponential smoothing', 'Triple exponential smoothing'],
'RMSE': [2.53, 2.52, 2.29, 3.91],
'MAPE(%)': [1.69, 1.69, 1.53, 2.65],
'MAE': [2.26, 2.25, 2.04, 3.50]})
all_results
###Output
_____no_output_____ |
content/lessons/04/Now-You-Code/NYC3-Coin-Flip.ipynb | ###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs: Outputs: head or tail Algorithm (Steps in Program):
###Code
import random
h = 0
t = 0
times = int(input("how many times should it flip?: "))
for i in range(times):
coin = ['heads','tales']
flip = random.choice(coin)
if flip == 'heads':
h = h + 1
else:
t = t + 1
print(h,t)
###Output
how many times should it flip?: 1000
528 472
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
tails
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:Outputs:Algorithm (Steps in Program):
###Code
import random
heads = 0
tails = 0
flips = 0
number = int(input("Enter number of flips:"))
coin = ['heads', 'tails']
while flips != number:
flip = random.choice(coin)
heads = heads + flip.count("heads")
tails = tails + flip.count("tails")
flips = flips + 1
print("After %d flips. Heads:%d, Tails:%d" % (number,heads,tails))
from collections import Counter
import random
Counter(random.choice(['Heads', 'Tails']) for _ in range(1000))
###Output
_____no_output_____
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:Outputs:Algorithm (Steps in Program):
###Code
import random
# Step 2: write code for program
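# One possible completion of "Step 2" (a sketch I added; the original cell was
# left blank, and the prompt and tally format below mirror the recorded output
# of this cell). It reuses the `random` module imported on the line above.
heads = 0
tails = 0
flips = int(input("How many times should I flip the coin? "))
for _ in range(flips):
    if random.choice(['heads', 'tails']) == 'heads':
        heads = heads + 1
    else:
        tails = tails + 1
print("After %d flips. Heads: %d, Tails %d" % (flips, heads, tails))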
###Output
How many times should I flip the coin? 10000000
After 10000000 flips. Heads: 4999224, Tails 5000776
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:how many times should the coin be flippedOutputs:Algorithm (Steps in Program):
###Code
import random
number=int(input("how many times should I flip the coin?"))
heads= random.randint(0,number)
tails= number-heads
print("your coin landed on heads %d times, and it landed on tails %d times."%(heads,tails))
###Output
how many times should I flip the coin?20000
your coin landed on heads 16786 times, and it landed on tails 3214 times.
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs: heads or tailsOutputs: number of heads and tailsAlgorithm (Steps in Program):make random choicemake loopoutput heads and tails
###Code
h=0
t=0
times=int(input("How many flips?"))
for i in range (times):
import random
coin=['heads', 'tails']
flip=random.choice(coin)
heads="heads"
tails="tails"
if flip=='heads':
h=h+1
if flip=='tails':
t=t+1
print("Heads=",h, "Tails=", t)
###Output
How many flips?1000000
Heads= 500229 Tails= 499771
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
h=0
t=0
Times=int(input("how many times should i flip this coin?: "))
for I in range(Times):
coin=['heads', 'tails']
flip = random.choice(coin)
if flip=='heads':
h=h+1
else:
t=t+1
print("The total of %d flips resulted in %d heads and %d tails" %(Times,h,t))
###Output
how many times should i flip this coin?: 1000000
The total of 1000000 flips resulted in 499338 heads and 500662 tails
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:Outputs:Algorithm (Steps in Program):
###Code
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
heads
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:Outputs:Algorithm (Steps in Program):
###Code
count_heads=0
count_tails=0
flips=int(input("How many times should I flip this coin: "))
for i in range (0,flips):
import random
coin = ['heads', 'tails']
flip=random.choice(coin)
if flip=='heads':
count_heads= count_heads+1
if flip=='tails':
count_tails= count_tails+1
print("There are %d heads and %d tails after %d flips" %(count_heads, count_tails, flips))
###Output
How many times should I flip this coin: 100000
There are 50050 heads and 49950 tails after 100000 flips
###Markdown
Now You Code 3: Coin FlipsWrite a program to simulate 100,000 coin flips. The program should outputthe tally of heads and tails across the number of flips.For Example:After 100000 flips. Heads: 50030, Tails 49970Of course, since the flips are random, the counts will vary each time the program executes.Your strategy should be to use a loop to flip a coin 100,000 times.After you flip you should check for heads or tails and increment the countaccordingly. How do you make Python simulate a coin flip?
###Code
# Sample code which demostrates how to flip a coin in python
import random
coin = ['heads', 'tails']
flip = random.choice(coin)
print(flip)
###Output
_____no_output_____
###Markdown
Run the cell above a couple of times. Notice how each time you execute the code, it comes up as either `heads` or `tails`.Here's a breakdown of the code```line 1 imports the random moduleline 2 sets up a coin to have two choices heads or tailsline 3 "flips" a coin by making a random choice among the values of the coin ('heads' or 'tails')line 4 prints the results of the coin flip.```Now that you understand how to simulate a coin flip in Python, try to design an algorithm and then write code for the program. Step 1: Problem Analysis Inputs:Outputs:Algorithm (Steps in Program):
###Code
import random
n=int(input("Enter the number of times to flip the coin"))
coin=['heads', 'tails']
t=0
h=0
for i in range (n):
flip= random.choice(coin)
if flip == 'tails':
t=t+1
else:
h=h+1
print( "After %d flips. Heads: %d, Tails: %d" %(n,h,t))
###Output
Enter the number of times to flip the coin10
After 10 flips. Heads: 6, Tails: 4
|
framework/expert/ocr.ipynb | ###Markdown
Simply skipping the empty rows (number of empty rows: val 1, train 22, test 7)```51/51 [==============================] - 54s 787ms/step - loss: 1.2442 - accuracy: 0.4017 - val_loss: 0.9536 - val_accuracy: 0.6584Epoch 2/551/51 [==============================] - 40s 784ms/step - loss: 0.8754 - accuracy: 0.6450 - val_loss: 0.6879 - val_accuracy: 0.7267Epoch 3/551/51 [==============================] - 39s 773ms/step - loss: 0.6197 - accuracy: 0.7611 - val_loss: 0.6713 - val_accuracy: 0.7391Epoch 4/551/51 [==============================] - 40s 779ms/step - loss: 0.4725 - accuracy: 0.8172 - val_loss: 0.6613 - val_accuracy: 0.7391Epoch 5/551/51 [==============================] - 40s 777ms/step - loss: 0.3838 - accuracy: 0.8777 - val_loss: 0.6660 - val_accuracy: 0.751614/14 [==============================] - 4s 261ms/step - loss: 0.4056 - accuracy: 0.8643[0.4056240916252136, 0.8642534017562866]``` Replacing the empty rows with the empty string ""```51/51 [==============================] - 54s 803ms/step - loss: 1.1917 - accuracy: 0.4449 - val_loss: 0.8905 - val_accuracy: 0.6481Epoch 2/551/51 [==============================] - 40s 793ms/step - loss: 0.8183 - accuracy: 0.6726 - val_loss: 0.6978 - val_accuracy: 0.7531Epoch 3/551/51 [==============================] - 40s 785ms/step - loss: 0.5872 - accuracy: 0.7865 - val_loss: 0.6661 - val_accuracy: 0.7222Epoch 4/551/51 [==============================] - 40s 789ms/step - loss: 0.4591 - accuracy: 0.8480 - val_loss: 0.6833 - val_accuracy: 0.7346Epoch 5/551/51 [==============================] - 40s 787ms/step - loss: 0.3869 - accuracy: 0.8818 - val_loss: 0.6666 - val_accuracy: 0.753115/15 [==============================] - 4s 247ms/step - loss: 0.3995 - accuracy: 0.8686[0.39946186542510986, 0.8685969114303589]```
###Code
# !rm -rf /content/drive/MyDrive/store-type-recognition
# !cd /content/drive/MyDrive && unzip /content/drive/MyDrive/store-type-recognition.zip
!ls /content/drive/MyDrive/store-type-recognition/split/train/food/ |wc -l
!ls /content/drive/MyDrive/store-type-recognition/split/test/store/ |wc -l
!ls /content/drive/MyDrive/store-type-recognition/split/val/food/ |wc -l
!ls /content/store-type-recognition/split/test/store |wc -l
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/chinese_L-12_H-768_A-12.tar.gz"
tf.io.gfile.copy(gs_folder_bert, "/content/drive/MyDrive/chinese_L-12_H-768_A-12.tar.gz")
!cd /content/drive/MyDrive &&tar -zxvf chinese_L-12_H-768_A-12.tar.gz -C chinese_L-12_H-768_A-12
!nvidia-smi
!kill -9 60
import os , shutil
data_path = "/content/drive/MyDrive/store-type-recognition/split/test"
labels = os.listdir(data_path)
data_path="/content/store-type-recognition/split/test/store"
for label in os.listdir(data_path):
label_image = os.path.join(data_path, label)
shutil.copy(os.path.join(label_image),os.path.join("/content/drive/MyDrive/store-type-recognition/split/test/store",label))
for label in labels:
if label =="food":
print(label)
label_dir = os.path.join(data_path, label)
val_label_dir=label_dir.replace("test","val")
if not os.path.exists(val_label_dir):
os.mkdir(val_label_dir)
with open(f"/content/drive/MyDrive/{label}.txt", "r")as f:
images_name = f.readlines()
for img in images_name:
number = re.findall(r"\d+", img)[0].strip()
for train_img in os.listdir(label_dir):
train_number=re.findall(r"\d+", train_img)[0].strip()
if number == train_number:
print(number,train_number)
print(os.path.join(label_dir,train_img),os.path.join(val_label_dir,train_img))
# os.remove(os.path.join(label_dir,train_img))
shutil.move(os.path.join(label_dir,train_img),os.path.join(val_label_dir,train_img))
print(os.path.join(label_dir,train_img))
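# Minimal sketch (my addition, not from the original notebook) of the two OCR-text
# strategies compared in the markdown above, assuming a DataFrame with an
# 'ocr_text' column in which images without OCR results are NaN. The column and
# variable names here are hypothetical.
import numpy as np
import pandas as pd
ocr_df = pd.DataFrame({"image": ["a.jpg", "b.jpg", "c.jpg"],
                       "ocr_text": ["coffee shop", np.nan, "bakery"]})
# Strategy 1: simply skip (drop) the rows whose OCR text is empty
ocr_skip = ocr_df.dropna(subset=["ocr_text"])
# Strategy 2: keep every row and replace the missing OCR text with the empty string ""
ocr_fill = ocr_df.fillna({"ocr_text": ""})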
###Output
_____no_output_____ |
guide/dev.ipynb | ###Markdown
ContributeBefore we can accept contributions, you need to become a CLAed contributor.E-mail a signed copy of the[CLAI](https://github.com/vita-epfl/openpifpaf/blob/main/CLAI.txt)(and if applicable the[CLAC](https://github.com/vita-epfl/openpifpaf/blob/main/CLAC.txt))as PDF file to [email protected]. (modify-code)= Modify CodeFor development of the openpifpaf source code itself, you need to clone this repository and then:```shpip3 install numpy cythonpip3 install --editable '.[dev,train,test]'```The last command installs the Python package in the current directory(signified by the dot) with the optional dependencies needed for training andtesting. If you modify `functional.pyx`, run this last command again whichrecompiles the static code.Develop your features in separate feature branches. Create a pull request with your suggested changes. Make sure your code passes`pytest`, `pylint` and `pycodestyle` checks:```shpylint openpifpafpycodestyle openpifpafpytestcd guidepython download_data.pypytest --nbval-lax --current-env *.ipynb``` Things to ContributeThis is a research project and changing fast. Contributions can be in many areas:* Add a new dataset?* Add a new base network? * Try a different loss? * Try a better multi-task strategy?* Try a different head architecture? * Add a new task? * Run on new hardware (mobile phones, embedded devices, ...)? * Improve training schedule/procedure?* Use it to build an app?* Improve documentation (!!)* ... Missing DependenciesOpenPifPaf has few core requirements so that you can run it efficiently on servers without graphical interface. Sometimes, you just want to install all possible dependencies. Those are provided as "extra" requirements. Use the following `pip3` command to install all extras.
###Code
import sys
if sys.version_info >= (3, 8):
import importlib.metadata
extras = importlib.metadata.metadata('openpifpaf').get_all('Provides-Extra')
print(f'pip3 install "openpifpaf[{",".join(extras)}]"')
###Output
_____no_output_____
###Markdown
Your Project and OpenPifPafLet us know about your open source projects. We would like to feature them in our "related projects" section. The simplest way to integrate with OpenPifPaf is to write a plugin. If some functionality is not possible through our plugin architecture, open an issue to discuss and if necessary send us a pull request that enables the missing feature you need.If you do need to make a copy of OpenPifPaf, you must respect our license. Build Guide```shcd guidejb build .```If you encounter issues with the kernel spec in the notebooks, this is a piece of code that is used to normalize the kernel spec names in the guide geployment script:```shfor fn in *.ipynb; do jq '(.metadata.kernelspec.name,.metadata.kernelspec.display_name)="python3"' ${fn} > ${fn}_ mv ${fn}_ ${fn}done``` Build Environment
###Code
!pip freeze
!python -m openpifpaf.predict --version
###Output
_____no_output_____
###Markdown
ContributeBefore we can accept contributions, you need to become a CLAed contributor.E-mail a signed copy of the[CLAI](https://github.com/openpifpaf/openpifpaf/blob/main/docs/CLAI.txt)(and if applicable the[CLAC](https://github.com/openpifpaf/openpifpaf/blob/main/docs/CLAC.txt))as PDF file to [email protected]. (modify-code)= Modify CodeFor development of the openpifpaf source code itself, you need to clone this repository and then:```shpip3 install numpy cythonpip3 install --editable '.[dev,train,test]'```The last command installs the Python package in the current directory(signified by the dot) with the optional dependencies needed for training andtesting. If you modify `functional.pyx`, run this last command again whichrecompiles the static code.Develop your features in separate feature branches. Create a pull request with your suggested changes. Make sure your code passes`pytest`, `pylint` and `pycodestyle` checks:```shpylint openpifpafpycodestyle openpifpafpytestcd guidepython download_data.pypytest --nbval-lax --current-env *.ipynb``` Things to ContributeThis is a research project and changing fast. Contributions can be in many areas:* Add a new dataset?* Add a new base network? * Try a different loss? * Try a better multi-task strategy?* Try a different head architecture? * Add a new task? * Run on new hardware (mobile phones, embedded devices, ...)? * Improve training schedule/procedure?* Use it to build an app?* Improve documentation (!!)* ... Missing DependenciesOpenPifPaf has few core requirements so that you can run it efficiently on servers without graphical interface. Sometimes, you just want to install all possible dependencies. Those are provided as "extra" requirements. Use the following `pip3` command to install all extras.
###Code
# NO CODE
import sys
if sys.version_info >= (3, 8):
import importlib.metadata
extras = importlib.metadata.metadata('openpifpaf').get_all('Provides-Extra')
print(f'pip3 install "openpifpaf[{",".join(extras)}]"')
###Output
_____no_output_____
###Markdown
Your Project and OpenPifPafLet us know about your open source projects. We would like to feature them in our "related projects" section. The simplest way to integrate with OpenPifPaf is to write a plugin. If some functionality is not possible through our plugin architecture, open an issue to discuss and if necessary send us a pull request that enables the missing feature you need.If you do need to make a copy of OpenPifPaf, you must respect our license. Build Guide```shcd guidejb build .```If you encounter issues with the kernel spec in a notebook, open the notebookwith a text editor and find `metadata.kernelspec.name` and set it to `python3`.Alternatively, you can patch your local package yourself. Open `venv/lib/python3.9/site-packages/jupyter_cache/executors/utils.py`in your editor and add `kernel_name='python3'` to the arguments of `nbexecute()`[here](https://github.com/executablebooks/jupyter-cache/blob/1431ed72961fabc2f09a13553b0aa45f4c8a7c23/jupyter_cache/executors/utils.pyL56).Alternatively, for continuous integration, the `kernel_name` is replace in the json of the Jupyter Notebook before executing jupyter-book [here](https://github.com/openpifpaf/openpifpaf/blob/6db797205cd082bf7a80e967372957e56c9835fb/.github/workflows/deploy-guide.ymlL47).Only use this operation on a discardable copy as `jq` changes all formatting. Build Environment
###Code
%%bash
pip freeze
%%bash
python -m openpifpaf.predict --version
###Output
_____no_output_____
###Markdown
ContributeBefore we can accept contributions, you need to become a CLAed contributor.E-mail a signed copy of the[CLAI](https://github.com/openpifpaf/openpifpaf/blob/main/docs/CLAI.txt)(and if applicable the[CLAC](https://github.com/openpifpaf/openpifpaf/blob/main/docs/CLAC.txt))as PDF file to [email protected]. (modify-code)= Modify CodeFor development of the openpifpaf source code itself, you need to clone this repository and then:```shpip3 install numpy cythonpip3 install --editable '.[dev,train,test]'```The last command installs the Python package in the current directory(signified by the dot) with the optional dependencies needed for training andtesting. If you modify `functional.pyx`, run this last command again whichrecompiles the static code.Develop your features in separate feature branches. Create a pull request with your suggested changes. Make sure your code passes`pytest`, `pylint` and `pycodestyle` checks:```shpylint openpifpafpycodestyle openpifpafpytestcd guidepython download_data.pypytest --nbval-lax --current-env *.ipynb``` Things to ContributeThis is a research project and changing fast. Contributions can be in many areas:* Add a new dataset?* Add a new base network? * Try a different loss? * Try a better multi-task strategy?* Try a different head architecture? * Add a new task? * Run on new hardware (mobile phones, embedded devices, ...)? * Improve training schedule/procedure?* Use it to build an app?* Improve documentation (!!)* ... Missing DependenciesOpenPifPaf has few core requirements so that you can run it efficiently on servers without graphical interface. Sometimes, you just want to install all possible dependencies. Those are provided as "extra" requirements. Use the following `pip3` command to install all extras.
###Code
import sys
if sys.version_info >= (3, 8):
import importlib.metadata
extras = importlib.metadata.metadata('openpifpaf').get_all('Provides-Extra')
print(f'pip3 install "openpifpaf[{",".join(extras)}]"')
###Output
_____no_output_____
###Markdown
Your Project and OpenPifPafLet us know about your open source projects. We would like to feature them in our "related projects" section. The simplest way to integrate with OpenPifPaf is to write a plugin. If some functionality is not possible through our plugin architecture, open an issue to discuss and if necessary send us a pull request that enables the missing feature you need.If you do need to make a copy of OpenPifPaf, you must respect our license. Build Guide```shcd guidejb build .```If you encounter issues with the kernel spec in a notebook, open the notebookwith a text editor and find `metadata.kernelspec.name` and set it to `python3`.Alternatively, you can patch your local package yourself. Open `venv/lib/python3.9/site-packages/jupyter_cache/executors/utils.py`in your editor and add `kernel_name='python3'` to the arguments of `nbexecute()`[here](https://github.com/executablebooks/jupyter-cache/blob/1431ed72961fabc2f09a13553b0aa45f4c8a7c23/jupyter_cache/executors/utils.pyL56).Alternatively, for continuous integration, the `kernel_name` is replace in the json of the Jupyter Notebook before executing jupyter-book [here](https://github.com/openpifpaf/openpifpaf/blob/6db797205cd082bf7a80e967372957e56c9835fb/.github/workflows/deploy-guide.ymlL47).Only use this operation on a discardable copy as `jq` changes all formatting. Build Environment
###Code
%%bash
pip freeze
%%bash
python -m openpifpaf.predict --version
###Output
_____no_output_____ |
tutorials/Connection Requests.ipynb | ###Markdown
Use cases: Connection requests
###Code
import pandas as pd
import numpy as np
import copy
import pandapower as pp
import os,sys,inspect
import json
import math
import yaml
import folium
from folium.plugins import BeautifyIcon
import plotly.graph_objs as go
#import packages from parent subfolder
currentdir = os.path.abspath(os.getcwd())
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
def distance_to_poi(poi, df):
"""
Calculate the Haversine distance (in km) from a point of interest to every row of df.
Parameters
----------
poi : tuple of float
(lat, lon) of the point of interest
df : pandas.DataFrame with 'x' (longitude) and 'y' (latitude) columns
Returns
-------
df : the same DataFrame with an added 'distance to poi' column in km
"""
lat1, lon1 = poi
closest_point = None
min_d = None
df['distance to poi'] = None
for idx,row in df.iterrows():
lat2 = row['y']
lon2 = row['x']
radius = 6371 # km
dlat = math.radians(lat2 - lat1)
dlon = math.radians(lon2 - lon1)
a = (math.sin(dlat / 2) * math.sin(dlat / 2) +
math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
math.sin(dlon / 2) * math.sin(dlon / 2))
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
d= radius * c
df.loc[idx, 'distance to poi'] = d
return df
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
# Pandapower grid
net = pp.from_json(os.path.join(currentdir,'data\svedala\svedala.json'))
#Read headroom
head_load= pd.read_json(os.path.join(currentdir,'data\svedala\svedala_headroom.json'))
head_sgen= pd.read_json(os.path.join(currentdir,'data\svedala\svedala_headroom_sgen.json'))
# Data preparation
df = copy.deepcopy(net.bus_geodata)
#Need to adjust coordinates (NOT NEEDED WITH REAL DATA)
df['x'] = df['x']*0.1 + 13.5
df['y'] = df['y']*0.1 + 57.4
df['sgen'] = head_sgen['Headroom']
df['load'] = head_load['Headroom']
df['voltage'] = net.bus.vn_kv
df = df.drop('coords', 1)
###Output
_____no_output_____
###Markdown
Use case 1 "As a new customer, I want to see whether I can connect at a grid connection point"For this use case the customer has a connection request in mind. The customer would like to connect a load or generator of a specific size and would want to see on a map where it is possible to connect.Vision is that this information should be public available meaning only necessary information should be published. In this case the customer is only intrested in available headroom (here its dependent on type of connection) and position. For understanding which type of connection is available we have choosen to also include voltage limits, but this could of course be omited if needed.
###Code
# Customer requested connection size and type
size = 50 # in MW
connection_type = 'sgen' #'load' or 'sgen'
#Data for use case:
df.head()
###Output
_____no_output_____
###Markdown
The example data, shown above, needs to be decorated based on the customer request. For example, each position on the map is colored depending on whether the request is possible there, while the marker shape reflects the voltage level.
###Code
# Create map layer depending on customer request
icon = []
for idx,row in df.iterrows():
if row[connection_type] < size:
color= 'red'
else:
color= 'green'
#shape
if row['voltage']<50:
shape = 'circle-dot'
width = 4
else:
shape = 'rectangle-dot'
width = 6
icon.append(BeautifyIcon(border_color=color,border_width=width,icon_shape=shape, prefix='fa'))
df['icon'] = icon
map2 = folium.Map(
location=[57.772080,12.779226],
tiles='cartodbpositron',
zoom_start=9,
)
df.apply(lambda row:folium.Marker(location=[row["y"], row["x"]],tooltip=row[connection_type],icon=row['icon']).add_to(map2), axis=1)
map2
###Output
_____no_output_____
###Markdown
Use case 2 "As a connection request handler, I want to be able to quickly give preliminary answers to new connection requests."For this use case the customer request handler recives a request from a customer. The request will include connection type, connection size and geographical coordinates. From the coordinates the user want to find the nearest connection point and see if the connection size can be added to this point.
###Code
size = 45
poi = (57.172080,13.449226) # (lat, lon) of the point of interest for the requested connection
connection_type = 'load'
connection_request = pd.DataFrame(columns = ['x','y','request'], data = [[poi[0],poi[1],size]])
#Calculate distance from point of intrest (poi) to all possible connections points and sort on distance
df = distance_to_poi(poi, df)
df_sorted = df.sort_values(by=['distance to poi'])
# Create map layer depending on customer request
map = folium.Map(
location=[poi[0],poi[1]],
tiles='cartodbpositron',
zoom_start=9,
)
icon = []
for idx,row in df.iterrows():
#highlight closest 3 connections
if idx in df_sorted.head(3).index[:]:
color = 'red' if row[connection_type] < size else 'green'
#shape = 'circle-dot' if row['voltage']<50 else 'rectangle-dot'
icon.append(BeautifyIcon(border_color=color,border_width=2,icon_shape='doughnut', prefix='fa'))
popup = 'Distance '+ str(df_sorted.loc[idx,'distance to poi'])
folium.PolyLine([[row["y"], row["x"]],[connection_request.loc[0,'x'],connection_request.loc[0,'y']]], color="blue", weight=4, opacity=1,tooltip=popup).add_to(map)
continue
    if row[connection_type] < size:
        color = 'red'
    else:
        color = 'green'
#shape
if row['voltage']<50:
shape = 'circle-dot'
width = 4
else:
shape = 'rectangle-dot'
width = 4
icon.append(BeautifyIcon(border_color=color,border_width=width,icon_shape=shape, prefix='fa'))
df['icon'] = icon
df.apply(lambda row:folium.Marker(location=[row["y"], row["x"]],tooltip=row[connection_type],icon=row['icon']).add_to(map), axis=1)
connection_request.apply(lambda row:folium.Marker(location=[row["x"], row["y"]],tooltip='Request size '+str(row['request']),
color='blue',fill=True).add_to(map), axis=1)
map
###Output
_____no_output_____ |
fun_coding/basic concepts/BST(Binary_Search_Tree).ipynb | ###Markdown
A) When the code shown in the second figure is entered, make it produce the output shown in the first figure. B) Make sure the tree is re-linked correctly even when deleting a leaf node, a node with one child, or a node with two children. ex) deleting a leaf: bst.remove(9) bst.preorder_traverse(bst.get_root(), f)- `6 3 2 4 5 8 10 11` ex) deleting a node with one child: bst.remove(8) bst.preorder_traverse(bst.get_root(), f)- `6 3 2 4 5 10 9 11` ex) deleting a node with two children: bst.remove(6) bst.preorder_traverse(bst.get_root(), f)- `5 3 2 4 8 10 9 11` C) Add a feature that inserts a node, as in the third figure, and print the same results.
###Code
from binary_tree import *
class BST:
    #1. Same as the binary tree
def __init__(self):
self.root = None
    #2. Same as the binary tree
def get_root(self):
return self.root
    #3. Same as the binary tree
def preorder_traverse(self, cur, f):
if not cur:
return
f(cur.data)
self.preorder_traverse(cur.left, f)
self.preorder_traverse(cur.right, f)
def insert(self, data):
        # Create the node to insert and set its data
new_node = TreeNode()
new_node.data = data
cur = self.root
        # When there is no root node
if cur == None:
self.root = new_node
return
        # Find the position for the new node and insert it
        while True:
            # parent points to the parent of the node currently being visited
parent = cur
            # When the data to insert is smaller than the current node's data
if data < cur.data:
cur = cur.left
                # If the left subtree is None, place the new node there.
if not cur:
parent.left = new_node
return
            # When the data to insert is larger than the current node's data
else:
cur = cur.right
                # If the right subtree is None, place the new node there.
if not cur:
parent.right = new_node
return
def search(self, target):
cur = self.root
if not cur:
return None
while cur:
            # If the target data is found, return the node
if target == cur.data:
return cur
            # If the target data is smaller than the node's data,
            # move to the left child node
            elif target < cur.data:
                cur = cur.left
            # If the target data is larger than the node's data,
            # move to the right child node
elif target > cur.data:
cur = cur.right
        # If the while loop was exited,
        # the target data is not in the tree.
return cur
def __remove_recursion(self, cur, target):
        # Base case 1:
        # the target data is not in the tree.
        if cur == None:
            return None, None
        # If the target data is smaller than the node's data,
        # delete the node holding the target from the left child (recursive call)
elif target < cur.data:
cur.left, rem_node = \
self.__remove_recursion(cur.left, target)
        # If the target data is larger than the node's data,
        # delete the node holding the target from the right child (recursive call)
        elif target > cur.data:
            cur.right, rem_node = self.__remove_recursion(cur.right, target)
        # Base case 2:
        # target == cur.data
        else:
            #1. When the node is a leaf
if not cur.left and not cur.right:
rem_node = cur
cur = None
            #2-1. When there is one child: only a left child
elif not cur.right:
rem_node = cur
cur = cur.left
            #2-2. When there is one child: only a right child
elif not cur.left:
rem_node = cur
cur = cur.right
            #3. When there are two child nodes
            else:
                #4. Find the replacement node.
replace = cur.left
while replace.right:
replace = replace.right
                #5. Swap the data of the node to delete and the replacement node.
cur.data, replace.data = replace.data, cur.data
                #6. Delete the replacement node and get back the removed node.
cur.left, rem_node = self.__remove_recursion(cur.left, replace.data)
        # Since the root may change
        # when the deleted node is the root node,
        # return the current root after deletion
        return cur, rem_node
    def remove(self, target):
        # The root node may have changed,
        # so the root must be updated.
        self.root, removed_node = self.__remove_recursion(self.root, target)
        # Set the removed node's child links to None
removed_node.left = removed_node.right = None
return removed_node
    # The argument is a node, not a data value.
    # Except for the node-creation code,
    # the flow is exactly the same as
    # the insert() method.
    def insert_node(self, node):
        # No node-creation code here,
        # which avoids the overhead of creating a new node
        cur = self.root
        # Difference from insert():
        # new_node -> node
if cur == None:
self.root = node
return
while True:
parent = cur
            # Difference from insert():
            # data -> node.data
            if node.data < cur.data:
                cur = cur.left
                if not cur:
                    # Difference from insert():
                    # new_node -> node
parent.left = node
return
else:
cur = cur.right
if not cur:
                    # Difference from insert():
                    # new_node -> node
parent.right = node
return
if __name__ == "__main__":
bst = BST()
bst.insert(6)
bst.insert(3)
bst.insert(2)
bst.insert(4)
bst.insert(5)
bst.insert(8)
bst.insert(10)
bst.insert(9)
bst.insert(11)
f = lambda x: print(x, end = " ")
bst.preorder_traverse(bst.get_root(), f)
print()
bst.remove(6)
bst.preorder_traverse(bst.get_root(), f)
###Output
6 3 2 4 5 8 10 9 11
TreeNode of 6 is deleted
5 3 2 4 8 10 9 11 |
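###Markdown
A short optional check for part C (an added sketch, not part of the original notebook): insert_node attaches an already-built node without creating a new one inside the method. Assuming TreeNode behaves as used above, inserting a node holding 7 after the removal of 6 should give the preorder 5 3 2 4 8 7 10 9 11.
###Code
# Hypothetical usage sketch for insert_node (uncomment to run after the cell above):
# node = TreeNode()
# node.data = 7
# bst.insert_node(node)
# bst.preorder_traverse(bst.get_root(), f)  # expected: 5 3 2 4 8 7 10 9 11
###Output
_____no_output_____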
python/ElectronRadiation.ipynb | ###Markdown
Coupling Electron Tracing to Radiation Field Diagnostics QTNM Meeting 11/03/21 Import Non-Relativistic Electron Tracking and Radiation modules
```python
# Add location of Single Electron Radiation
sys.path.insert(1, '/Users/tomgoffrey/dev/QTNM/SingleElectronRadiationQTNM/')
import SingleElectronRadiation as SER
from ford1991 import Ford1991Solver
from utils import error_plot, calculate_omega
```
###Code
import sys

# Add location of Single Electron Radiation
sys.path.insert(1, '/Users/tomgoffrey/dev/QTNM/SingleElectronRadiationQTNM/')
import SingleElectronRadiation as SER
from ford1991 import Ford1991Solver
from utils import error_plot, calculate_omega
%run config.py
print(Ford1991Solver.analytic_solution_1d.__doc__)
## Test values
tau = 0.015
B = 3.0
q = -1.0
m = 1.0
# Initialise particle such that x = gyro-radius
x0 = np.array([m/np.abs(q*B), 0.0]) # Initial velocity = 1
solver = Ford1991Solver(charge=q, mass=m, b_field=B, tau=tau)
# Solve equation numerically
res = solver.solve_1d(5, x0=x0)
# Get analytic solution
x_soln, y_soln, vx_soln, vy_soln = solver.analytic_solution_1d(res.t, x0=x0)
# Plot result
error_plot(res.y[0], res.y[1], x_exact=x_soln, y_exact=y_soln,
title='Electron Trajectory (Test values)', xlabel='x', ylabel='y')
plt.gcf().set_size_inches(3,3)
ax = plt.gca()
ax.set_yticks(plt.xticks()[0])
ax.set_ylim(ax.get_xlim())
ax.set_aspect(1)
plt.tight_layout()
# Set-up detector, centred on (0, 0, 0.05)
x1 = 2.5e-2
x0 = -x1
y1 = 1e-2
y0 = -y1
z = 0.05
npixels = 20
# Detector Position Array
DPA = SER.CreateDetectorArray(npixels,x0,x1,y0,y1,z)
# Set-up using ~QTNM values
B = 1
ke = 18.6
gamma_rel = 1 + 1e3 * ke * qe / me / c**2
beta_rel = np.sqrt(1 - 1 / gamma_rel**2)
vel = beta_rel * c # Initial velocity
# Calculate omega, with and without relativistic correction
omega = calculate_omega(B)
omega_rel = calculate_omega(B, energy=18.6)
print('omega = %.4E' % omega)
print('omega relativistic = %.4E' % omega_rel)
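# Optional cross-check (an added sketch, not in the original notebook): if
# calculate_omega returns the cyclotron frequency, its magnitude should match
# |q|B/m, and the relativistic value should be smaller by a factor of gamma
# (qe and me are assumed to come from config.py).
print('|omega| check = %.4E' % (np.abs(qe) * B / me))
print('|omega| relativistic check = %.4E' % (np.abs(qe) * B / (gamma_rel * me)))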
solver = Ford1991Solver(b_field=B, tau=0.0)
# Get analytic solution at midpoint of rotation
time = np.pi / np.abs(omega)
x_soln, y_soln, vx_soln, vy_soln = solver.analytic_solution_1d(time, v0=vel)
x = [x_soln, y_soln, vx_soln, vy_soln, 0.0]
# Also need acceleration
_, _, accx, accy, _ = solver.rhs_1d(time, x)
# Daniel's code assumes negative x-direction B-field, so transform accordingly
EField = SER.CalcNonRelEFieldArray(DPA, time, [0, y_soln, x_soln], [0, accy, accx], npixels)
Emag = np.linalg.norm(EField,axis=2)
p = plt.pcolormesh(Emag*1e6)
cbar = plt.colorbar(p)
cbar.set_label(r'$|\vec{E}|\;\mathrm{\mu V m^{-1}}$', rotation=90, fontsize=12)
plt.tight_layout()
# Poynting flux magnitude - Does this need multiplying by pixel area?
Poynting = SER.CalcPoyntingVectorMagnitude(Emag)
p = plt.pcolormesh(Poynting*1e15)
plt.colorbar(p)
plt.tight_layout()
## Actual values
## 6.26e-24
tau = qe * qe / (6.0 * np.pi * ep0 * c**3 * me)
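# Added sanity check: for an electron this radiation-reaction timescale should be
# roughly 6.26e-24 s, matching the comment above.
print('tau = %.3E s' % tau)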
solver = Ford1991Solver(b_field=B, tau=tau)
x0 = np.array([me*vel/np.abs(qe*B), 0.0])
# Solve equation numerically
res = solver.solve_1d(1, x0=x0, v0=vel)
# Get analytic solution
x_soln, y_soln, vx_soln, vy_soln = solver.analytic_solution_1d(res.t, x0=x0, v0=vel)
# Plot result
error_plot(res.y[0] * 1000, res.y[1] * 1000, x_exact=x_soln * 1000, y_exact=y_soln * 1000,
title='Electron Trajectory', xlabel='x (mm)', ylabel='y (mm)')
plt.gcf().set_size_inches(3.25,3.25)
ax = plt.gca()
#ax.set_aspect(1)
plt.tight_layout()
# Solve equation numerically for 100 rotations
n_rot = 100
res = solver.solve_1d(n_rot, x0=x0, v0=vel)
# Get analytic solution
x_soln, y_soln, vx_soln, vy_soln = solver.analytic_solution_1d(res.t, x0=x0, v0=vel)
# Kinetic energy of electron as function of time
ke = 0.5 * me * (res.y[2]**2 + res.y[3]**2)
# Analytic solution
mu = omega**2 * tau
taue = - 2.0 * mu / (1.0 + tau**2)
error_plot(res.t * np.abs(omega), 1.0 - ke / ke[0], y_exact=1.0 - np.exp(taue * res.t),
title='Fractional Electron Energy Loss', xlabel=r'$\omega_c t$',
ylabel=r'$\frac{\Delta T}{T_0}$')
plt.gcf().set_size_inches(5,3)
plt.title('Fractional Electron Energy Loss', y=1.08, fontsize=14)
plt.tight_layout()
rad = np.sqrt(res.y[1]**2 + res.y[0]**2)
rad_exact = np.sqrt(y_soln**2 + x_soln**2)
error_plot(res.t * np.abs(omega), rad/rad[0] - 1.0, y_exact=rad_exact/rad_exact[0] - 1.0,
xlabel='$\omega_c t$', ylabel=r'$\frac{\Delta r}{r(t=0)}$')
plt.title('Fractional Change in Radius', y=1.08, fontsize=14)
plt.tight_layout()
print(x_soln[0], y_soln[0])
frames_per_rot = 25
frames = n_rot * frames_per_rot
cadence = int(len(res.t) / frames)
# Only animate first 5 orbits
frames = int(frames / 20)
i = 0
time = res.t[i]
Epos = [0.0, res.y[1,i], -res.y[0,i]]
_, _, accx, accy, _ = solver.rhs_1d(time, res.y[:,i])
Eacc = [0.0, accy, accx]
EField = SER.CalcNonRelEFieldArray(DPA, time, Epos, Eacc, npixels)
Emag = np.linalg.norm(EField,axis=2)
Ex = EField[:,:,0]
Ey = EField[:,:,1]
Ez = EField[:,:,2]
fig, ax = plt.subplots(1,1)
Q = ax.quiver(DPA[:,:,0] * 100, DPA[:,:,1] * 100, Ex, Ey)
plt.xlabel('x(cm)')
plt.ylabel('y(cm)')
plt.title('Electric Field')
plt.tight_layout()
def update_plot(i, Q):
j = i * cadence
time = res.t[j]
Epos = [0.0, res.y[1,j], -res.y[0,j]]
_, _, accx, accy, _ = solver.rhs_1d(time, res.y[:,j])
Eacc = [0.0, accy, accx]
EField = SER.CalcNonRelEFieldArray(DPA, time, Epos, Eacc, npixels)
Ex = EField[:,:,0]
Ey = EField[:,:,1]
Q.set_UVC(Ex, Ey)
return Q,
anim = animation.FuncAnimation(fig, update_plot, fargs=(Q,), frames=frames,
interval=200, blit=False)
fig.tight_layout()
plt.show()
plt.rcParams['animation.embed_limit'] = 2**128
HTML(anim.to_jshtml())
def calc_emag(j):
time = res.t[j]
Epos = [0.0, res.y[1,j], -res.y[0,j]]
_, _, accx, accy, _ = solver.rhs_1d(time, res.y[:,j])
Eacc = [0.0, accy, accx]
EField = SER.CalcNonRelEFieldArray(DPA, time, Epos, Eacc, npixels)
Emag = np.linalg.norm(EField,axis=2)
return Emag
Emin = 1e100
Emax = 0.0
for i in range(frames):
Emag = calc_emag(i*cadence)
Emin = min(np.min(Emag), Emin)
Emax = max(np.max(Emag), Emax)
Emag = calc_emag(0)
fig, ax = plt.subplots(1,1)
data = plt.pcolormesh(DPA[:,:,0]*100, DPA[:,:,1]*100, Emag,
vmin=Emin, vmax=Emax, shading='gouraud')
def fmt(x, pos):
a, b = '{:1.0e}'.format(x).split('e')
b = int(b)
return r'${} \times 10^{{{}}}$'.format(a, b)
cb = plt.colorbar(data, format=ticker.FuncFormatter(fmt))
cb.ax.tick_params(labelsize=10)
plt.xlabel('x(cm)')
plt.ylabel('y(cm)')
plt.title('Electric Field Magnitude', y=1.08)
plt.tight_layout()
def update_plot(i, data):
j = i * cadence
Emag = calc_emag(j)
data.set_array(Emag)
return data,
anim = animation.FuncAnimation(fig, update_plot, fargs=(data,), frames=frames,
interval=200, blit=False)
fig.tight_layout()
plt.show()
HTML(anim.to_jshtml())
###Output
_____no_output_____ |
generation.ipynb | ###Markdown
Inference on pretrained models.[Install Jupyter](http://jupyter.org/install) to open this file.You can use this to run inference on pretrained models saved to Google Cloud, or modify it to do inference on your own models.
###Code
import io
import os
import numpy as np
import scipy
import scipy.misc
import sys
import tensorflow as tf
import skimage
import math
import matplotlib.pyplot as plt
from io import StringIO
from PIL import Image
from tensorflow.python.platform import gfile
import prediction_input
import prediction_model
def save_png(image_array, path):
"""Saves an image to disk.
Args:
image_array: numpy array of shape [image_size, image_size, 3].
path: str, output file.
"""
buf = io.BytesIO()
scipy.misc.imsave(buf, image_array, format='png')
buf.seek(0)
f = tf.gfile.GFile(path, 'w')
f.write(buf.getvalue())
f.close()
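# Note (added): scipy.misc.imsave has been removed from newer SciPy releases, so
# save_png above assumes an older SciPy; with a modern environment it would need
# e.g. imageio.imwrite or PIL instead.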
COLOR_CHAN = 3
IMG_WIDTH = 64
IMG_HEIGHT = 64
IMAGE_FEATURE_NAME = "images"
JOINT_POSE_FEATURE_NAME = "joint_poses"
ACTION_FEATURE_NAME = "actions"
def get_input_fn_queue_determ(pattern, batch_size, flags):
def input_fn(params=None):
"""Input function using queues for GPU, always returns examples in the same order."""
del params
filenames = gfile.Glob(os.path.join(flags.data_dir, pattern))
if not filenames:
raise RuntimeError('No data files found.')
filename_queue = tf.train.string_input_producer(filenames, shuffle=False)
reader = tf.TFRecordReader()
_, val = reader.read(filename_queue)
serialized_input = tf.reshape(val, shape=[1])
image_seq = None
for i in range(0, flags.sequence_length, flags.skip_num):
image_name = 'image_' + str(i)
if flags.dataset_type == 'robot':
pose_name = 'state_' + str(i)
action_name = 'action_' + str(i)
joint_pos_name = 'joint_positions_' + str(i)
features = {
pose_name:
tf.FixedLenFeature([flags.pose_dim], tf.float32),
image_name:
tf.FixedLenFeature([1], tf.string),
action_name:
tf.FixedLenFeature([flags.pose_dim], tf.float32),
joint_pos_name:
tf.FixedLenFeature([flags.joint_pos_dim], tf.float32)
}
else:
features = {
image_name: tf.FixedLenFeature([1], tf.string),
}
parsed_input = tf.parse_example(serialized_input, features)
# Process image
image_buffer = tf.reshape(parsed_input[image_name], shape=[])
image = tf.image.decode_jpeg(image_buffer, channels=COLOR_CHAN)
image = tf.image.resize_images(
image, (IMG_HEIGHT, IMG_WIDTH), method=tf.image.ResizeMethod.BICUBIC)
image = tf.cast(tf.expand_dims(image, 0), tf.float32) / 255.0
if flags.dataset_type == 'robot':
pose = tf.reshape(parsed_input[pose_name], shape=[flags.pose_dim])
pose = tf.expand_dims(pose, 0)
action = tf.reshape(parsed_input[action_name], shape=[flags.pose_dim])
action = tf.expand_dims(action, 0)
joint_pos = tf.reshape(
parsed_input[joint_pos_name], shape=[flags.joint_pos_dim])
joint_pos = tf.expand_dims(joint_pos, 0)
else:
pose = tf.zeros([1, flags.pose_dim])
action = tf.zeros([1, flags.pose_dim])
joint_pos = tf.zeros([1, flags.joint_pos_dim])
if i == 0:
image_seq = image
action_seq, pose_seq, joint_pos_seq = action, pose, joint_pos
else:
image_seq = tf.concat([image_seq, image], 0)
action_seq = tf.concat([action_seq, action], 0)
pose_seq = tf.concat([pose_seq, pose], 0)
joint_pos_seq = tf.concat([joint_pos_seq, joint_pos], 0)
[images, actions, poses, joint_pos] = tf.train.batch(
[image_seq, action_seq, pose_seq, joint_pos_seq],
batch_size,
enqueue_many=False,
capacity=100 * batch_size)
print(flags.sequence_length)
joint_poses = tf.concat([joint_pos, poses], 2)
output_features = {
IMAGE_FEATURE_NAME: images,
JOINT_POSE_FEATURE_NAME: joint_poses,
ACTION_FEATURE_NAME: actions
}
return output_features, None
return input_fn
def get_flags(dataset_type):
import prediction_train
FLAGS = prediction_train.FLAGS
try:
tf.app.flags.DEFINE_string('f', '', 'kernel')
except:
pass
FLAGS.is_training = False
FLAGS.use_tpu = False
FLAGS.use_image_summary = False
FLAGS.dataset_type = dataset_type
FLAGS.use_legacy_vars = True
if dataset_type == "robot":
FLAGS.data_dir="<Your download path>"
FLAGS.sequence_length = 20
FLAGS.skip_num = 1
FLAGS.context_frames = 2
FLAGS.use_image_summary = False
else:
FLAGS.data_dir="gs://unsupervised-hierarch-video/data/"
FLAGS.sequence_length = 256
FLAGS.skip_num = 2
FLAGS.context_frames = 5
FLAGS.use_image_summary = False
FLAGS.use_legacy_vars = False
return FLAGS
dataset_type = "human"
def get_images(model_dir, flags, num_to_eval=100, pattern="humans-test"):
run_config = tf.contrib.learn.RunConfig(
      model_dir=model_dir,
)
estimator = tf.estimator.Estimator(
model_fn=prediction_model.make_model_fn(flags), config=run_config)
predictions = estimator.predict(
input_fn=get_input_fn_queue_determ(pattern, 8, flags))
num_evals = 0
van_out_psnr_all = []
van_on_enc_psnr_all = []
print(predictions)
all_runs = []
for prediction in predictions:
all_rows = {}
gt_images = prediction["gt_images"] #[1:]
#van_on_enc = prediction["van_on_enc_all"]
mask_out = prediction["mask_out_all"]
van_out = prediction["van_out_all"]
gt_images_row = []
van_out_row = []
mask_out_row = []
for frame_i in range(len(van_out)):
van_out_row.append(van_out[frame_i])
mask_rgb = np.tile(mask_out[frame_i], [1, 1, 3])
mask_out_row.append(mask_rgb)
for frame_i in range(len(gt_images)):
gt_images_row.append(gt_images[frame_i])
all_rows["gt_images"] = gt_images_row
all_rows["van_out"]= van_out_row
all_rows["mask_out"] = mask_out_row
#all_rows["van_on_enc"]= van_on_enc
all_runs.append(all_rows)
num_evals += 1
if num_evals >= num_to_eval:
break
del predictions
return all_runs
# Change this to your path to save the images.
base_dir = "/mnt/brain6/scratch/rubville/projects/unsupervised-hierarch-video-prediction/gen_frames/"
def save_imgs(images, folder, key="van_out"):
for run_num in range(len(images)):
sys.stdout.flush()
frame_nums = range(len(images[run_num][key]))
sys.stdout.flush()
dir_path = os.path.join(folder, str(run_num))
if not os.path.exists(dir_path):
os.makedirs(dir_path)
for frame_i in frame_nums:
frame = images[run_num][key][frame_i]
#frame = scipy.misc.imresize(frame, 4.0)
save_name = frame_i
if key == "gt_images":
# Make the number of the ground truth frames line up with the predicted frames.
save_name = frame_i - 1
save_png(frame, os.path.join(dir_path, "frame"+str(save_name)+'.png'))
# Run to save results from EPVA GAN
# This code will take a while to run since it has to construct a large graph.
# Decrease flags.sequence_length for a faster runtime.
flags = get_flags(dataset_type)
flags.enc_pred_use_l2norm = True
flags.enc_size = 64
flags.pred_noise_std = 1.0
set_model_dir = "gs://unsupervised-hierarch-video/pretrained_models/epva_wgan_human/"
# flags.sequence_length = 64 # Comment out to reproduce the results in the paper.
all_runs_epva_wgan = get_images(set_model_dir, flags, num_to_eval=1000)
save_imgs(all_runs_epva_wgan, os.path.join(base_dir, "human_epva_wgan_frames"), key="van_out")
save_imgs(all_runs_epva_wgan, os.path.join(base_dir, "human_epva_wgan_masks"), key="mask_out")
# Also saves the ground truth images.
save_imgs(all_runs_epva_wgan, os.path.join(base_dir, "human_gt"), key="gt_images")
all_runs_epva_wgan = None
del all_runs_epva_wgan
# Run to save results from EPVA
flags = get_flags(dataset_type)
flags.enc_pred_use_l2norm = False
flags.enc_size = 64
flags.pred_noise_std = 0
flags.sequence_length = 64 # Comment out to reproduce the results in the paper.
set_model_dir = "gs://unsupervised-hierarch-video/pretrained_models/epva_human/"
all_runs_epva = get_images(set_model_dir, flags, num_to_eval=1000)
save_imgs(all_runs_epva, os.path.join(base_dir, "human_epva"), key="van_out")
all_runs_epva = None
del all_runs_epva
# Run to save results from E2E
flags = get_flags(dataset_type)
flags.enc_pred_use_l2norm = False
flags.enc_size = 32
flags.use_legacy_vars = True
flags.sequence_length = 64 # Comment out to reproduce the results in the paper.
set_model_dir = "gs://unsupervised-hierarch-video/pretrained_models/e2e_human/"
all_runs_e2e = get_images(set_model_dir, flags, num_to_eval=1000)
save_imgs(all_runs_e2e, os.path.join(base_dir, "human_e2e"), key="van_out")
all_runs_e2e = None
del all_runs_e2e
###Output
_____no_output_____
###Markdown
This notebook tests the generation of the CLEVR with masks dataset. Image generation (example)
###Code
%cd image_generation
!./blender/blender --background \
--python render_images.py -- \
--num_images 10 --use_gpu 1 --min_objects 2 --max_objects 6
%cd ..
###Output
_____no_output_____
###Markdown
Single CLEVR_scenes.json generation
###Code
%cd image_generation
!python collect_scenes.py --date "$(date)" \
--input_dir /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/scenes \
--output /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/partial/CLEVR_scenes.json
%cd ..
###Output
_____no_output_____
###Markdown
Question template generation
###Code
!pip install pyjson5
%cd relation_generator
!python generate_relations.py
%cd ..
###Output
_____no_output_____
###Markdown
Question generation
###Code
%cd question_generation/
!python generate_questions.py \
--input_scene_file /dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/CLEVR_scenes.json \
--output_questions_file ./questions.json \
--template_dir babyarc_easy --max-num-objects 6
%cd ..
###Output
_____no_output_____
###Markdown
Question analysis
###Code
from collections import defaultdict
import json
from typing import List
from relation_generator.generate_relations import RELATIONS
def get_unique_task_string(program: List[str]):
"""
Parses the program for a given question and returns a unique string that identifies the
babyARC task that it embodies.
This function is somewhat hacky in that it doesn't deal with the AST directly, but it
works for the generated babyARC template programs.
"""
inputs = []
object_str = []
for node in program:
# Generate a new object str every time we see a new "scene" (which implies
# a new object)
if node["type"] == "scene":
if len(object_str) != 0:
inputs.append(",".join(object_str))
object_str = []
continue
# If we're not at a scene, then we're in the middle of an object
if node["type"].startswith("filter_"):
# This node filters some property of the input. Let's consider it.
object_str.append(node["type"][7:] + "=" + node["value_inputs"][0])
inputs.append(",".join(object_str))
relations = sorted([node["type"] for node in program if node["type"] in RELATIONS])
return "+".join(relations) + "-" + ";".join(inputs)
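# Small illustration (hypothetical program fragment, not taken from the dataset),
# assuming "same_size" is one of the entries in RELATIONS:
_example_program = [
    {"type": "scene", "value_inputs": []},
    {"type": "filter_color", "value_inputs": ["red"]},
    {"type": "scene", "value_inputs": []},
    {"type": "filter_shape", "value_inputs": ["cube"]},
    {"type": "same_size", "value_inputs": []},
]
# Should print "same_size-color=red;shape=cube"
print(get_unique_task_string(_example_program))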
# Load the question data
file = "question_generation/questions.json"
with open(file) as f:
data = json.load(f)
question_list = data["questions"]
observed_question_types = dict()
# Count the number of times each question type occurs
for question in question_list:
template_filename = question["template_filename"]
question_family_index = question["question_family_index"]
program = question["program"]
image = question["image"]
task_str = get_unique_task_string(program)
if task_str not in observed_question_types:
observed_question_types[task_str] = {"count": 0, "questions": [], "images": []}
observed_question_types[task_str]["count"] += 1
observed_question_types[task_str]["questions"].append(question)
observed_question_types[task_str]["images"].append(image)
for task_str, data in observed_question_types.items():
print("{} - {}".format(task_str, data["count"]))
import pandas as pd
df = pd.DataFrame.from_dict(observed_question_types, orient='index')
df.sort_values(by=["count"], ascending=False).images[0]
observed_question_types["same_size-size=large,color=purple,material=metal"]
###Output
_____no_output_____
###Markdown
Dataset
###Code
%load_ext autoreload
%autoreload 2
from dataset import ClevrRelationDataset
dataset = ClevrRelationDataset(image_dir="/dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/images",
question_dir="/dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000",
output_type="full-color")
dataset.save("/dfs/user/tailin/.results/CLEVR_relation/relations-dataset-2021-08-18-608-tasks.pt")
dataset = ClevrRelationDataset(image_dir="/dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/images",
question_dir="./question_generation/")
# Visualize the dataset
import matplotlib.pyplot as plt
el = dataset[-3]
# Find a compound task
# for el in dataset:
# if "+" in el["task_str"]:
# break
# else:
# assert False
print(el["task_str"])
# print(el["questions"][0]["question"])
plt.figure(figsize=(15,50)) # specifying the overall grid size
for i in range(min(len(el["inputs"]), 5)):
plt.subplot(len(el["inputs"]),2, 2 * i + 1)
plt.imshow(el["inputs"][i]["image"].permute(1, 2, 0))
plt.subplot(len(el["inputs"]),2, 2 * i + 2)
plt.imshow(el["outputs"][i].permute(1, 2, 0))
plt.show()
from PIL import Image
plt.imshow(Image.open("/dfs/user/tailin/.results/CLEVR_relation/test1/0/images/CLEVR_new_000000.png"))
###Output
_____no_output_____
###Markdown
CLEVR relation "easy" dataset Partial dataset Question generation
###Code
%cd question_generation/
!python generate_questions.py \
--input_scene_file /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/partial/CLEVR_scenes.json \
--output_questions_file /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-100000/partial/questions.json \
--template_dir babyarc_easy --max-num-objects 6
%cd ..
dataset = ClevrRelationDataset(image_dir="/dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/images",
question_dir="/dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/partial",
output_type="full-color")
###Output
_____no_output_____
###Markdown
Full dataset Single CLEVR_scenes.json generation
###Code
%cd image_generation
!python collect_scenes.py --date "$(date)" \
--input_dir /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/scenes \
--output /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/full/CLEVR_scenes.json
%cd ..
!mkdir /dfs/user/tailin/.results/CLEVR_relation/clevr-relation-easy-mpi-0-100000/full
###Output
_____no_output_____
###Markdown
Generate questions
###Code
%cd question_generation/
!python generate_questions.py \
--input_scene_file /dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/CLEVR_scenes.json \
--output_questions_file /dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/easy-questions/questions.json \
--template_dir babyarc_easy --max-num-objects 6
%cd ..
###Output
_____no_output_____
###Markdown
Test dataset
###Code
dataset = ClevrRelationDataset(image_dir="/dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/images",
question_dir="/dfs/user/tailin/.results/CLEVR_relation/mpi-0-10000/easy-questions",
output_type="mask-only", is_easy_dataset=True)
dataset.save("/dfs/user/tailin/.results/CLEVR_relation/relations-dataset-easy-2021-09-16-461-tasks.pt")
len(dataset)
###Output
_____no_output_____
###Markdown
Helper functions
###Code
from dataset import create_full_dataset, create_easy_dataset
train_set, val_set, test_set = create_full_dataset()
assert len(train_set) + len(val_set) + len(test_set) == 608
train_set, val_set, test_set = create_easy_dataset()
assert len(train_set) + len(val_set) + len(test_set) == 461
###Output
Loaded 461 tasks.
###Markdown
Generational changes among the religious and non-religiousAllen DowneyCopyright 2020[MIT License](https://en.wikipedia.org/wiki/MIT_License) IntroductionIn this notebook I use data from the GSS to explore differences in beliefs and attitudes between Christians and Nones (people with no religious affiliation) and look at generational changes in those differences. SetupIf you are running this notebook in Colab, the following cell downloads the `empiricaldist` library.If you are running in another environment, you will need to install it yourself.
###Code
# If we're running in Colab, install empiricaldist
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
###Output
_____no_output_____
###Markdown
The following cell downloads `utils.py`, which contains functions I use in many data science projects.
###Code
# Load some utility code
import os
file = 'utils.py'
if not os.path.exists(file):
!wget https://github.com/AllenDowney/PoliticalAlignmentCaseStudy/raw/master/utils.py
###Output
_____no_output_____
###Markdown
If everything we need is installed, the following cell should run without error.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from empiricaldist import Cdf
from utils import decorate
from utils import underride
from utils import values
###Output
_____no_output_____
###Markdown
Loading the dataThe following cell downloads an HDF5 file that contains the data we need.The HDF file is created by a notebook called `01_clean.ipynb`, which you should find in the same directory as this notebook, if you want to see the details.
###Code
# Load the data file
import os
datafile = 'gss_eda.hdf5'
if not os.path.exists(datafile):
!wget https://github.com/AllenDowney/PoliticalAlignmentCaseStudy/raw/master/gss_eda.hdf5
###Output
_____no_output_____
###Markdown
Now we can read the data.
###Code
gss = pd.read_hdf(datafile, 'gss')
gss.shape
values(gss['year'])
###Output
_____no_output_____
###Markdown
For modeling purposes, I'll use data from 1998 to 2018 and respondents born during or after 1940.These choices are a compromise between using more data, so the results are more likely to be statistically valid, and using recent data, so the models are not too influenced by irrelevant history.I classify as "Christian" anyone who reports that their religious affiliation is Catholic, Protestant, or Christian.
###Code
def prepare_df(df):
# compute quadratic and cubic terms for the models
df['y2'] = (df['year']-2004)**2
df['c2'] = (df['cohort']-1970)**2
df['c3'] = (df['cohort']-1970)**3
# classify religious affiliation
df['christian'] = df['relig'].isin([1,2,11])
df['none'] = (df['relig'] == 4)
# select recent years and generations
recent = (df['year'] >= 1998)
notold = (df['cohort'] >= 1940)
return df[recent & notold]
###Output
_____no_output_____
###Markdown
For exploration, I'll use this subset of the data without resampling, which means it oversamples some groups.
###Code
df = prepare_df(gss)
df.shape
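# Added sanity check: the affiliation flags defined in prepare_df are mutually
# exclusive by construction (relig == 4 cannot also be in {1, 2, 11}).
assert not (df['christian'] & df['none']).any()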
###Output
_____no_output_____
###Markdown
For inference, I'll use 101 resamplings of the data, weighted to be representative of the adult population in the U.S.
###Code
from utils import resample_rows_weighted
dfs = None
dfs = [resample_rows_weighted(df, df['wtssall'])
for i in range(101)]
###Output
_____no_output_____
###Markdown
Exploring the groupsTo see what the groups look like, I select Christians and Nones.
###Code
christian = df[df['christian']]
christian.shape
none = df[df['none']]
none.shape
###Output
_____no_output_____
###Markdown
Here's what the distribution of birth years looks like for the two groups:
###Code
Cdf.from_seq(christian['cohort']).plot(label='Christian')
Cdf.from_seq(none['cohort']).plot(label='None')
decorate(xlabel='Birth year',
ylabel='CDF',
title='Distribution of birth year')
plt.legend();
###Output
_____no_output_____
###Markdown
The Christians in this dataset come from earlier cohorts, so we have to control for that.
###Code
Cdf.from_seq(christian['year']).plot(label='Christian')
Cdf.from_seq(none['year']).plot(label='None')
decorate(xlabel='Interview year',
ylabel='CDF',
title='Distribution of interview year')
plt.legend();
###Output
_____no_output_____
###Markdown
Also, the fraction of Christians was declining over the observation period, so the Christians in the dataset were more likely to be observed earlier. We'll have to control for that, too. Selecting variablesThe following is a list of the variables we'll explore.For each variable, I identify a response or list of responses to consider, and provide text that describes what that response means.I do my best to paraphrase the wording of the question accurately, but if you have any questions, you can [consult the documentation](https://gssdataexplorer.norc.org/projects/52787/variables/data_cart).
###Code
variables = [
# allow to speak
('spkmil', 1, 'Anti-democratic\nmilitarist'),
('spkmslm', 1, 'Anti-U.S.\nMuslim clergyman'),
('spkath', 1, 'Opponent of churches\nand religion'),
('spkcom', 1, 'Communist'),
('spkrac', 1, 'Racist'),
('spkhomo', 1, 'Homosexual'),
# allow to teach at a college or university
('colmil', 4, 'Anti-democratic\nmilitarist'),
('colmslm', 4, 'Anti-U.S.\nMuslim clergyman'),
('colath', 4, 'Opponent of churches\nand religion'),
('colcom', 5, 'Communist'), # not fired
('colrac', 4, 'Racist'),
('colhomo', 4, 'Homosexual'),
# do not remove from library
('libmil', 2, 'Anti-democratic\nmilitarist'),
('libmslm', 2, 'Anti-U.S.\nMuslim clergyman'),
('libath', 2, 'Opponent of churches\nand religion'),
('libcom', 2, 'Communist'),
('librac', 2, 'Racist'),
('libhomo', 2, 'Homosexual'),
# items related to sex
('homosex', 1, 'Same-sex relations\nalways wrong'),
('premarsx', 1, 'Premarital sex\nalways wrong'),
('xmarsex', 1, 'Extramarital sex\nalways wrong'),
('teensex', 1, 'Teen sex\nalways wrong'),
('sexeduc', 1, 'Favor sex education\nin schools'),
# items related to abortion
('abany', 1, 'A pregnant woman wants it\nfor any reason'),
('abdefect', 1, 'There is a strong chance\nof a serious defect'),
('abnomore', 1, 'She is married and\ndoes not want more children'),
('abhlth', 1, 'Her health is seriously endangered'),
('abpoor', 1, 'She has very low income and\ncannot afford more children'),
('abrape', 1, 'She became pregnant\nas a result of rape'),
('absingle', 1, 'She is not married and\ndoes not want to marry the man'),
# other items related to public policy
('cappun', 2, 'Oppose death penalty\nfor murder'),
('gunlaw', 1, 'Favor permit to buy gun'),
('grass', 1, 'Marijuana should be\nmade legal'),
('divlaw', 1, 'Divorce should be\neasier to obtain'),
('prayer', 1, 'Approve SCOTUS ruling\nprohibiting school prayer'),
('letdie1', 1, 'Allow doctor to end life\nof terminal patient'),
('racopen', 2, 'Favor law barring\nhousing discrimination'),
('pornlaw', [2,3], 'Pornography should be legal'),
('affrmact', [1,2], 'Favor affirmative action\nfor blacks'),
# items related to spending
('natroad', 1, 'Highways and bridges'),
('natsoc', 1, 'Social Security'),
('natmass', 1, 'Mass transportation'),
('natpark', 1, 'Parks and recreation'),
('natchld', 1, 'Assistance for child care'),
('natsci', 1, 'Supporting scientific research'),
('natenrgy', 1, 'Developing alternative\nenergy sources'),
('natspac', 1, 'Space exploration'),
('natenvir', 1, 'Improving and protecting\nthe environment'),
('natheal', 1, "Improving and protecting\nthe nation's health"),
('natcity', 1, 'Solving the problems of\nthe big cities'),
('natcrime', 1, 'Halting the rising\ncrime rate'),
('natdrug', 1, 'Dealing with drug addiction'),
('nateduc', 1, "Improving the nation's\neducation system"),
('natrace', 1, 'Improving the conditions\nof Blacks'),
('natarms', 1, 'The military, armaments\nand defense'),
('nataid', 1, 'Foreign aid'),
('natfare', 1, 'Welfare'),
# confidence in institutions
    ('confinan', 1, 'Banks and financial institutions'),
('conbus', 1, 'Major companies'),
('conclerg', 1, 'Organized religion'),
('coneduc', 1, 'Education'),
('confed', 1, 'Executive branch of\nthe federal government'),
('conlabor', 1, 'Organized labor'),
('conpress', 1, 'Press'),
('conmedic', 1, 'Medicine'),
('contv', 1, 'Television'),
('conjudge', 1, 'U.S. Supreme Court'),
('consci', 1, 'Scientific community'),
('conlegis', 1, 'Congress'),
('conarmy', 1, 'Military'),
# religious beliefs
('god', 6, 'Know God exists with no doubts'),
('reborn', 1, 'Had a born again experience'),
('savesoul', 1, 'Tried to convince others\nto accept Jesus'),
('bible', 1, 'Bible is the actual word of God\nto be taken literally'),
('postlife', 1, 'Believe there is a life after death'),
('relpersn', [1,2], 'Considers self very or\nmoderately religious'),
('sprtprsn', [1,2], 'Considers self very or\nmoderately spiritual'),
('relexp', 1, 'Had a religious or spiritual\nexperience that changed life'),
('relactiv', [8,9,10,11], 'Church activities weekly or more\nnot including services'),
('pray', [1,2,3,4], 'Prayer weekly or more often'),
('attend', [7,8], 'Attend religious services\nweekly or more often'),
# outlook on people
('helpful', 1, 'People try to be helpful'),
('fair', 2, 'People try to be fair'),
('trust', 1, 'People can be trusted'),
('fear', 2, 'Not afraid to walk alone at night'),
# miscellaneous
('spanking', [1,2], 'Spanking sometimes\nnecessary'),
# gender roles and work
('fepol', 1, 'Agree men are emotionally\nmore suited for politics'),
('fejobaff', [1,2], 'Favor preferential hiring\nand promotion of women'),
('fehire', [1,2], 'Favor special effort to hire\nand promote qualified women'),
('fechld', [1,2], 'Working mother can have warm\nsecure relationship with child'),
('fepresch', [1,2], 'Preschool child is likely\nto suffer if mother works'),
('fefam', [1,2], 'Much better if man achieves and\nwoman takes care of family'),
('discaffm', [1,2], 'Likely that a less qualified\nwoman gets job or promotion'),
('discaffw', [1,2], 'Likely that a less qualified\nman gets job or promotion'),
('meovrwrk', [1,2], 'Family life suffers because men\nconcentrate too much on work'),
]
###Output
_____no_output_____
###Markdown
The following cell makes maps from variable names to values and labels.
###Code
label_map = {}
value_map = {}
for varname, value, label in variables:
value_map[varname] = value
label_map[varname] = label
###Output
_____no_output_____
###Markdown
Period and cohort effectsFirst I select the subset of one group where the dependent variable is valid.
###Code
varname = 'natmass'
value = value_map[varname]
print(varname, value)
group = christian.copy()
valid = group.dropna(subset=[varname]).copy()
valid.shape
###Output
_____no_output_____
###Markdown
Here's the fraction of this variable that has the selected value.
###Code
value = np.atleast_1d(value)
(valid[varname].isin(value)).mean()
###Output
_____no_output_____
###Markdown
For logistic regression, we need the dependent variable to be 0 or 1.
###Code
valid['y'] = (valid[varname].isin(value)).astype(int)
valid['y'].value_counts()
###Output
_____no_output_____
###Markdown
Here's what the changes look like over time.
###Code
from utils import plot_series_lowess
by_year = valid.groupby('year')['y'].mean()
plot_series_lowess(by_year, 'C2')
###Output
_____no_output_____
###Markdown
And here's what they look like by cohort.
###Code
by_cohort = valid.groupby('cohort')['y'].mean()
plot_series_lowess(by_cohort, 'C3')
###Output
_____no_output_____
###Markdown
Testing the modelNow we can run logistic regression with year and cohort as explanatory variables.I consider two versions of this model, with and without quadratic terms. It seems like the model with quadratic terms does a better job of capturing the period and cohort effects.
###Code
import statsmodels.formula.api as smf
formula = ('y ~ year + cohort')
model = smf.logit(formula, data=valid).fit()
model.summary()
formula = ('y ~ year + y2 + cohort + c2')
model = smf.logit(formula, data=valid).fit()
model.summary()
formula = ('y ~ year + y2 + cohort + c2 + c3')
model = smf.logit(formula, data=valid).fit()
model.summary()
###Output
Optimization terminated successfully.
Current function value: 0.655770
Iterations 5
###Markdown
The following plot shows the data grouped by cohort and the model for someone interviewed in 2008.We can use it to confirm that the model is capturing the cohort effect.
###Code
xs = np.arange(1940, 2000)
dfp = pd.DataFrame()
dfp['cohort'] = xs
dfp['year'] = 2008
dfp['y2'] = (dfp['year']-2004)**2
dfp['c2'] = (dfp['cohort']-1970)**2
dfp['c3'] = (dfp['cohort']-1970)**3
plot_series_lowess(by_cohort, 'C3')
ys = model.predict(dfp)
plt.plot(xs, ys, color='C7', label='Model at year 2008')
plt.xlim(1938, 2002)
decorate(xlabel='Birth year',
ylabel='Fraction',
title='Mean response by year of birth',
loc='lower left')
plt.tight_layout()
plt.savefig('generation_by_cohort.png', dpi=150)
###Output
_____no_output_____
###Markdown
The following plot shows the data grouped by year along with predictions for people born in 1968 and 1993.We can use it to confirm that the model captures the period effect, and we can see the generational difference as measured by the model.
###Code
plot_series_lowess(by_year, 'C2')
xs = np.arange(1998, 2020)
dfp = pd.DataFrame()
dfp['year'] = xs
dfp['cohort'] = 1968
dfp['y2'] = (dfp['year']-2004)**2
dfp['c2'] = (dfp['cohort']-1970)**2
dfp['c3'] = (dfp['cohort']-1970)**3
ys = model.predict(dfp)
plt.plot(xs, ys, color='C4', label='Born 1968')
dfp['cohort'] = 1993
ys = model.predict(dfp)
plt.plot(xs, ys, color='C6', label='Born 1993')
decorate(xlabel='Interview year',
ylabel='Fraction',
title='Mean response by year of interview',
loc='lower left')
plt.xlim(1996, 2020)
plt.savefig('generation_by_year', dpi=150)
###Output
_____no_output_____
###Markdown
Comparing generationsNow let's see how things change from one generation to the next, controlling for period effects.I'll use the model to generate predictions for two hypothetical members of this group, born in 1968 and 1993, both interviewed in 2018.Here's a DataFrame that describes these hypothetical people.
###Code
df_pred = pd.DataFrame()
df_pred['cohort'] = [1968, 1993]
df_pred['year'] = 2018
df_pred['y2'] = (df_pred['year']-2004)**2
df_pred['c2'] = (df_pred['cohort']-1970)**2
df_pred['c3'] = (df_pred['cohort']-1970)**3
df_pred
###Output
_____no_output_____
###Markdown
And here are the predictions.
###Code
model.predict(df_pred)
###Output
_____no_output_____
###Markdown
Running the modelThe following function encapsulates the steps in the previous section.
###Code
def run_model(df, varname, value, formula):
"""Runs the model and generates predictions.
df: DataFrame
varname: string variable name
value: value or list of values considered "yes"
formula: string patsy model
returns: array of predicted values based on df_pred
"""
value = np.atleast_1d(value)
valid = df.dropna(subset=[varname]).copy()
valid['y'] = (valid[varname].isin(value)).astype(int)
model = smf.logit(formula, data=valid).fit(disp=0)
res = model.predict(df_pred)
return res.values
###Output
_____no_output_____
###Markdown
Depending on `formula`, we can run the linear, quadratic, or cubic version of the model.
###Code
#formula = 'y ~ year + cohort'
#formula = 'y ~ year + y2 + cohort + c2'
formula = 'y ~ year + y2 + cohort + c2 + c3'
###Output
_____no_output_____
###Markdown
Here are the results for Christians and Nones.
###Code
run_model(christian, varname, value, formula)
run_model(none, varname, value, formula)
###Output
_____no_output_____
###Markdown
Comparing resultsThe following function runs the analysis for the two groups and returns an array with predictions for 4 hypothetical people:* Christian born in 1968* Christian born in 1993* None born in 1968* None born in 1993
###Code
def compare(df, varname, value):
christian = df[df['christian']]
none = df[df['none']]
c = run_model(christian, varname, value, formula)
n = run_model(none, varname, value, formula)
return np.hstack([c, n]) * 100
compare(df, varname, value)
###Output
_____no_output_____
###Markdown
The following function runs the same analysis 101 times, using each of the resampled datasets.It computes the 5th, 50th, and 95th percentiles in each column and returns an array with one row for each percentile and one column for each of the 4 hypothetical people.
###Code
def compare_iter(dfs, varname, value):
t = [compare(df, varname, value) for df in dfs]
a = np.array(t)
percentiles = np.percentile(a, [5, 50, 95], axis=0)
return percentiles
percentiles = compare_iter(dfs, varname, value)
percentiles
###Output
_____no_output_____
###Markdown
Plotting the resultsThe following functions visualize the results.
###Code
def plot_interval(y, interval, dy=0.16, **options):
"""Show a confidence interval.
y: vertical coordinate
interval: triple of low, med, and high
dy: height of the error bars
options: passed to plt
"""
low, mid, high = interval
plt.hlines(y+dy, low, high, alpha=0.6, **options)
plt.vlines([low, high], y+dy, y+dy/2, alpha=0.6, **options)
color_map = {'Christian':'C0',
'None':'C1'}
def plot_arrow(y, row, group):
"""Plot an arrow showing generational changes.
y: vertical coordinate
row: x1, x2 pair
group: string group name
"""
color = color_map[group]
label1 = f'{group} born 1968'
label2 = f'{group} born 1993'
x1, x2 = row
dx = x2 - x1
plt.hlines(y, x1, x2, color=color, alpha=0.3)
plt.plot(x1, y, 'o', color=color, label=label1)
style = '>' if x2 > x1 else '<'
plt.plot(x2, y, style, color=color, label=label2)
def plot_percentiles(y, percentiles, dy=0.12):
"""Plot the results from the resampled analysis.
y: vertical coordinate
percentiles: array with a row for each percentile and
column for each hypothetical person
dy: vertical offset for the first group
"""
plot_interval(y+dy, percentiles[:, 2], color='C1')
plot_arrow(y+dy, percentiles[1, 2:4], group='None')
plot_interval(y-dy, percentiles[:, 0], color='C0')
plot_arrow(y-dy, percentiles[1, 0:2], group='Christian')
def miniplot(percentiles):
"""Make a plot with just one variable"""
plot_percentiles(0, percentiles)
plt.ylim(-1.5, 1.5)
plt.gca().invert_yaxis()
miniplot(percentiles)
plt.legend()
plt.yticks([0], [varname])
decorate(xlabel='Percentage')
###Output
_____no_output_____
###Markdown
Beliefs and attitudesNow let's see what the results look like for a variety of beliefs and attitudes.The following list contains variable names, the response I've selected, and a label that summarizes the selected response. As an example, here's a complete analysis of a single variable.
###Code
varname = 'pray'
value = value_map[varname]
print(varname, value)
res = compare(df, varname, value)
print(res)
percentiles = compare_iter(dfs, varname, value)
print(percentiles)
miniplot(percentiles)
###Output
pray [1, 2, 3, 4]
[88.64014414 79.02553422 42.42939496 27.36045101]
[[87.36607046 77.5964954 40.73463178 23.39981756]
[88.60220296 79.5213753 44.20904724 26.65050877]
[89.62262558 81.53221359 48.17391681 28.93114145]]
###Markdown
For testing, we can loop through the variables and run one analysis with the unresampled data.
###Code
for varname, value, _ in variables:
print(varname)
res = compare(df, varname, value)
print(res)
###Output
spkmil
[73.5113101 76.76801946 82.09560968 78.39559141]
spkmslm
[45.49596897 36.16927047 59.53604459 53.6932326 ]
spkath
[79.22311623 78.04532283 89.69533261 89.5105957 ]
spkcom
[69.46353225 68.75260772 81.12576215 83.42409051]
spkrac
[60.3880369 49.91127651 68.47108633 58.15589464]
spkhomo
[90.06235488 90.4766765 94.4764429 95.82608918]
colmil
[59.49662759 66.43455177 66.02818803 68.87707294]
colmslm
[32.43295797 26.14526947 45.31159224 35.16720515]
colath
[64.03971692 72.31061918 75.72620643 83.06921543]
colcom
[63.78654967 68.70265262 77.15069651 79.11245972]
colrac
[45.40934808 39.49995269 48.12065303 40.06686363]
colhomo
[89.17096349 92.52793162 94.96915779 93.93005577]
libmil
[72.26322264 76.80931845 85.93400702 84.70593488]
libmslm
[48.76353068 40.22516432 64.43893098 56.1513263 ]
libath
[74.79277381 76.19890115 92.86989688 92.6605855 ]
libcom
[71.47928341 74.34902723 85.86445345 86.97307526]
librac
[61.67809039 51.13471775 76.10257378 60.81905199]
libhomo
[81.79362286 88.88577597 94.88162589 94.56903031]
homosex
[43.2499736 26.02727507 14.60015087 8.47780471]
premarsx
[22.26112763 15.61701899 4.74451967 2.38574064]
xmarsex
[83.39192428 82.43078978 54.62779781 58.98866977]
teensex
[68.41449398 49.65675108 41.25854856 27.52858109]
sexeduc
[91.70737342 94.98287385 96.2774079 98.25713522]
abany
[43.41944174 41.3995639 71.55856513 66.42643421]
abdefect
[73.02303301 66.27051086 91.30318802 81.33778165]
abnomore
[44.31492364 39.77807224 72.00750498 66.08859556]
abhlth
[87.46737603 86.47912608 94.85623463 93.26452543]
abpoor
[41.62665525 41.84029891 67.75769386 66.49518121]
abrape
[74.89404181 77.31786536 90.46982819 90.56697658]
absingle
[38.84668806 34.19344447 66.37167504 60.53935644]
cappun
[33.55463848 44.48520016 38.21351605 42.42618274]
gunlaw
[71.20383307 66.33977015 69.18452013 69.66204428]
grass
[61.68805191 71.01634253 82.78437414 84.89693807]
divlaw
[43.00281337 46.85598636 51.33716417 59.40353458]
prayer
[33.74359486 52.04324277 67.26735582 76.1386634 ]
letdie1
[69.6644715 74.86999269 88.79556461 86.20789208]
racopen
[80.26737897 81.45531965 79.09489328 80.21025033]
pornlaw
[64.52978902 76.56828694 85.97367299 88.22481399]
affrmact
[21.05093081 26.50331869 30.32874825 34.98033455]
natroad
[56.45389489 35.53284828 65.23946114 39.69656434]
natsoc
[67.37905989 44.79243358 64.64223973 45.29214691]
natmass
[35.22213294 28.7643498 48.52230686 36.32573053]
natpark
[34.17935857 38.2896629 39.46412034 45.04948144]
natchld
[59.37417463 62.79809711 67.03702883 65.23955519]
natsci
[39.80269003 36.90575408 51.24566019 50.13529848]
natenrgy
[52.86066667 53.78547176 65.54730071 65.26712737]
natspac
[19.89561017 19.58949264 32.77135355 37.94810993]
natenvir
[57.20770297 70.91138832 72.45425865 81.37103326]
natheal
[64.75055397 63.13098615 72.13970354 64.46942054]
natcity
[55.53992497 54.05057976 51.59353382 55.07975863]
natcrime
[73.94955158 67.76348809 57.31752779 54.03097305]
natdrug
[68.31794005 67.26020567 70.19711738 65.31999371]
nateduc
[73.99285499 72.16437619 84.32265258 81.23015693]
natrace
[49.89317856 56.11094299 62.69301732 67.82830273]
natarms
[39.83380456 28.84803579 25.28695122 17.01997032]
nataid
[ 9.82466168 21.19327073 10.08446096 22.11511565]
natfare
[19.93343815 21.64751293 25.48356417 27.32928223]
confinan
[12.66682956 24.24425984 6.07534072 17.24309586]
conbus
[19.64942601 24.8910571 14.9580066 17.64472432]
conclerg
[22.85102756 29.62101982 4.2986863 8.27954864]
coneduc
[21.84336117 35.87421951 16.83152194 28.55130699]
confed
[10.45491829 13.93303892 9.40292797 9.52391555]
conlabor
[11.18874318 20.61281316 10.28513127 22.33094665]
conpress
[ 8.81439129 9.70186571 12.6718666 11.3784002 ]
conmedic
[32.47554257 49.23012708 30.20770207 44.76007786]
contv
[ 9.85957672 13.9771673 3.98226901 5.73194279]
conjudge
[26.3352463 27.96629237 22.38261828 22.1787875 ]
consci
[37.83470996 44.01941385 52.78142307 61.74872794]
conlegis
[3.90103339 9.54029367 1.44883188 5.36324345]
conarmy
[59.34348941 60.82389205 49.22218759 44.3586916 ]
god
[70.05585786 54.54176727 23.1488361 13.84997176]
reborn
[57.94039213 47.78098959 19.08488879 13.20532487]
savesoul
[56.70785044 51.78148764 17.91740306 18.08368455]
bible
[40.7100707 32.53023435 9.78488653 7.88023483]
postlife
[86.60628194 91.07626079 55.77191022 62.86114651]
relpersn
[66.62937988 55.68363263 11.59321878 7.28325125]
sprtprsn
[76.74946413 54.58924942 48.13054167 34.58637659]
relexp
[62.6117107 56.19613869 17.96450418 21.2967375 ]
relactiv
[11.01806423 7.55599771 3.8962701 0.92942076]
pray
[88.64014414 79.02553422 42.42939496 27.36045101]
attend
[28.08516892 20.12430165 3.44289599 3.81734432]
helpful
[47.47359947 34.36248075 49.7537416 34.16047289]
fair
[53.61717119 34.47184959 55.82503034 41.38746425]
trust
[29.26894445 18.43385666 40.44124049 23.60146494]
fear
[67.25857509 61.96407852 72.89262483 63.31583672]
spanking
[72.47779753 74.91842981 57.12181834 65.55452896]
fepol
[13.88866324 18.70822545 8.55305731 11.82774957]
fejobaff
[37.32595866 41.64868146 32.2215146 44.18340857]
fehire
[73.35485936 68.97477718 74.32198819 81.83595582]
fechld
[77.03561326 78.57964449 78.92801115 82.26175663]
fepresch
[25.75030164 18.79883107 23.52021269 14.83169073]
fefam
[23.6351855 24.05805615 13.37111586 12.41761707]
discaffm
[47.60762486 44.9277582 39.21370124 44.4863117 ]
discaffw
[73.30302634 64.9975628 73.78844031 73.7674552 ]
meovrwrk
[50.94775015 41.15332679 51.50807705 37.09845044]
###Markdown
And here's how we run the complete analysis for all variables.It takes a few minutes.
###Code
def generate_results(variables, results=None):
if results is None:
results = {}
for varname, value, _ in variables:
if varname in results:
continue
print(varname)
percentiles = compare_iter(dfs, varname, value)
results[varname] = percentiles
return results
# uncomment to clear saved results
# results = None
results = generate_results(variables, results)
###Output
sprtprsn
relexp
relactiv
pray
attend
###Markdown
The following function generates a plot for a collection of variable names.
###Code
def multiplot(results, varnames, **options):
"""Make a plot showing results for several variables.
results: map from varname to array of percentiles
varnames: list of string varnames
loc: string location for the legend.
"""
plt.figure(figsize=(8, 4.5))
for i, varname in enumerate(varnames):
percentiles = results[varname]
plot_percentiles(i, percentiles)
# label the y axis
labels = [label_map[varname] for varname in varnames]
plt.yticks(range(len(varnames)), labels)
# make a legend with just the first four entries
ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
loc = options.pop('loc', 'best')
ax.legend(handles[:4], labels[:4], loc=loc)
# flip the axes so the results go from top to bottom
plt.gca().invert_yaxis()
underride(options, xlabel='Percentage', legend=False)
decorate(**options)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Free speechThere are always some people whose ideas are considered bad or dangerous by other people. * For instance, somebody who is against all churches and religion . . .If such a person wanted to make a speech in your (city/town/community) against churches and religion, should he be allowed to speak, or not?* Or consider a person who believes that Blacks are genetically inferior...If such a person wanted to make a speech in your community claiming that Blacks are inferior, should he be allowed to speak, or not?* Now, I should like to ask you some questions about a man who admits he is a Communist.Suppose this admitted Communist wanted to make a speech in your community. Should he be allowed to speak, or not?* Consider a person who advocates doing away with elections and letting the military run the country.If such a person wanted to make a speech in your community, should he be allowed to speak, or not?* And what about a man who admits that he is a homosexual?Suppose this admitted homosexual wanted to make a speech in your community. Should he be allowed to speak, or not?* Now consider a Muslim clergyman who preaches hatred of the United States.If such a person wanted to make a speech in your community preaching hatred of the United States, should he be allowed to speak, or not?
###Code
varnames = ['spkmslm', 'spkrac', 'spkcom', 'spkmil', 'spkath', 'spkhomo']
multiplot(results, varnames, title='Allow to speak')
###Output
_____no_output_____
###Markdown
There are always some people whose ideas are considered bad or dangerous by other people. * For instance, somebody who is against all churches and religion . . .Should such a person be allowed to teach in a college or university, or not?* Now consider a Muslim clergyman who preaches hatred of the United States.Should such a person be allowed to teach in a college or university, or not?* Or consider a person who believes that Blacks are genetically inferior....Should such a person be allowed to teach in a college or university, or not?* Now, I should like to ask you some questions about a man who admits he is a Communist.Suppose he is teaching in a college. Should he be fired, or not?* Consider a person who advocates doing away with elections and letting the military run the country.Should such a person be allowed to teach in a college or university, or not?* And what about a man who admits that he is a homosexual?Should such a person be allowed to teach in a college or university, or not?
###Code
varnames = ['colmslm', 'colrac', 'colcom', 'colmil', 'colath', 'colhomo']
multiplot(results, varnames, title='Allow to teach at college or university');
###Output
_____no_output_____
###Markdown
There are always some people whose ideas are considered bad or dangerous by other people. * libath: For instance, somebody who is against all churches and religion . . .If some people in your community suggested that a book he wrote against churches and religion should be taken out of your public library, would you favor removing this book, or not?* librac: Or consider a person who believes that Blacks are genetically inferior.If some people in your community suggested that a book he wrote which said Blacks are inferior should be taken out of your public library, would you favor removing this book, or not?* libcom: Now, I should like to ask you some questions about a man who admits he is a Communist.Suppose he wrote a book which is in your public library. Somebody in your community suggests that the book should be removed from the library. Would you favor removing it, or not?* libmil: Consider a person who advocates doing away with elections and letting the military run the country.Suppose he wrote a book advocating doing away with elections and letting the military run the country. Somebody in your community suggests that the book be removed from the public library. Would you favor removing it, or not?* libhomo: And what about a man who admits that he is a homosexual?If some people in your community suggested that a book he wrote in favor of homosexuality should be taken out of your public library, would you favor removing this book, or not?* libmslm: Now consider a Muslim clergyman who preaches hatred of the United States.If some people in your community suggested that a book he wrote which preaches hatred of the United States should be taken out of your public library, would you favor removing this book, or not?
###Code
varnames = ['libmslm', 'librac', 'libcom', 'libmil', 'libath', 'libhomo']
multiplot(results, varnames, title='Keep book in library written by');
###Output
_____no_output_____
###Markdown
Confidence in institutions"I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?"
###Code
varnames = ['conbus', 'coneduc', 'conjudge', 'conmedic', 'consci', 'conarmy']
multiplot(results, varnames, loc='upper right', title='Great deal of confidence in');
varnames = ['contv', 'conpress', 'conlegis', 'confed', 'conclerg', 'confinan', 'conlabor']
multiplot(results, varnames, loc='upper right', title='Great deal of confidence in');
###Output
_____no_output_____
###Markdown
Allocation of resources"We are faced with many problems in this country, none of which can be solved easily or inexpensively. I'm going to name some of these problems, and for each one I'd like you to tell me whether you think we're spending too much money on it, too little money, or about the right amount."
###Code
varnames = ['nataid', 'natfare', 'natpark', 'natrace', 'natchld', 'natenvir']
multiplot(results, varnames, title='We spend too little money on')
plt.savefig('generation5.png', dpi=150)
varnames = ['natarms', 'natmass', 'natsci', 'natroad', 'natsoc', 'natcrime']
multiplot(results, varnames, loc='upper right', title='We spend too little money on')
plt.savefig('generation6.png', dpi=150)
varnames = ['natspac', 'natcity', 'natenrgy', 'natdrug', 'natheal', 'nateduc']
multiplot(results, varnames, loc='upper right', title='We spend too little money on')
plt.savefig('generation7.png', dpi=150)
###Output
_____no_output_____
###Markdown
Issues related to sex
###Code
varnames = ['premarsx', 'homosex', 'teensex', 'xmarsex', 'sexeduc']
multiplot(results, varnames, loc='upper right')
plt.savefig('generation8.png', dpi=150)
###Output
_____no_output_____
###Markdown
Outlook
###Code
varnames = ['trust', 'helpful', 'fair', 'fear']
multiplot(results, varnames)
###Output
_____no_output_____
###Markdown
Public policy
###Code
varnames = ['affrmact', 'cappun', 'divlaw', 'prayer']
multiplot(results, varnames, loc='upper right', title='Law and public policy')
plt.savefig('generation3.png', dpi=150)
varnames = ['grass', 'pornlaw', 'gunlaw', 'letdie1', 'racopen']
multiplot(results, varnames, title='Law and public policy')
plt.savefig('generation2.png', dpi=150)
###Output
_____no_output_____
###Markdown
Abortion"Please tell me whether or not you think it should be possible for a pregnant woman to obtain a legal abortion if ...
###Code
varnames = ['abany', 'abnomore', 'absingle', 'abpoor', 'abdefect', 'abrape', 'abhlth']
multiplot(results, varnames, title='Abortion should be legal if', loc='lower left')
plt.savefig('generation4.png', dpi=150)
###Output
_____no_output_____
###Markdown
Religion
###Code
varnames = ['bible', 'reborn', 'savesoul', 'god', 'postlife']
multiplot(results, varnames,
loc='upper right',
title='Religious beliefs',
xlim=[0, 110])
plt.savefig('generation1.png', dpi=150)
varnames = ['relactiv', 'attend', 'relpersn', 'sprtprsn', 'relexp', 'pray']
multiplot(results, varnames,
loc='upper right',
title='Religious beliefs',
#xlim=[0, 110]
)
plt.savefig('generation1b.png', dpi=150)
###Output
_____no_output_____
###Markdown
Gender roles and work"Tell me if you agree or disagree with this statement: Most men are better suited emotionally for politics than are most women." Select "agree". (For these items, the variables fepol, fejobaff, fehire, fechld, fepresch, fefam, discaffm, discaffw, and meovrwrk are recoded so that responses 0, 8, and 9 become NaN, i.e. are treated as missing.)
###Code
varnames = ['fepol', 'fefam', 'fepresch', 'fejobaff', 'discaffm', 'meovrwrk']
multiplot(results, varnames, title='Agree or strongly agree');
varnames = ['discaffw', 'fehire', 'fechld']
multiplot(results, varnames, title='Agree or strongly agree', loc='lower left')
plt.xlim(60,85);
###Output
_____no_output_____
###Markdown
Misc"Do you strongly agree, agree, disagree, or strongly disagree that it is sometimes necessary to discipline a child with a good, hard spanking?""Tell me if you agree or disagree with this statement: Most men are better suited emotionally for politics than are most women." Select "agree"
###Code
varnames = ['fepol', 'spanking']
multiplot(results, varnames, title='Agree or strongly agree');
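# Collect every variable whose name starts with 'con' (the confidence-in-institutions items)
# so the full list can be inspected below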
t = []
for varname, _, _ in variables:
if varname.startswith('con'):
t.append(varname)
t
###Output
_____no_output_____ |
Hybrid_jobs_for_community_detection.ipynb | ###Markdown
Community Detection with Amazon Braket Hybrid JobsIn this tutorial, we use Amazon Braket Hybrid Jobs to run D-Wave QBSolv for a Community Detection problem.For more background on community detection using quantum annealing, please refer to the introduction notebook ([Notebook_QBSolv_community_detection.ipynb](https://github.com/aws-samples/amazon-braket-community-detection/blob/main/Notebook_QBSolv_community_detection.ipynb)). Table of contents* [Set Up Environment](Set_Up_Environment)* [Prepare Input Data](inputdata) * [Built-in Graph](Built-in_Graph) * [Graph from a Local Data File](local_graph)* [Create Algorithm Script](algorithm_script)* [Specify Hyperparameters](hyperparameters)* [Submit a Braket Hybrid Job](submit_braket_job)* [View Results](view_results)* [Run Hyperparameter Tuning](hpo) Set Up Environment
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import os
import time
import ast
import json
import collections
import networkx as nx
from braket.jobs.config import InstanceConfig
from braket.jobs.local.local_job import LocalQuantumJob
from braket.aws import AwsQuantumJob
from src.graph_community import CommunityGraphFactory, draw_graph_community
###Output
_____no_output_____
###Markdown
Prepare Input Data To prepare the input graph data, the first step is to run the '**Download Graph Data**' part in the [Notebook_QBSolv_community_detection.ipynb](https://github.com/aws-samples/amazon-braket-community-detection/blob/main/Notebook_QBSolv_community_detection.ipynb) that downloads real-world graphs from http://networkrepository.com and cleanses raw graph data files. Then we call `CommunityGraphFactory` to read and visualize a graph, and save it as a NetworkX graph.
###Code
# call CommunityGraphFactory for graph data preparation
cgf = CommunityGraphFactory(seed=1)
###Output
_____no_output_____
###Markdown
In order to load a graph from local data files via a name, we create a dictionary `graph_file_dict` with graph names, their local file path, and the delimiter used in the file. `graph_file_dict` is required to have this format: {graph_name: [data_path, delimiter]}
###Code
graph_file_dict = {"Jazz": ["./data/Jazz/arenas-jazz.edges", ","],
"Dolphins": ["./data/Dolphins/dolphins.mtx", None],
"LesMiserables": ["./data/LesMiserables/lesmis.mtx",None],
"Elegans": ["./data/Elegans/celegans_metabolic.mtx", None],
"Emailuniv": ["./data/Emailuniv/ia-email-univ.mtx", None],
"Cora": ["./data/Cora/cora.edges", ","]}
###Output
_____no_output_____
###Markdown
Built-in Graph This code example shows how to create Zachary's karate club graph using NetworkX's built-in graph function and save it to a local file in NetworkX weighted edge-list format as input data for Braket jobs.
###Code
# Create a input graph folder to store files
if not os.path.exists("data/input_graph"):
os.makedirs("data/input_graph")
# using networkx graph
graph_name = "Zachary"
graph_zachary = nx.karate_club_graph()
# save the graph to a local file using NetworkX
nx.write_weighted_edgelist(graph_zachary, f"data/input_graph/{graph_name}.weighted.edgelist")
# draw a graph
draw_graph_community(graph_zachary, [list(range(graph_zachary.number_of_nodes()))], color_map = 'Dark2')
plt.show()
###Output
_____no_output_____
###Markdown
Graph from a Local Data File This code example shows how to create a graph by loading a local data file listed in `graph_file_dict`, and save it to another local file in NetworkX weighted edge-list format as input data for Braket jobs.
###Code
# load a graph from local files defined in 'graph_file_dict'
graph_name = "Dolphins"
graph_data = cgf.load_graph(graph_name, graph_file_dict)
# save the graph to a local file using NetworkX
nx.write_weighted_edgelist(graph_data, f"data/input_graph/{graph_name}.weighted.edgelist")
# draw a graph
draw_graph_community(graph_data, [list(range(graph_data.number_of_nodes()))], color_map = 'Dark2')
plt.show()
###Output
Name:
Type: Graph
Number of nodes: 62
Number of edges: 159
Average degree: 5.1290
###Markdown
Create Algorithm Script The algorithm script we are going to use for solving the Community Detection problem using QBSolv can be found [here](src/hybrid_job_community_detection.py) (`src/hybrid_job_community_detection.py`). Specify Hyperparameters The hyperparameters can be passed to our algorithm script (`src/hybrid_job_community_detection.py`) when you create your job, through the keyword argument hyperparameters. It usually includes all the algorithm settings you might want to adjust between runs to tailor your algorithm to the problem.
###Code
# specify an input graph file and its local path
graph_name = "Zachary"
input_graph_file_name = "Zachary.weighted.edgelist"
input_graph_path = os.path.join("data/input_graph", input_graph_file_name)
# specify hyperparameters for the algorithm script
hyperparams = {
"input_graph_file": input_graph_file_name, # str, the file name of the input graph
"num_community": 4, # int, the number of communities to detect
"solver_mode": "hybrid", # str, must be either 'classical' or 'hybrid'. Determines whether the classical or hybrid solver is called
"solver_limit": 100, # int, the maximum number of variables (n) for sub-QUBOs
"num_repeats": 2, # int, the maximum iterations to repeat QBSolv solver execution to discover a new best solution
"num_reads": 1000, # int, how many times the annealing is performed
"seed": 1, # int, random seed
"alpha": 5, # int, the penalty coefficient to enforce assigning only one community to each node
}
# JSON encode hyperparameters as required by Amazon Braket jobs
hyperparams = {str(k): json.dumps(v) for (k, v) in hyperparams.items()}
print(f"JSON encoded hyperparameters:\n{hyperparams}")
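# For reference, inside the job container the entry script reads these values back from the
# locations Braket Jobs provides. A minimal sketch, assuming the standard Braket Jobs container
# environment variables (see src/hybrid_job_community_detection.py for the actual implementation):
#
#   import json, os
#   with open(os.environ["AMZN_BRAKET_HP_FILE"]) as f:
#       hyperparams = json.load(f)                      # values arrive as JSON-encoded strings
#   input_graph_file = json.loads(hyperparams["input_graph_file"])
#   graph_path = os.path.join(os.environ["AMZN_BRAKET_INPUT_DIR"], "input-graph", input_graph_file)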
###Output
JSON encoded hyperparameters:
{'input_graph_file': '"Zachary.weighted.edgelist"', 'num_community': '4', 'solver_mode': '"hybrid"', 'solver_limit': '100', 'num_repeats': '2', 'num_reads': '1000', 'seed': '1', 'alpha': '5'}
###Markdown
Submit a Braket Job We have now finished preparing the input data, algorithm script, hyperparameters and other configurations. It's time to submit our Braket Job!Similar to this [Braket example notebook](https://github.com/aws/amazon-braket-examples/tree/main/examples/hybrid_jobs/2_Using_PennyLane_with_Braket_Jobs), we specify the following arguments to create our job: - device: The arn of the Braket simulator or QPU we want to use. It will be stored as an environment variable for the algorithm script.- instance_config: The configuration of classical resources such as instance type and data storage volume to run the algorithm script.- source_module: The path to a file or a python module that contains your algorithm script. It will be uploaded to the container for Braket Job execution.- job_name: A unique string to identify the job. It appears in the Braket Job console and in the job arn.- entry_point: The path relative to the source_module. It points to the piece of code to be executed when the Braket Job starts.- hyperparameters: The Python dictionary containing the hyperparameter names and values (as strings).- input_data: A dictionary that maps channel names to either a file location in the local environment or a path to S3. We can also specify only a file location, in which case the channel name is treated as "input".- wait_until_complete: If True, the function call will wait until the Braket Job is completed, and will additionally print logs to the local console. Otherwise, it will run asynchronously. Defaults to False.**QPU cost warning**: the QBSolv hybrid solver incurs additional cost for using the D-Wave QPU. Please evaluate the potential cost before executing a QBSolv hybrid job.
###Code
# specify device that the job will primarily be targeting
device_arn = 'arn:aws:braket:::device/qpu/d-wave/Advantage_system4' # D-Wave QPU Device ARN
# submit a Braket job
aws_job = AwsQuantumJob.create(
device=device_arn,
instance_config=InstanceConfig(instanceType="ml.m5.xlarge"),
source_module="src",
job_name="Job-"+graph_name+"-" + str(int(time.time())),
entry_point="src.hybrid_job_community_detection",
hyperparameters=hyperparams,
input_data={"input-graph": input_graph_path},
wait_until_complete = False
)
###Output
_____no_output_____
###Markdown
The status of a Braket Job can be checked by calling `job.state()`. The state will be one of "QUEUED", "RUNNING", "FAILED", "COMPLETED", "CANCELLING", or "CANCELLED"
###Code
print(aws_job.state())
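# To block until the job reaches a terminal state, a simple polling loop can be used
# (a minimal sketch; adjust the sleep interval to taste):
import time
terminal_states = {"COMPLETED", "FAILED", "CANCELLED"}
while aws_job.state() not in terminal_states:
    time.sleep(30)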
###Output
COMPLETED
###Markdown
View Results After the job is completed, we can view the results saved by the algorithm script.Results may vary between different job runs, especially for QBSolv hybrid solver jobs. You may want to increase the `num_repeats` value to get better results.
###Code
# view job results
print(aws_job.result())
# retrieve results for community assignments
community_results = ast.literal_eval(aws_job.result()['community_results'])
print(community_results)
# read the input graph file for plotting
#nx_G = nx.read_weighted_edgelist(input_graph_path)
# or use built-in graph
nx_G = nx.karate_club_graph()
# We draw a graph colored with communities
draw_graph_community(nx_G, list(community_results['comm']), color_map = 'Dark2')
plt.show()
###Output
_____no_output_____
###Markdown
Run Hyperparameter Tuning Similar to this [Braket example notebook on hyperparameter tuning](https://github.com/aws/amazon-braket-examples/tree/main/examples/hybrid_jobs/1_Hyperparameter_tuning), we use Braket jobs to submit and monitor many jobs simultaneously that have different hyperparameter settings. Here, we show an example to find the optimal number of communities for a given graph by scanning different numbers of communities to detect and comparing their modularity values.
###Code
jobs = []
names = []
for num_community in range(2, 6):
print(f"Creating job with {num_community} communities")
name = f"hyper-param-job-{graph_name}-{num_community}-" + str(int(time.time()))
hyperparams = {
"input_graph_file": input_graph_file_name,
"num_community": num_community,
"solver_mode": "hybrid",
"solver_limit": 100,
"num_repeats": 2,
"num_reads": 1000,
"seed": 1,
"alpha": 5,
}
# JSON encode hyperparameters as required by Amazon Braket jobs
hyperparams = {str(k): json.dumps(v) for (k, v) in hyperparams.items()}
# submit a Braket job
tmp_job = AwsQuantumJob.create(
device=device_arn,
instance_config=InstanceConfig(instanceType="ml.m5.xlarge"),
source_module="src",
job_name=name,
entry_point="src.hybrid_job_community_detection",
hyperparameters=hyperparams,
input_data={"input-graph": input_graph_path},
wait_until_complete = False
)
jobs.append(tmp_job)
names.append(name)
###Output
Creating job with 2 communities
Creating job with 3 communities
Creating job with 4 communities
Creating job with 5 communities
###Markdown
To monitor the job status in real time, you can access its Amazon CloudWatch Logs under the ‘Jobs’ tab in the Amazon Braket console. Because D-Wave QPUs are only available in the AWS ‘us-west-2’ Region, you will need to set the AWS Management Console Region to `us-west-2` in order to see the status of your jobs.![hp_job_console.png](attachment:hp_job_console.png)
###Code
jobs[-1].result() # check results for one of the jobs
###Output
_____no_output_____
###Markdown
We can now check the results from all the experiments once they finish.
###Code
results_dict = collections.defaultdict(list)
for job in jobs:
results_dict['Number of communities'].append(int(ast.literal_eval(job.result()['hyperparams'])['num_community']))
results_dict['Modularity'].append(ast.literal_eval(job.result()['community_results'])['modularity'])
df = pd.DataFrame.from_dict(results_dict)
df
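# The number of communities with the highest modularity can then be read off directly, e.g.
# df.loc[df['Modularity'].idxmax()]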
###Output
_____no_output_____ |
exploration/explore_data.ipynb | ###Markdown
Introduction Traditional job sites can be overwhelming. Additionally, Lambda School graduates can save time and anxiety by focusing on companies who understand the unique value Lambda graduates bring as employees. Therefore we are developing a website where Lambda School students and alumni can post company and interview experiences and find helpful posts that others have made. For its first user feature, the Data Science team is developing automated content moderation. As the website scales, manual content moderation may not be a pragmatic way of enforcing content rules. Furthermore, inappropriate content undermines the site's core mission of saving Lambda students time and helping ease their anxieties. Therefore we seek to find a model that can automatically and accurately classify posts that should be flagged or removed.It is worth noting that many tweets in the data used within this notebook contain hateful or obscene content. Those who may be traumatized by such content may want to avoid reading this notebook. Modeling Plan At the risk of tautology, data science requires data. So we first assess the data we have available and what we can get. We face a classic problem right now in building out features for a fledgling site: because we don't have users, we don't have actual user data; however, we'll never generate enough users to have sufficient data if they're encountering abusive and hateful content. For this reason, we concluded starting with a model trained on external data was a possible route.We were able to find several labeled data sets that flagged hateful, abusive, or spam content (or some combination of the three). Our initial strategy will be to use these to train, validate, and test models. It is worth noting that all three data sets were composed of Tweets. Our hope is that patterns of abusive text are similar enough across websites that the model learning on this will transfer.We will load and explore the data, then fit sequentially more complex models in order to maximize predictive performance. Load Data
###Code
import pandas as pd
import numpy as np
import spacy
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt
import os
os.getcwd()
###Output
_____no_output_____
###Markdown
Data Set One: "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior" The first data set is a set of 100,000 tweets labeled for use in the paper "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior." The text of these tweets has been provided for research use courtesy of the author Antigoni-Marie Founta. https://github.com/ENCASEH2020/hatespeech-twitter@inproceedings{founta2018large, title={Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior}, author={Founta, Antigoni-Maria and Djouvas, Constantinos and Chatzakou, Despoina and Leontiadis, Ilias and Blackburn, Jeremy and Stringhini, Gianluca and Vakali, Athena and Sirivianos, Michael and Kourtellis, Nicolas}, booktitle={11th International Conference on Web and Social Media, ICWSM 2018}, year={2018}, organization={AAAI Press}}
###Code
hundred_k_tweets = pd.read_csv("data\\hatespeech_text_label_vote.csv", sep='\t', header=None, names=["tweet", "category", "votes"])
hundred_k_tweets.head()
hundred_k_tweets.shape
hundred_k_tweets['category'].value_counts()
###Output
_____no_output_____
###Markdown
For our purposes, content that is abusive, spam, or hateful should be identified by the model. The "votes" value is the number of people who labeled the content with its majority label. This is a feature we may want to incorporate in the model down the line (examples where all five humans agreed that an item is appropriate should plausibly be weighted differently in training the model than examples where they were split). For the purposes of building a baseline model, I'll simplify data sets down to a binary target and the body of text. A note: emojis are characterised by string patterns such as "&NNNNNN", where each N is a number. As advanced models are capable of learning these patterns, I leave them as is for the time being; to a human reader they simply appear as those raw character codes.
###Code
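# Collapse the spam/abusive/hateful categories into a single binary 'inappropriate' target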
hundred_k_tweets["inappropriate"] = (hundred_k_tweets["category"].isin(["spam", "abusive", "hateful"]))
hundred_k_tweets = hundred_k_tweets.drop(["category", "votes"], axis=1)
hundred_k_tweets.head()
hundred_k_tweets["inappropriate"].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Data Set Two: Automated Hate Speech Detection and the Problem of Offensive Language
###Code
additional_tweets = pd.read_csv("data\\labeled_data.csv")
additional_tweets.head()
additional_tweets = additional_tweets.drop(["Unnamed: 0"], axis=1)
additional_tweets.shape
###Output
_____no_output_____
###Markdown
This data set is composed of 24,783 tweets that have been manually labeled by CrowdFlower users. "Count" is the number of users who voted; "hate_speech", "offensive_language", and "neither" are the various categories that can be voted for. "Class" is the majority label.https://github.com/t-davidson/hate-speech-and-offensive-language/tree/master/data
###Code
additional_tweets["class"].value_counts()
###Output
_____no_output_____
###Markdown
"1" represents offensive language, "2" represents neither offensive language nor hate speech, and "0" represents hate speech. It's worth noting that inappropriate tweets are far more common in both data sets than they are in the real world. This is something to be cognizant of when training the model: weakly explanatory models will tend to default to baseline predictions, which in this case will result in many more flagged posts than is desirable.
###Code
additional_tweets.describe()
additional_tweets.sort_values(by="offensive_language", ascending=False).head(15).T
additional_tweets.sort_values(by="hate_speech", ascending=False).head(15).T
additional_tweets.sort_values(by="hate_speech", ascending=True).head(15).T
additional_tweets["inappropriate"] = additional_tweets["class"] != 2
additional_tweets.head()
additional_tweets = additional_tweets.drop(["count", "hate_speech", "offensive_language", "neither", "class"], axis=1)
additional_tweets.head()
additional_tweets["inappropriate"].value_counts()
###Output
_____no_output_____
###Markdown
Data Set 3: Kaggle Tweets - https://www.kaggle.com/vkrahul/twitter-hate-speech (train_E6oV3lV.csv)
###Code
kaggle_tweets = pd.read_csv("data/train_E6oV3lV.csv")
kaggle_tweets.head()
kaggle_tweets["inappropriate"] = kaggle_tweets["label"]
kaggle_tweets = kaggle_tweets.drop(["id", "label"], axis=1)
kaggle_tweets["inappropriate"].value_counts()
kaggle_tweets.head()
###Output
_____no_output_____
###Markdown
Lastly, we add in a data set of Tweets sourced from Kaggle. This data set has many more appropriate than inappropriate items, which helps counterbalance the previous data set.
###Code
dfs = [hundred_k_tweets, additional_tweets, kaggle_tweets]
df = pd.concat(dfs, ignore_index=True)
df.shape
###Output
_____no_output_____
###Markdown
Some of the data is duplicated. Some duplicate tweets have identical labels -- in these cases, we simply drop one. Other duplicate tweets have contradictory labels; we treat those tweets as inappropriate, so we drop the copies labeled appropriate.
###Code
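# Drop exact (tweet, label) duplicates first; then, for tweets that remain with conflicting labels,
# keep only the inappropriate copy by removing the rows labeled appropriate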
df = df.drop_duplicates(subset=['tweet', 'inappropriate'])
appropriate = ~df['inappropriate']
dupe_tweet = df.duplicated(subset=['tweet'], keep=False)
df = df[~((dupe_tweet) & (appropriate))].copy()
df.head()
df.duplicated(subset=['tweet']).any()
# df.to_csv('combined_deduped.csv', index=False)
###Output
_____no_output_____ |
Clases/Part 6 - Reinforcement Learning/32__Upper_Confidence_bound.ipynb | ###Markdown
The UCB (Upper Confidence Bound) algorithm
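In each round $n$, the code below selects the ad $i$ with the largest upper confidence bound $\bar{r}_i + \sqrt{\tfrac{3\ln(n+1)}{2N_i}}$, where $\bar{r}_i$ is the average reward observed for ad $i$ so far and $N_i$ is the number of times ad $i$ has been selected; ads that have never been selected are given an effectively infinite bound so that each ad is tried at least once.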
###Code
import pandas as pd
dataset = pd.read_csv('./Section 32 - Upper Confidence Bound (UCB)/Ads_CTR_Optimisation.csv')
import math
N = 10000 # number of users (rounds)
d = 10 # number of ads
number_of_selections = [0] * d
sums_of_rewards = [0] * d
ads_selected = []
total_reward = 0
for n in range(0, N):
max_upper_bound = 0
ad = 0
for i in range(0, d):
if(number_of_selections[i]>0):
average_reward = sums_of_rewards[i] / number_of_selections[i]
delta_i = math.sqrt(3/2*math.log(n+1)/number_of_selections[i])
upper_bound = average_reward + delta_i
else:
upper_bound = 1e400
if upper_bound > max_upper_bound:
max_upper_bound = upper_bound
ad = i
ads_selected.append(ad)
number_of_selections[ad] = number_of_selections[ad] + 1
reward = dataset.values[n, ad]
sums_of_rewards[ad] = sums_of_rewards[ad] + reward
total_reward = total_reward + reward
# ads_selected = list(map(lambda x: x+1, ads_selected))
total_reward
###Output
_____no_output_____
###Markdown
Histogram of results
###Code
import matplotlib.pyplot as plt
plt.hist(ads_selected)  # histogram of which ad was selected in each round
plt.title('Histogram of ad selections with UCB')
plt.xlabel("Ad ID")
plt.ylabel("Number of times the ad was selected")
plt.show()
###Output
_____no_output_____ |
notebooks/demo/Get_pheno_geno.ipynb | ###Markdown
Demoing how to get phenotype and genotype data
###Code
genetic_file = load_genetic_file(21)
###Output
Sample IDs are read from /lab/corradin_biobank/Raw_UKB_downloads/sample_files/ukb45624_imp_chr21_v3_s487275.sample.
###Markdown
Getting multiple ICD codes
###Code
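# get_phenotype returns a (sample index, phenotype dataframe) pair; [1] selects the dataframe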
get_phenotype(["I84", "R07"])[1]
###Output
_____no_output_____
###Markdown
Getting multiple ICD codes arranged by samples in the genetic fileHelpful when you want to subset samples and want to ensure the phenotype df follows the exact same order as the genotype
###Code
genetic_file.samples.values
sample_index, pheno_df_ordered = get_phenotype(["I84", "R07"], samples = genetic_file.samples)
pheno_df_ordered
assert sample_index.shape[0] == pheno_df_ordered.shape[0]
###Output
_____no_output_____
###Markdown
--- Get SNPs for a phenotype Get the SNPs that were used for a GWAS of the ICD10 code. You can output the entire dataframe or just the array. To find what `phenotype_code` to provide as a function argument, refer to the column of the same name [here](https://docs.google.com/spreadsheets/d/1kvPoupSzsSFBNSztMzl04xMoSC3Kcx3CrjVf4yBmESU/edit?ts=5b5f17dbgid=178908679). For ICD-code phenotypes, the values are the same, but this might not be true for your phenotype of interest.
###Code
get_GWAS_snps_for_trait("I84")
###Output
_____no_output_____
###Markdown
The SNPs are sorted by pvalue by default. This is so that you can select **the lowest pvalue SNPs** or **the highest beta SNPs** but you can sort them by position by changing the `sort_val_cols_list` argument.
###Code
GWAS_snps_df = get_GWAS_snps_for_trait("I84", id_only=False, sort_val_cols_list= ['pval'], ascending_bool_list= [True])
GWAS_snps_df
###Output
_____no_output_____
###Markdown
Here we demonstrate these steps: 1) keep the sort by `pval` or beta2) return the dataframe 3) subset the number of SNPs you want (SNPs that have pvalue < 1e-5)4) and then sort again by position (to put into a CNN for example)
###Code
subset_snps = GWAS_snps_df.query("pval < 1e-5").sort_values("position")["full_id"].values
subset_snps
subset_snps.shape
###Output
_____no_output_____
###Markdown
Now we get the indices of the chosen SNPs
###Code
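# np.searchsorted needs a sorted array, so sort the SNP ids once (argsort) and map the matched
# positions back to the original, unsorted order of the bgen file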
sorter = np.argsort(genetic_file.bgen_reader_obj.ids)
variants_index = sorter[np.searchsorted(genetic_file.bgen_reader_obj.ids, subset_snps, sorter=sorter)]
variants_index
#get genetic data of all the chosen SNPs and all samples in phenotype
probs = genetic_file.bgen_reader_obj.read((sample_index, variants_index))
probs.shape
###Output
reading -- time=0:00:00.00, thread 1 of 10, part 1 of 1
###Markdown
Turn the probabilities to one hot encoded values
###Code
ohe_genetic_info = np.identity(3)[genetic_file.get_geno_each_sample(probs,"max").astype(int)] #sometimes it has Nans so need to convert to type int
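# Indexing the 3x3 identity matrix with the integer genotype calls yields one-hot rows, e.g.
# np.identity(3)[np.array([0, 2, 1])] -> [[1,0,0], [0,0,1], [0,1,0]]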
ohe_genetic_info
###Output
_____no_output_____ |
notebooks/basics/1.train-lightgbm-local.ipynb | ###Markdown
Train with LightGBM in an Interactive Run Setup cloud tracking[Mlflow](https://github.com/mlflow/mlflow) is a great tool for local ML experimentation tracking. However, using it alone is like using git without GitHub. Your Azure Machine Learning workspace can easily be used to set up a remote tracking URI for mlflow:
###Code
import os
import mlflow
mlflow.set_tracking_uri(os.environ["AZUREML_MLFLOW_URI"])
mlflow.set_experiment("lightgbm-iris-local-example")
###Output
_____no_output_____
###Markdown
Load dataYou can read directly from public URIs into Pandas. For private Blob or ADLS data, you can use built-in Azure data protocols and pass in `storage_options` for credentials.
###Code
data_uri = "https://azuremlexamples.blob.core.windows.net/datasets/iris.csv"
import pandas as pd
df = pd.read_csv(data_uri)
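# For private data, a credentialed read would look roughly like this (a sketch: the account,
# container, and key below are placeholders, and the abfs:// protocol assumes adlfs is installed):
# df = pd.read_csv(
#     "abfs://<container>@<account>.dfs.core.windows.net/iris.csv",
#     storage_options={"account_key": "<key>"},
# )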
df.head()
###Output
_____no_output_____
###Markdown
Write functionsAfter some experimentation, you may refactor your code into a few functions for logical steps in the ML training process:
###Code
# imports
import time
import lightgbm as lgb
from sklearn.metrics import log_loss, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
# define functions
def preprocess_data(df):
X = df.drop(["species"], axis=1)
y = df["species"]
enc = LabelEncoder()
y = enc.fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
return X_train, X_test, y_train, y_test, enc
def train_model(params, num_boost_round, X_train, X_test, y_train, y_test):
t1 = time.time()
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
model = lgb.train(
params,
train_data,
num_boost_round=num_boost_round,
valid_sets=[test_data],
valid_names=["test"],
)
t2 = time.time()
return model, t2 - t1
def evaluate_model(model, X_test, y_test):
y_proba = model.predict(X_test)
y_pred = y_proba.argmax(axis=1)
loss = log_loss(y_test, y_proba)
acc = accuracy_score(y_test, y_pred)
return loss, acc
###Output
_____no_output_____
###Markdown
Run a trialNow, you can easily run local trials, editing the parameters and seeing how the model performs.
###Code
from sklearn.metrics import accuracy_score, log_loss
# preprocess data
X_train, X_test, y_train, y_test, enc = preprocess_data(df)
# set training parameters
params = {
"objective": "multiclass",
"num_class": 3,
"learning_rate": 0.1,
"metric": "multi_logloss",
"colsample_bytree": 1.0,
"subsample": 1.0,
"seed": 42,
}
num_boost_round = 32
# start run
run = mlflow.start_run()
# enable automatic logging
mlflow.lightgbm.autolog()
# train model
model, train_time = train_model(
params, num_boost_round, X_train, X_test, y_train, y_test
)
mlflow.log_metric("training_time", train_time)
# evaluate model
loss, acc = evaluate_model(model, X_test, y_test)
mlflow.log_metrics({"loss": loss, "accuracy": acc})
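# The fitted LabelEncoder can map encoded predictions back to species names if needed, e.g.
# enc.inverse_transform(model.predict(X_test).argmax(axis=1))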
# end run
mlflow.end_run()
###Output
_____no_output_____
###Markdown
Train with LightGBM in an Interactive Run Install requirements
###Code
%pip install -r requirements.txt
###Output
Requirement already satisfied: numpy in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (1.22.2)
Requirement already satisfied: scipy in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 3)) (1.5.2)
Requirement already satisfied: pandas>=1.2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (1.4.1)
Requirement already satisfied: adlfs>=2021.8.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (2022.2.0)
Requirement already satisfied: scikit-learn in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 6)) (0.24.2)
Requirement already satisfied: lightgbm>=3.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 7)) (3.3.2)
Requirement already satisfied: mlflow in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 10)) (1.23.1)
Requirement already satisfied: azureml-core in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 13)) (1.38.0)
Requirement already satisfied: azureml-mlflow in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from -r requirements.txt (line 14)) (1.38.0)
Requirement already satisfied: python-dateutil>=2.8.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from pandas>=1.2.0->-r requirements.txt (line 4)) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from pandas>=1.2.0->-r requirements.txt (line 4)) (2021.3)
Requirement already satisfied: fsspec>=2021.10.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (2022.1.0)
Requirement already satisfied: azure-storage-blob>=12.5.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (12.9.0)
Requirement already satisfied: aiohttp in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (3.8.1)
Requirement already satisfied: azure-core>=1.7.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.21.1)
Requirement already satisfied: azure-datalake-store<0.1,>=0.0.46 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (0.0.52)
Requirement already satisfied: azure-identity in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.7.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from scikit-learn->-r requirements.txt (line 6)) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from scikit-learn->-r requirements.txt (line 6)) (0.14.1)
Requirement already satisfied: wheel in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from lightgbm>=3.0.0->-r requirements.txt (line 7)) (0.35.1)
Requirement already satisfied: cloudpickle in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (2.0.0)
Requirement already satisfied: requests>=2.17.3 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (2.27.1)
Requirement already satisfied: sqlparse>=0.3.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (0.4.2)
Requirement already satisfied: importlib-metadata!=4.7.0,>=3.7.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (4.10.1)
Requirement already satisfied: docker>=4.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (5.0.3)
Requirement already satisfied: sqlalchemy in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (1.4.31)
Requirement already satisfied: prometheus-flask-exporter in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (0.18.7)
Requirement already satisfied: packaging in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (21.3)
Requirement already satisfied: Flask in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (1.0.3)
Requirement already satisfied: click>=7.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (8.0.3)
Requirement already satisfied: gitpython>=2.1.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (3.1.26)
Requirement already satisfied: databricks-cli>=0.8.7 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (0.16.2)
Requirement already satisfied: pyyaml>=5.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (6.0)
Requirement already satisfied: querystring-parser in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (1.2.4)
Requirement already satisfied: entrypoints in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (0.3)
Requirement already satisfied: alembic in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (1.7.6)
Requirement already satisfied: gunicorn; platform_system != "Windows" in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (20.1.0)
Requirement already satisfied: protobuf>=3.7.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from mlflow->-r requirements.txt (line 10)) (3.19.3)
Requirement already satisfied: paramiko<3.0.0,>=2.0.8 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (2.9.2)
Requirement already satisfied: ndg-httpsclient<=0.5.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.5.1)
Requirement already satisfied: contextlib2<22.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (21.6.0)
Requirement already satisfied: azure-mgmt-containerregistry<9.0.0,>=8.2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (8.2.0)
Requirement already satisfied: adal<=1.2.7,>=1.2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.2.7)
Requirement already satisfied: jmespath<1.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.10.0)
Requirement already satisfied: knack~=0.8.2 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.8.2)
Requirement already satisfied: azure-common<2.0.0,>=1.1.12 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.1.27)
Requirement already satisfied: cryptography!=1.9,!=2.0.*,!=2.1.*,!=2.2.*,<37.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (36.0.1)
Requirement already satisfied: urllib3<=1.26.7,>=1.23 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.26.7)
Requirement already satisfied: backports.tempfile in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.0)
Requirement already satisfied: azure-mgmt-authorization<1.0.0,>=0.40.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.61.0)
Requirement already satisfied: pyopenssl<22.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (21.0.0)
Requirement already satisfied: humanfriendly<11.0,>=4.7 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (10.0)
Requirement already satisfied: pkginfo in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.8.2)
Requirement already satisfied: msrestazure<=0.6.4,>=0.4.33 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.6.4)
Requirement already satisfied: msal<2.0.0,>=1.15.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.16.0)
Requirement already satisfied: PyJWT<3.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (2.3.0)
Requirement already satisfied: azure-graphrbac<1.0.0,>=0.40.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.61.1)
Requirement already satisfied: argcomplete<2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (1.12.3)
Requirement already satisfied: SecretStorage<4.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (3.3.1)
Requirement already satisfied: azure-mgmt-resource<21.0.0,>=15.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (20.0.0)
Requirement already satisfied: azure-mgmt-keyvault<10.0.0,>=0.40.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (9.3.0)
Requirement already satisfied: jsonpickle<3.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (2.1.0)
Requirement already satisfied: pathspec<1.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.9.0)
Requirement already satisfied: msal-extensions<0.4,>=0.3.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.3.1)
Requirement already satisfied: azure-mgmt-storage<20.0.0,>=16.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (19.0.0)
Requirement already satisfied: msrest<1.0.0,>=0.5.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-core->-r requirements.txt (line 13)) (0.6.21)
Requirement already satisfied: mlflow-skinny in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azureml-mlflow->-r requirements.txt (line 14)) (1.23.0)
Requirement already satisfied: six>=1.5 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from python-dateutil>=2.8.1->pandas>=1.2.0->-r requirements.txt (line 4)) (1.16.0)
Requirement already satisfied: aiosignal>=1.1.2 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.2.0)
Requirement already satisfied: yarl<2.0,>=1.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.7.2)
Requirement already satisfied: attrs>=17.3.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (21.4.0)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (2.0.10)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (4.0.2)
Requirement already satisfied: frozenlist>=1.1.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.3.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from aiohttp->adlfs>=2021.8.1->-r requirements.txt (line 5)) (6.0.2)
Requirement already satisfied: cffi in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azure-datalake-store<0.1,>=0.0.46->adlfs>=2021.8.1->-r requirements.txt (line 5)) (1.15.0)
Requirement already satisfied: certifi>=2017.4.17 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from requests>=2.17.3->mlflow->-r requirements.txt (line 10)) (2021.10.8)
Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from requests>=2.17.3->mlflow->-r requirements.txt (line 10)) (3.3)
Requirement already satisfied: zipp>=0.5 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from importlib-metadata!=4.7.0,>=3.7.0->mlflow->-r requirements.txt (line 10)) (3.7.0)
Requirement already satisfied: websocket-client>=0.32.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from docker>=4.0.0->mlflow->-r requirements.txt (line 10)) (1.2.3)
Requirement already satisfied: greenlet!=0.4.17; python_version >= "3" and (platform_machine == "aarch64" or (platform_machine == "ppc64le" or (platform_machine == "x86_64" or (platform_machine == "amd64" or (platform_machine == "AMD64" or (platform_machine == "win32" or platform_machine == "WIN32")))))) in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from sqlalchemy->mlflow->-r requirements.txt (line 10)) (1.1.2)
Requirement already satisfied: prometheus-client in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from prometheus-flask-exporter->mlflow->-r requirements.txt (line 10)) (0.12.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from packaging->mlflow->-r requirements.txt (line 10)) (3.0.6)
Requirement already satisfied: itsdangerous>=0.24 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from Flask->mlflow->-r requirements.txt (line 10)) (2.0.1)
Requirement already satisfied: Werkzeug>=0.14 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from Flask->mlflow->-r requirements.txt (line 10)) (2.0.2)
Requirement already satisfied: Jinja2>=2.10 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from Flask->mlflow->-r requirements.txt (line 10)) (2.11.2)
Requirement already satisfied: gitdb<5,>=4.0.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from gitpython>=2.1.0->mlflow->-r requirements.txt (line 10)) (4.0.9)
Requirement already satisfied: tabulate>=0.7.7 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from databricks-cli>=0.8.7->mlflow->-r requirements.txt (line 10)) (0.8.9)
Requirement already satisfied: importlib-resources; python_version < "3.9" in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from alembic->mlflow->-r requirements.txt (line 10)) (5.4.0)
Requirement already satisfied: Mako in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from alembic->mlflow->-r requirements.txt (line 10)) (1.1.6)
Requirement already satisfied: setuptools>=3.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from gunicorn; platform_system != "Windows"->mlflow->-r requirements.txt (line 10)) (50.3.0)
Requirement already satisfied: bcrypt>=3.1.3 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from paramiko<3.0.0,>=2.0.8->azureml-core->-r requirements.txt (line 13)) (3.2.0)
Requirement already satisfied: pynacl>=1.0.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from paramiko<3.0.0,>=2.0.8->azureml-core->-r requirements.txt (line 13)) (1.5.0)
Requirement already satisfied: pyasn1>=0.1.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from ndg-httpsclient<=0.5.1->azureml-core->-r requirements.txt (line 13)) (0.4.8)
Requirement already satisfied: azure-mgmt-core<2.0.0,>=1.2.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from azure-mgmt-containerregistry<9.0.0,>=8.2.0->azureml-core->-r requirements.txt (line 13)) (1.3.0)
Requirement already satisfied: pygments in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from knack~=0.8.2->azureml-core->-r requirements.txt (line 13)) (2.11.2)
Requirement already satisfied: colorama in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from knack~=0.8.2->azureml-core->-r requirements.txt (line 13)) (0.4.4)
Requirement already satisfied: backports.weakref in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from backports.tempfile->azureml-core->-r requirements.txt (line 13)) (1.0.post1)
Requirement already satisfied: jeepney>=0.6 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from SecretStorage<4.0.0->azureml-core->-r requirements.txt (line 13)) (0.7.1)
Requirement already satisfied: portalocker<3,>=1.0; python_version >= "3.5" and platform_system != "Windows" in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from msal-extensions<0.4,>=0.3.0->azureml-core->-r requirements.txt (line 13)) (2.3.2)
Requirement already satisfied: isodate>=0.6.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from msrest<1.0.0,>=0.5.1->azureml-core->-r requirements.txt (line 13)) (0.6.1)
Requirement already satisfied: requests-oauthlib>=0.5.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from msrest<1.0.0,>=0.5.1->azureml-core->-r requirements.txt (line 13)) (1.3.0)
Requirement already satisfied: pycparser in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from cffi->azure-datalake-store<0.1,>=0.0.46->adlfs>=2021.8.1->-r requirements.txt (line 5)) (2.21)
Requirement already satisfied: MarkupSafe>=0.23 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from Jinja2>=2.10->Flask->mlflow->-r requirements.txt (line 10)) (2.0.1)
Requirement already satisfied: smmap<6,>=3.0.1 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from gitdb<5,>=4.0.1->gitpython>=2.1.0->mlflow->-r requirements.txt (line 10)) (5.0.0)
Requirement already satisfied: oauthlib>=3.0.0 in /anaconda/envs/azureml_py38/lib/python3.8/site-packages (from requests-oauthlib>=0.5.0->msrest<1.0.0,>=0.5.1->azureml-core->-r requirements.txt (line 13)) (3.1.1)
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup cloud tracking[Mlflow](https://github.com/mlflow/mlflow) is a great tool for local ML experimentation tracking. However, using it alone is like using git without GitHub. Your Azure Machine Learning workspace can easily be used to set up a remote tracking URI for mlflow:
###Code
import mlflow
from azureml.core import Workspace
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("lightgbm-iris-local-example")
###Output
_____no_output_____
###Markdown
Load dataYou can read directly from public URIs into Pandas. For private Blob or ADLS data, you can use built-in Azure data protocols and pass in `storage_options` for credentials.
###Code
data_uri = "https://azuremlexamples.blob.core.windows.net/datasets/iris.csv"
import pandas as pd
df = pd.read_csv(data_uri)
df.head()
###Output
_____no_output_____
###Markdown
Write functionsAfter some experimentation, you may refactor your code into a few functions for logical steps in the ML training process:
###Code
# imports
import time
import lightgbm as lgb
from sklearn.metrics import log_loss, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
# define functions
def preprocess_data(df):
X = df.drop(["species"], axis=1)
y = df["species"]
enc = LabelEncoder()
y = enc.fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
return X_train, X_test, y_train, y_test, enc
def train_model(params, num_boost_round, X_train, X_test, y_train, y_test):
t1 = time.time()
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
model = lgb.train(
params,
train_data,
num_boost_round=num_boost_round,
valid_sets=[test_data],
valid_names=["test"],
)
t2 = time.time()
return model, t2 - t1
def evaluate_model(model, X_test, y_test):
y_proba = model.predict(X_test)
y_pred = y_proba.argmax(axis=1)
loss = log_loss(y_test, y_proba)
acc = accuracy_score(y_test, y_pred)
return loss, acc
###Output
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/dask/dataframe/utils.py:367: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
_numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/dask/dataframe/utils.py:367: FutureWarning: pandas.Float64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
_numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/dask/dataframe/utils.py:367: FutureWarning: pandas.UInt64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
_numeric_index_types = (pd.Int64Index, pd.Float64Index, pd.UInt64Index)
###Markdown
Run a trialNow, you can easily run local trials, editing the parameters and seeing how the model performs.
###Code
from sklearn.metrics import accuracy_score, log_loss
# preprocess data
X_train, X_test, y_train, y_test, enc = preprocess_data(df)
# set training parameters
params = {
"objective": "multiclass",
"num_class": 3,
"learning_rate": 0.1,
"metric": "multi_logloss",
"colsample_bytree": 1.0,
"subsample": 1.0,
"seed": 42,
}
num_boost_round = 32
# start run
run = mlflow.start_run()
# enable automatic logging
mlflow.lightgbm.autolog()
# train model
model, train_time = train_model(
params, num_boost_round, X_train, X_test, y_train, y_test
)
mlflow.log_metric("training_time", train_time)
# evaluate model
loss, acc = evaluate_model(model, X_test, y_test)
mlflow.log_metrics({"loss": loss, "accuracy": acc})
###Output
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000093 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 91
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 4
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -1.073920
[LightGBM] [Info] Start training from score -1.123930
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
###Markdown
Note the run is still active. You can continue experimenting with your model and log metrics or artifacts. For instance, you can log the notebook to the run. Save your notebook first to capture the outputs.
###Code
mlflow.log_artifact("1.train-lightgbm-local.ipynb")
###Output
_____no_output_____
###Markdown
Finally, mark the run as completed:
###Code
# end run
mlflow.end_run()
###Output
_____no_output_____
###Markdown
Train with LightGBM in an Interactive Run Install requirements
###Code
%pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Setup cloud tracking. [Mlflow](https://github.com/mlflow/mlflow) is a great tool for local ML experimentation tracking. However, using it alone is like using git without GitHub. Your Azure Machine Learning workspace can easily be used to set up a remote tracking URI for mlflow:
###Code
import mlflow
from azureml.core import Workspace
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("lightgbm-iris-local-example")
###Output
_____no_output_____
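###Markdown
If you want to double-check that tracking now points at the workspace rather than a local `./mlruns` folder, a quick optional check (not part of the original walkthrough) is to print the active tracking URI:
###Code
# Optional sanity check: show where MLflow will record runs
print(mlflow.get_tracking_uri())
###Output
_____no_output_____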
###Markdown
Load data. You can read directly from public URIs into Pandas. For private Blob or ADLS data, you can use built-in Azure data protocols and pass in `storage_options` for credentials.
###Code
data_uri = "https://azuremlexamples.blob.core.windows.net/datasets/iris.csv"
import pandas as pd
df = pd.read_csv(data_uri)
df.head()
###Output
_____no_output_____
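###Markdown
The cell above covers the public-URI case. For the private Blob/ADLS case mentioned above, the sketch below shows roughly what passing `storage_options` could look like; the account name, container, path, and key are placeholders (assumptions for illustration), and reading `abfs://` paths additionally requires the `fsspec`/`adlfs` packages.
###Code
# Hypothetical private ADLS/Blob path: every value below is a placeholder
private_uri = "abfs://my-container/datasets/iris.csv"
storage_options = {
    "account_name": "<storage-account-name>",
    "account_key": "<access-key>",  # or use a SAS token / AAD credential instead
}
# Uncomment once real credentials are in place:
# df_private = pd.read_csv(private_uri, storage_options=storage_options)
###Output
_____no_output_____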
###Markdown
Write functions. After some experimentation, you may refactor your code into a few functions for logical steps in the ML training process:
###Code
# imports
import time
import lightgbm as lgb
from sklearn.metrics import log_loss, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
# define functions
def preprocess_data(df):
X = df.drop(["species"], axis=1)
y = df["species"]
enc = LabelEncoder()
y = enc.fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
return X_train, X_test, y_train, y_test, enc
def train_model(params, num_boost_round, X_train, X_test, y_train, y_test):
t1 = time.time()
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
model = lgb.train(
params,
train_data,
num_boost_round=num_boost_round,
valid_sets=[test_data],
valid_names=["test"],
)
t2 = time.time()
return model, t2 - t1
def evaluate_model(model, X_test, y_test):
y_proba = model.predict(X_test)
y_pred = y_proba.argmax(axis=1)
loss = log_loss(y_test, y_proba)
acc = accuracy_score(y_test, y_pred)
return loss, acc
###Output
_____no_output_____
###Markdown
Run a trial. Now, you can easily run local trials, editing the parameters and seeing how the model performs.
###Code
from sklearn.metrics import accuracy_score, log_loss
# preprocess data
X_train, X_test, y_train, y_test, enc = preprocess_data(df)
# set training parameters
params = {
"objective": "multiclass",
"num_class": 3,
"learning_rate": 0.1,
"metric": "multi_logloss",
"colsample_bytree": 1.0,
"subsample": 1.0,
"seed": 42,
}
num_boost_round = 32
# start run
run = mlflow.start_run()
# enable automatic logging
mlflow.lightgbm.autolog()
# train model
model, train_time = train_model(
params, num_boost_round, X_train, X_test, y_train, y_test
)
mlflow.log_metric("training_time", train_time)
# evaluate model
loss, acc = evaluate_model(model, X_test, y_test)
mlflow.log_metrics({"loss": loss, "accuracy": acc})
###Output
_____no_output_____
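###Markdown
The label encoder `enc` returned by `preprocess_data` is not used above. If you want human-readable predictions, one possible follow-up (a sketch, not part of the original trial) is to map the predicted class indices back to the species names:
###Code
# Decode predicted class indices back to the original species labels
y_proba = model.predict(X_test)
y_pred = y_proba.argmax(axis=1)
print(enc.inverse_transform(y_pred)[:5])  # first few decoded predictions
###Output
_____no_output_____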
###Markdown
Note the run is still active. You can continue experimenting with your model and log metrics or artifacts. For instance, you can log the notebook to the run. Save your notebook first to capture the outputs.
###Code
mlflow.log_artifact("1.train-lightgbm-local.ipynb")
###Output
_____no_output_____
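###Markdown
Beyond logging the notebook itself, anything else can be attached while the run is active. The snippet below is illustrative only; the tag name is arbitrary, and both calls are plain MLflow APIs:
###Code
# Attach an arbitrary tag and an extra value to the still-active run
mlflow.set_tag("data_source", data_uri)
mlflow.log_metric("num_boost_round", num_boost_round)
###Output
_____no_output_____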
###Markdown
Finally, mark the run as completed:
###Code
# end run
mlflow.end_run()
###Output
_____no_output_____
###Markdown
Train with LightGBM in an Interactive Run Install requirements
###Code
import os
on_ci = "CI_NAME" in os.environ
if on_ci:
os.system("/anaconda/envs/azureml_py38/bin/pip install -r requirements.txt")
else:
os.system("pip install -r requirements.txt")
###Output
_____no_output_____
###Markdown
Setup cloud tracking. [Mlflow](https://github.com/mlflow/mlflow) is a great tool for local ML experimentation tracking. However, using it alone is like using git without GitHub. Your Azure Machine Learning workspace can easily be used to set up a remote tracking URI for mlflow:
###Code
import mlflow
from azureml.core import Workspace
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("lightgbm-iris-local-example")
###Output
_____no_output_____
###Markdown
Load data. You can read directly from public URIs into Pandas. For private Blob or ADLS data, you can use built-in Azure data protocols and pass in `storage_options` for credentials.
###Code
data_uri = "https://azuremlexamples.blob.core.windows.net/datasets/iris.csv"
import pandas as pd
df = pd.read_csv(data_uri)
df.head()
###Output
_____no_output_____
###Markdown
Write functions. After some experimentation, you may refactor your code into a few functions for logical steps in the ML training process:
###Code
# imports
import time
import lightgbm as lgb
from sklearn.metrics import log_loss, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
# define functions
def preprocess_data(df):
X = df.drop(["species"], axis=1)
y = df["species"]
enc = LabelEncoder()
y = enc.fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
return X_train, X_test, y_train, y_test, enc
def train_model(params, num_boost_round, X_train, X_test, y_train, y_test):
t1 = time.time()
train_data = lgb.Dataset(X_train, label=y_train)
test_data = lgb.Dataset(X_test, label=y_test)
model = lgb.train(
params,
train_data,
num_boost_round=num_boost_round,
valid_sets=[test_data],
valid_names=["test"],
)
t2 = time.time()
return model, t2 - t1
def evaluate_model(model, X_test, y_test):
y_proba = model.predict(X_test)
y_pred = y_proba.argmax(axis=1)
loss = log_loss(y_test, y_proba)
acc = accuracy_score(y_test, y_pred)
return loss, acc
###Output
_____no_output_____
###Markdown
Run a trial. Now, you can easily run local trials, editing the parameters and seeing how the model performs.
###Code
from sklearn.metrics import accuracy_score, log_loss
# preprocess data
X_train, X_test, y_train, y_test, enc = preprocess_data(df)
# set training parameters
params = {
"objective": "multiclass",
"num_class": 3,
"learning_rate": 0.1,
"metric": "multi_logloss",
"colsample_bytree": 1.0,
"subsample": 1.0,
"seed": 42,
}
num_boost_round = 32
# start run
run = mlflow.start_run()
# enable automatic logging
mlflow.lightgbm.autolog()
# train model
model, train_time = train_model(
params, num_boost_round, X_train, X_test, y_train, y_test
)
mlflow.log_metric("training_time", train_time)
# evaluate model
loss, acc = evaluate_model(model, X_test, y_test)
mlflow.log_metrics({"loss": loss, "accuracy": acc})
# end run
mlflow.end_run()
###Output
_____no_output_____ |
Projeto_1_FREELAS.ipynb | ###Markdown
**1. Business understanding** **BUSINESS PROBLEM:** **1.** How to track the "health" of the freelance services provided? **2.** How to increase revenue? **3.** Which types of service should be the focus? --- * **OBJECTIVE:** understand the business model of my freelance services and build an interactive web-based report with the main metrics and insights about the projects. * **SUCCESS CRITERIA:** any user who accesses the dashboard (built with Streamlit) should be able to draw insights from it. * **RESOURCES:** Google Sheets, Google Colab (virtual machine with 12 GB of RAM), Kate text editor, VS Code, versioning with Git and a GitHub repository, web app deployment with Streamlit. * **DATA MINING GOALS:** provide KPIs both for monitoring and for comparisons. * **STRUCTURAL PLAN:** *Google Sheets $\rightarrow$ Python $\rightarrow$ Google Colab $\rightarrow$ KPIs and visualizations $\rightarrow$ Scripting with VS Code $\rightarrow$ Implementation and deployment with Streamlit* --- **Insights/Business questions:** 1. How many projects were completed? And per country? 2. How much was earned in total (revenue)? And per country? 3. What is the total number of hours worked so far? And the average per project? 4. Which countries provide the most clients (or completed projects)? 5. Which services yield the best hourly rate? 6. What is the total amount of fees (in monetary terms)? --- > Yearly **KPIs**: 1. Total gross revenue per year 2. Total net revenue per year 3. Net revenue per country (client) 4. Net revenue per source 5. Net revenue per service type 6. Average hourly rate per service type --- > Monthly **KPIs**: 1. Completed projects 2. Gross revenue 3. Net revenue 4. Total hours worked 5. Average weekly hours 6. Average hourly rate **2. Data understanding** **Importing the required libraries**
###Code
import pandas as pd # manipulação dados (dataframes, séries)
import numpy as np # array (vetorização)
# Plots
import matplotlib.pyplot as plt
import seaborn as sns
# Estilizando os plots
from pylab import *
rc('axes', lw=1.25, axisbelow=True)
# Aumentando a qualidade dos plots (base: svg)
# from IPython import display
# display.set_matplotlib_formats('png')
# Configs. adicionais
%matplotlib inline
sns.set_context('notebook')
%config InlineBackend.figure_format = 'png'
###Output
_____no_output_____
###Markdown
> **Loading the data (collection)**. Here, the data is loaded directly from a (shared) Google Sheets spreadsheet. This way, every change in the original data source is taken into account and there is no need to upload the dataset.
###Code
google_seet_id = '1RJMatmCHTTd7PieVGeI0-A4c_6lHmOu6b-HsGLlXVBY' # link (planilha)
workseet_name = 'Data' # 'aba' de trabalho
# guarando o URL padrão do Google Sheets (com compartilhamento)
URL = 'https://docs.google.com/spreadsheets/d/{0}/gviz/tq?tqx=out:csv&sheet={1}'.format(
google_seet_id,
workseet_name
)
dataset = pd.read_csv(URL) # lê o dataset como um dataframe do pandas
dataset.head() # mostra as 5 primeiras linhas
###Output
_____no_output_____
###Markdown
> **Data exploration**. Getting information about the dataframe (columns, variable types, and null/missing values).
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 101 entries, 0 to 100
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Work 101 non-null object
1 Country 101 non-null object
2 Subject 101 non-null object
3 Source 101 non-null object
4 Month (Worked) 101 non-null object
5 Month (Payment) 101 non-null object
6 Date (Payday) 61 non-null object
7 Worked hours 101 non-null float64
8 Budget 101 non-null object
9 Earned (Liquid) 101 non-null object
10 Hourly Rate 101 non-null object
11 Year 101 non-null int64
dtypes: float64(1), int64(1), object(10)
memory usage: 9.6+ KB
###Markdown
---
As we can see, the **dataset** is made up of 12 columns and 101 rows. The data can also be described using the **Data Dictionary** below:

Index | Column | Description
------ | --------------- | -----------
0 | Work | Project (contains an ID with the project number)
1 | Country | Client's country
2 | Subject | Type of service provided
3 | Source | Source of the service (Marketplace, CRM, Direct)
4 | Month (Worked) | Month worked (in some cases payment happens in the following month)
5 | Month (Payment) | Month in which the project budget was received
6 | Date (Payday) | Payment date
7 | Worked hours | Total hours worked on each project
8 | Budget | Project budget (gross revenue), in USD
9 | Earned (Liquid) | Net revenue of the project, in USD
10 | Hourly Rate | Hourly rate, in USD/h
11 | Year | Year in which the project was completed

Note that there are 40 null values in the **'Date (Payday)'** column; this is because some values are not recognized in the date-time format. Confirming the **shape** of the **dataframe**:
###Code
linhas, colunas = dataset.shape
print(f'{linhas} Linhas e {colunas} colunas')
###Output
101 Linhas e 12 colunas
###Markdown
Checking for **null/missing** values:
###Code
dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
Indeed, there are 40 null values, due to the formatting issue mentioned earlier. Representing this as a percentage:
###Code
round(dataset.isnull().sum() / len(dataset) * 100, 2)
###Output
_____no_output_____
###Markdown
We will probably have to apply some corrections to the dataframe before moving on to the next steps. Since the **Date (Payday)** column will not be used, there is no need to apply any correction or special treatment to that data. Note, however, that the **Budget**, **Earned (Liquid)** and **Hourly Rate** columns are stored as strings (because of the "U$D" prefix). So we need to fix that:
###Code
# Retirando "$" e as vírgulas:
dataset['Budget'] = dataset[['Budget']].applymap(
lambda registro: str(registro).replace('$', '').replace(',', '')
)
# Convertendo o tipo das variáveis da coluna 'Budget' de string para float:
dataset['Budget'] = dataset['Budget'].astype('float64')
# Checagem
dataset[['Budget']].dtypes
###Output
_____no_output_____
###Markdown
We apply the same approach to the remaining columns:
###Code
dataset['Earned (Liquid)'] = dataset[['Earned (Liquid)']].applymap(
lambda registro: str(registro).replace('$', '').replace(',', '')
)
dataset['Earned (Liquid)'] = dataset['Earned (Liquid)'].astype('float64')
dataset['Hourly Rate'] = dataset[['Hourly Rate']].applymap(
lambda registro: str(registro).replace('$', '').replace(',', '')
)
dataset['Hourly Rate'] = dataset['Hourly Rate'].astype('float64')
###Output
_____no_output_____
###Markdown
Check:
###Code
dataset.dtypes
###Output
_____no_output_____
###Markdown
We also need to convert the **Year** column to `string`:
###Code
dataset['Year'] = dataset['Year'].astype('str')
dataset.dtypes
###Output
_____no_output_____
###Markdown
> **Data quality**
###Code
# dados quantitativos:
dataset.describe().T
# dados categóricos:
dataset.select_dtypes('object').describe()
###Output
_____no_output_____
###Markdown
As shown by the descriptive statistics, and especially by the quartiles and the **range** (maximum minus minimum value) of each column, the data seems to be of good quality and suitable for the next steps. **3. Data preparation** We saw earlier that there are no null values in the columns of interest, since the **Date (Payday)** column will not be used in the analysis. Checking the missing data again after the transformations:
###Code
dataset.drop(columns='Date (Payday)', axis=1, inplace=True)
dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
> **Checking for outliers**. Note: in this case, **outliers** are very welcome (from a practical point of view), since they represent projects whose *budget* is unusually high compared to the typical values. Since no **Machine Learning** model will be applied here, they should not cause problems for now. We can check with a **boxplot**:
###Code
# Selecionando apenas colunas qualitativas:
colunas_quantitativas = dataset.select_dtypes('float64')
# Plotando o boxplot:
boxplot = sns.boxplot(data = colunas_quantitativas);
###Output
_____no_output_____
###Markdown
We can also visualize the **distribution** of each quantitative (numeric) column:
###Code
sns.histplot(colunas_quantitativas, kde=True, palette='YlGnBu')
###Output
_____no_output_____
###Markdown
In general, values above 500 on the '**x**' axis appear to be **outliers**. So let's **zoom in** to take a better look at the distribution of each column.
###Code
sns.histplot(colunas_quantitativas, multiple='stack', palette='YlGnBu',lw=0.0)
plt.xlim([0, 450]);
###Output
_____no_output_____
###Markdown
> **Renaming the columns to make data handling easier**
###Code
dataset.columns #mostra o nome das colunas do dataframe
# Eliminando a coluna 'Mês (trabalhado)',
# vamos usar o 'Mês (Pagamento)' como referência
dataset.drop(columns = ['Month (Worked)'], axis=1, inplace=True)
# Renomeando as colunas:
dataset.rename(columns =
{'Work': 'Project',
'Month (Payment)': 'Month',
'Worked hours': 'Worked_hours',
'Earned (Liquid)': 'Earned',
'Hourly Rate': 'Hourly_rate'},
inplace=True
)
# Checagem:
dataset.head()
###Output
_____no_output_____
###Markdown
**4. Answering the business questions** (CRISP-DM 'Modeling') **Completed projects (total and per country)**
###Code
total_projects = dataset['Project'].count().sum()
print(f'Projetos completos (total): {total_projects}')
###Output
Projetos completos (total): 101
###Markdown
From the code block above, we can see that **101 projects** were completed in total. We can also break this information down **by the client's country**.
###Code
# agrupando os dados por país e mostrando a contagem descendente:
projects_by_country = dataset.groupby('Country')['Project'].count().sort_values(ascending=False)
projects_by_country
###Output
_____no_output_____
###Markdown
> **Visualizing this in a chart**
###Code
projects_by_country.plot.barh(color='cornflowerblue', edgecolor='k', lw=1.2)
###Output
_____no_output_____
###Markdown
We can see that the countries with the largest number of completed projects were the **USA, UK, Germany** and the **Netherlands**. Also worth noting is a country whose currency is weaker than the others, **Saudi Arabia**. **Revenue (total and per country)**
###Code
total_budget = round( dataset['Budget'].sum(), 2)
total_earned = round( dataset['Earned'].sum(), 2)
print(f'Total budgets (in U$D): {total_budget} | (Receita bruta total)')
print(f'Total earned (in U$D): {total_earned} | (Receita líquida total)')
###Output
Total budgets (in U$D): 11692.98 | (Receita bruta total)
Total earned (in U$D): 9819.21 | (Receita líquida total)
###Markdown
> **Getting net revenue per country**
###Code
earned_by_country = dataset.groupby('Country')['Earned'].sum().sort_values(ascending=True)
###Output
_____no_output_____
###Markdown
> **Visualization**
###Code
plot_order = dataset.groupby('Country')['Earned'].sum().sort_values(ascending=True).index
sns.barplot(x='Earned', y='Country', estimator=sum, data=dataset,
edgecolor='k', lw=1.2, ci=None, order = plot_order, palette='gist_heat'
)
plt.xticks(np.arange(0,5500,500));
###Output
_____no_output_____
###Markdown
**Total hours worked and average hours per project** > Getting the **sum** of **all hours worked** and the **average** as well.
###Code
total_hours = int( dataset['Worked_hours'].sum() )
mean_hours = int( dataset['Worked_hours'].mean() )
print(f'Horas totais trabalhadas: ~ {total_hours} h')
print(f'Média de horas totais trabalhadas: ~ {mean_hours} h por projeto')
###Output
Horas totais trabalhadas: ~ 469 h
Média de horas totais trabalhadas: ~ 4 h por projeto
###Markdown
**Countries with the most clients (number of projects)** We already looked into this earlier, but we can re-check:
###Code
projects_by_country = dataset.groupby('Country')['Project'].count().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Visual inspection:
###Code
projects_by_country.plot.barh(color = 'cornflowerblue', edgecolor = 'k')
plt.xticks(np.arange(0,65,5));
###Output
_____no_output_____
###Markdown
> Expressing this as **percentages**:
###Code
percentage = projects_by_country / total_projects * 100
percentage
###Output
_____no_output_____
###Markdown
**Services with the best hourly rate** > **Grouping** the average hourly rate by service type. Note that *outliers* can push the mean up or down, but we will keep them in the analysis.
###Code
mean_hourly_rate_by_service = dataset.groupby('Subject')['Hourly_rate'].mean().sort_values(ascending=False)
###Output
_____no_output_____
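###Markdown
Since outliers can pull the mean up or down, a median-based version of the same grouping is a quick robustness check (optional, not required for the analysis):
###Code
# Median hourly rate per service type (less sensitive to outliers than the mean)
median_hourly_rate_by_service = dataset.groupby('Subject')['Hourly_rate'].median().sort_values(ascending=False)
median_hourly_rate_by_service
###Output
_____no_output_____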
###Markdown
> **Visualization**
###Code
plt.figure(figsize=(18/2.54, 10/2.54))
sns.boxplot(x = 'Subject', y = 'Hourly_rate', data = dataset)
plt.xticks(rotation=45);
plt.xlabel('');
###Output
_____no_output_____
###Markdown
Based on the **boxplots** above, we can see that around 50% of the projects involving LaTeX and Python have a more attractive average hourly rate. So, when demand is high, choosing **LaTeX** and **Python** services over **Educational** services may be more advantageous, besides taking less time. **Total fees (in monetary terms)** > Creating a column for the **'Fees'** in monetary terms
###Code
dataset['Fees'] = dataset['Budget'] - dataset['Earned']
dataset.head()
###Output
_____no_output_____
###Markdown
Over the whole period, the total amount paid in **fees** (*payment services* and *marketplaces*) was:
###Code
total_fees = round(dataset['Fees'].sum(), 2)
print(f'Total fees (in U$D): {total_fees}')
###Output
Total fees (in U$D): 1873.77
###Markdown
> **Checking how much this represents of the total project budgets**
###Code
fees_percent = round( dataset['Fees'].sum() / total_budget * 100, 2)
print(f'Total fees (in relation to the total budget): {fees_percent}%')
###Output
Total fees (in relation to the total budget): 16.02%
###Markdown
> **Inspecting which source charges the most fees**
###Code
sns.boxplot(x='Source', y='Fees', data = dataset)
plt.xlabel(' ');
plt.yticks(np.arange(0,90,10));
plt.ylim([0,80]);
###Output
_____no_output_____
###Markdown
From the **boxplot** above, we infer that the best way to **reduce** the **fees** is to increase the number of projects handled directly with the client. **5. KPIs** (CRISP-DM 'Evaluation') **Yearly** First, let's create a dataframe for each year:
###Code
dataset_2020 = dataset.query('Year == "2020"')
dataset_2021 = dataset.query('Year == "2021"')
###Output
_____no_output_____
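###Markdown
Keeping one dataframe per year works well for the cells that follow; as a cross-check, the same yearly totals can also be obtained in a single groupby (shown here only for reference):
###Code
# Cross-check: gross and net revenue per year in one table
dataset.groupby('Year')[['Budget', 'Earned']].sum().round(2)
###Output
_____no_output_____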
###Markdown
With that, we can move on to computing the KPIs: > KPI 1: **Total gross revenue per year**
###Code
total_budget_2020 = round( dataset_2020['Budget'].sum(), 2)
total_budget_2021 = round( dataset_2021['Budget'].sum(), 2)
print(f'Receita Bruta total (2020): U$D {total_budget_2020}')
print(f'Receita Bruta total (2021): U$D {total_budget_2021}')
###Output
Receita Bruta total (2020): U$D 3238.54
Receita Bruta total (2021): U$D 6334.44
###Markdown
> KPI 2: **Total net revenue per year**
###Code
total_earned_2020 = round( dataset_2020['Earned'].sum(), 2)
total_earned_2021 = round( dataset_2021['Earned'].sum(), 2)
print(f'Receita Líquida total (2020): U$D {total_earned_2020}')
print(f'Receita Líquida total (2021): U$D {total_earned_2021}')
###Output
Receita Líquida total (2020): U$D 2686.79
Receita Líquida total (2021): U$D 5262.53
###Markdown
> **Comparing the 2020 and 2021 revenue**
###Code
comparacao_anual_budget = round( ((total_budget_2021/total_budget_2020)), 2)
print(f'Comparação anual (budget): {comparacao_anual_budget}')
comparacao_anual_earned = round( ((total_earned_2021/total_earned_2020)), 2)
print(f'Comparacao anual (earned): {comparacao_anual_earned}')
###Output
Comparacao anual (earned): 1.96
###Markdown
**INSIGHT 1:** From 2020 to 2021, revenue grew by around **96%** (both gross and net). > **Net revenue per country (by year)**
###Code
receita_pais_2020 = dataset_2020.groupby('Country')['Earned'].sum().sort_values(ascending=False)
receita_pais_2020
###Output
_____no_output_____
###Markdown
**INSIGHT 2:** By far, most of the 2020 revenue comes from clients in the USA. The country base of these clients was still small, since it was the beginning of the freelance journey.
###Code
receita_pais_2021 = dataset_2021.groupby('Country')['Earned'].sum().sort_values(ascending=False)
receita_pais_2021
###Output
_____no_output_____
###Markdown
**INSIGHT 3:** Most of the 2021 revenue still comes from clients in the USA, but clients from new countries appeared, with the Netherlands even generating more revenue than clients from the UK.
###Code
# Calculando o número de novos clientes (de outros países):
receita_pais_2021.count() - receita_pais_2020.count()
###Output
_____no_output_____
###Markdown
**INSIGHT 4:** In 2021, projects were closed with clients from **7** new countries compared to 2020. > KPI 3: **Net revenue per source**
###Code
# 2020
receita_fonte_2020 = dataset_2020.groupby('Source')[['Earned']].sum()
receita_fonte_2020.T
# 2021
receita_fonte_2021 = dataset_2021.groupby('Source')[['Earned']].sum()
receita_fonte_2021.T
###Output
_____no_output_____
###Markdown
Comparing the values:
###Code
( ((receita_fonte_2021 / receita_fonte_2020) - 1 ) * 100).T
###Output
_____no_output_____
###Markdown
**INSIGHT 5:** In 2021, net revenue grew considerably across all income sources. The highlight is the impressive **173%** increase in revenue from projects handled directly with the client, that is, without an intermediary (*marketplace* or CRM). Of course, more projects were closed over the year, but the client base with which projects were handled directly contributed strongly to the higher revenue.
###Code
# 2020
receita_servico_2020 = dataset_2020.groupby('Subject')['Earned'].sum()
receita_servico_2020.sort_values(ascending=False)
# 2021
receita_servico_2021 = dataset_2021.groupby('Subject')['Earned'].sum()
receita_servico_2021.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Comparison:
###Code
449.27/284.14
###Output
_____no_output_____
###Markdown
**INSIGHT 6:** In 2020, the service type with the highest return was **Educational**, although **LaTeX typesetting** also provided a very good financial return. However, once the scalability of the **LaTeX tables** service became clear (it is a cheap and extremely fast service, requiring little time, which can produce very high hourly rates), the focus shifted to closing more of that type of project. That, combined with the fact that the contracts for the main **educational services** were coming to an end, led to higher profitability for services related to **LaTeX typesetting** and **LaTeX tables** (the latter with a **58%** increase over the previous year).
###Code
valor_hora_servico_2020 = dataset_2020.groupby('Subject')['Hourly_rate'].mean()
valor_hora_servico_2020.sort_values(ascending=False)
valor_hora_servico_2021 = dataset_2021.groupby('Subject')['Hourly_rate'].mean()
valor_hora_servico_2021.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
**INSIGHT 8:** In 2020, the **LaTeX tables** service type accounted for the highest average hourly rate. In 2021, however, the **LaTeX typesetting** service type took first place, thanks to closing more complex projects with larger *budgets*. > Comparison of the **hourly rate**:
###Code
round((valor_hora_servico_2021.mean()/valor_hora_servico_2020.mean()) * 100, 2)
###Output
_____no_output_____
###Markdown
**INSIGHT 9:** From 2020 to 2021, the hourly rate dropped by around **22%**. This may be due to the lower average hourly rate of the **Educational** service type and to the introduction of the **Python plots** service type, which changes the overall average. > Checking the **months worked** in each year:
###Code
meses_trabalhados_2020 = len( list(dataset_2020['Month'].unique()) )
meses_trabalhados_2021 = len( list(dataset_2021['Month'].unique()) )
print(f'Meses trabalhados em 2020: {meses_trabalhados_2020}')
print(f'Meses trabalhados em 2021: {meses_trabalhados_2021}')
###Output
Meses trabalhados em 2020: 5
Meses trabalhados em 2021: 12
###Markdown
**Monthly** The same yearly KPIs can be extended to monthly data. Let's look at an example: > **Completed projects**
###Code
projetos_mensal_2020 = dataset_2020.groupby('Month')['Project'].count()
ordem_indice_2020 = ['August', 'October', 'September', 'November', 'December']
projetos_mensal_2020 = projetos_mensal_2020.reindex(ordem_indice_2020)
projetos_mensal_2020
projetos_mensal_2021 = dataset_2021.groupby('Month')['Project'].count()
ordem_indice_2021 = ['January', 'February', 'March', 'April', 'May', 'June', 'July',
'August', 'October', 'September', 'November', 'December']
projetos_mensal_2021 = projetos_mensal_2021.reindex(ordem_indice_2021)
projetos_mensal_2021
###Output
_____no_output_____
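###Markdown
The same pattern extends to the other monthly KPIs listed at the start of the notebook; for example, monthly net revenue for 2021 could be computed as below (a sketch that reuses the month ordering defined above):
###Code
# Monthly net revenue for 2021, ordered by calendar month
receita_mensal_2021 = dataset_2021.groupby('Month')['Earned'].sum().reindex(ordem_indice_2021)
receita_mensal_2021
###Output
_____no_output_____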
###Markdown
Example visualization:
###Code
projetos_mensal_2020.plot.barh(color='whitesmoke', edgecolor='k', label='2020')
projetos_mensal_2021.iloc[7:12].plot.barh(color='royalblue', edgecolor='k', label='2021', hatch='/')
plt.legend(loc='best', edgecolor='w', prop={'weight': 'bold'});
plt.xticks(np.arange(0,11));
plt.grid(alpha=0.09);
plt.ylabel('');
###Output
_____no_output_____ |
.ipynb_checkpoints/work0-checkpoint.ipynb | ###Markdown
Config
###Code
class Config:
n_folds=10
random_state=42
tbs = 1024
vbs = 512
data_path="data"
result_path="results"
models_path="models"
###Output
_____no_output_____
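###Markdown
The cells below reference `pd`, `tf`, `tfa`, `StratifiedKFold` and several Keras layers, but no import cell was captured in this checkpoint. The block below is reconstructed from that usage and is an assumption about the original environment, not part of the saved notebook:
###Code
# Imports inferred from the cells below (not captured in this checkpoint)
import os
import pandas as pd
import tensorflow as tf
import tensorflow_addons as tfa
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.layers import Input, Embedding, GRU, Bidirectional, Dense, Dropout
from tensorflow.keras.activations import relu, softmax
###Output
_____no_output_____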
###Markdown
plot and util
###Code
def write_to_txt(file_name,column):
with open(file_name, 'w') as f:
for item in column:
f.write("%s\n" % item)
###Output
_____no_output_____
###Markdown
Load data
###Code
train=pd.read_csv(os.path.join(Config.data_path,"train.csv"))
test=pd.read_csv(os.path.join(Config.data_path,"test.csv"))
aae=pd.read_csv(os.path.join(Config.data_path,"amino_acid_embeddings.csv"))
submission=pd.read_csv(os.path.join(Config.data_path,"SampleSubmission.csv"))
###Output
_____no_output_____
###Markdown
Prepare and split data
###Code
train["Sequence_len"]=train["Sequence"].apply(lambda x : len(x))
test["Sequence_len"]=test["Sequence"].apply(lambda x : len(x))
max_seq_length = 550 # max seq length in this data set is 550
#stratified k fold
train["folds"]=-1
kf = StratifiedKFold(n_splits=Config.n_folds, random_state=Config.random_state, shuffle=True)
for fold, (_, val_index) in enumerate(kf.split(train,train["target"])):
train.loc[val_index, "folds"] = fold
train.head()
# reduce seq length
if max_seq_length>550 :
train["Sequence"] = train["Sequence"].apply(lambda x: "".join(list(x)[0:max_seq_length]))
test["Sequence"] = test["Sequence"].apply(lambda x: "".join(list(x)[0:max_seq_length]))
voc_set = set(['P', 'V', 'I', 'K', 'N', 'B', 'F', 'Y', 'E', 'W', 'R', 'D', 'X', 'S', 'C', 'U', 'Q', 'A', 'M', 'H', 'L', 'G', 'T'])
voc_set_map = { k:v for k , v in zip(voc_set,range(1,len(voc_set)+1))}
number_of_class = train["target"].nunique()
def encode(text_tensor, label):
encoded_text = [ voc_set_map[e] for e in list(text_tensor.numpy().decode())]
return encoded_text, label
def encode_map_fn(text, label):
# py_func doesn't set the shape of the returned tensors.
encoded_text, label = tf.py_function(encode,
inp=[text, label],
Tout=(tf.int64, tf.int64))
encoded_text.set_shape([None])
label=tf.one_hot(label,number_of_class)
label.set_shape([number_of_class])
return encoded_text, label
def get_data_loader(file,batch_size,labels):
label_data=tf.data.Dataset.from_tensor_slices(labels)
data_set=tf.data.TextLineDataset(file)
data_set=tf.data.Dataset.zip((data_set,label_data))
data_set=data_set.repeat()
data_set = data_set.shuffle(len(labels))
data_set=data_set.map(encode_map_fn,tf.data.experimental.AUTOTUNE)
data_set=data_set.padded_batch(batch_size)
data_set = data_set.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return data_set
def get_data_loader_test(file,batch_size,labels):
label_data=tf.data.Dataset.from_tensor_slices(labels.target)
data_set=tf.data.TextLineDataset(file)
data_set=tf.data.Dataset.zip((data_set,label_data))
data_set=data_set.map(encode_map_fn,tf.data.experimental.AUTOTUNE)
data_set=data_set.padded_batch(batch_size)
data_set = data_set.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return data_set
###Output
_____no_output_____
###Markdown
Model
###Code
def model():
name = "seq"
dropout_rate = 0.1
learning_rate = 0.001
sequnce = Input([None],name="sequnce")
EMB_layer = Embedding(input_dim = len(voc_set)+1, output_dim = 64, name = "emb_layer")
GRU_layer_2 = GRU(units=256, name = "gru_2", return_sequences = False)
BIDIR_layer_2 = Bidirectional(GRU_layer_2, name="bidirectional_2")
Dens_layer_1 = Dense(units=512, activation=relu, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_1")
Dens_layer_2 = Dense(units=256, activation=relu, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_2")
output = Dense(units=number_of_class, activation=softmax, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_output")
dropout_1 = Dropout(dropout_rate)
emb_layer = EMB_layer(sequnce)
logits = output(Dens_layer_2(dropout_1(Dens_layer_1(BIDIR_layer_2(emb_layer)))))
model = tf.keras.Model(inputs={"sequnce":sequnce, },outputs=logits)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
loss= tfa.losses.SigmoidFocalCrossEntropy(reduction=tf.keras.losses.Reduction.AUTO)
#loss=CategoricalCrossentropy()
model.compile(optimizer=optimizer, loss=loss, metrics=[tf.keras.metrics.CategoricalAccuracy(name="Acc")])
model.summary()
return model
###Output
_____no_output_____
###Markdown
training
###Code
def trainn(fold):
model_path=f"model_{fold}.h5"
df_train = train[train["folds"] != fold].reset_index(drop=True)
df_valid = train[train["folds"] == fold].reset_index(drop=True)
write_to_txt(f"data/train_{fold}.txt",df_train.Sequence)
write_to_txt(f"data/valid_{fold}.txt",df_valid.Sequence)
train_label=df_train["target"]
valid_label=df_valid["target"]
train_dl = get_data_loader(f"data/train_{fold}.txt",Config.tbs,train_label)
valid_dl = get_data_loader(f"data/valid_{fold}.txt",Config.vbs,valid_label)
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(Config.models_path,model_path),
save_weights_only=True,monitor = 'val_loss',
save_best_only=True,mode="min", verbose=1)
callbacks=[checkpoint]
my_model = model()
history = my_model.fit(train_dl,
validation_data=valid_dl,
epochs=15,
verbose=1,
batch_size=Config.tbs,
validation_batch_size=Config.vbs,
validation_steps=len(df_valid)//Config.vbs,
steps_per_epoch=len(df_train)/Config.tbs,
callbacks=callbacks
)
def predict(fold):
model_path=f"model_{fold}.h5"
write_to_txt(f"data/test_{fold}.txt",test.Sequence)
test["target"]=0
test_label=test["target"]
test_dl = get_data_loader_test(f"data/test_{fold}.txt",Config.vbs,test)
my_model = model()
my_model.load_weights(os.path.join(Config.models_path,model_path))
prediction=my_model.predict(test_dl)
return prediction
trainn(0)
p=predict(0)
sub=test[["ID"]].copy()
for i in range(number_of_class):
sub["target_{}".format(i)]=p[:,i]
sub.head()
sub.to_csv(os.path.join(Config.result_path,"sub_p0_epoch15.csv"),index=False)
###Output
_____no_output_____ |
Half Adder/Version 1/HalfAdder_qiskit.ipynb | ###Markdown
Half Adder Quantum Circuit (v1.0) The following is a quantum implementation of the classical Half Adder circuit. The image below is the classical implementation of a Half Adder with a XOR and AND gate for Sum and Carry respectively.
###Code
from qiskit import *
from qiskit.tools.visualization import plot_bloch_multivector
from qiskit.tools.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
Defining the Circuit
###Code
quantReg = QuantumRegister(4)
classReg = ClassicalRegister(2)
circuit = QuantumCircuit(quantReg, classReg)
circuit.x(0) #Comment out this line to make A = 0
circuit.x(1) #Comment out this line to make B = 0
circuit.barrier()
###Output
_____no_output_____
###Markdown
The lines above allow the user to test out the different variations of the Half Adder. By default it is set to A = 1 and B = 1. If the user wants to change the gates, they will have to run the simulator and the output cells again.
###Code
%matplotlib inline
circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
Performing the Sum Operation In the classical working of the Half Adder, the Sum is calculated by XORing A and B. In a quantum circuit, the Controlled NOT (or CNOT for short) gate performs this operation
###Code
circuit.cx(0,2)
circuit.cx(1,2)
circuit.barrier()
circuit.draw(output = 'mpl')
###Output
_____no_output_____
###Markdown
A trivial point to note is that ```q_0 CX q_1 ~= q_0 XOR q_1```, with the result of the XOR stored in q_1. However, we don't want the value to be stored in q_1; we want a separate qubit to store it. So q_2 ends up holding ```q_1 XOR (q_0 XOR q_2)```. Since the default value of q_2 is 0, this is ```q_1 XOR (q_0 XOR 0) = (q_1 XOR q_0) XOR 0```. XORing with 0 does not change the result: if ```(q_1 XOR q_0) = 0``` then ```(q_1 XOR q_0) XOR 0 = 0```, and the same applies for the 1 case.
###Code
circuit.ccx(0,1,3)
circuit.barrier()
circuit.draw(output = 'mpl')
circuit.measure(2,0)
circuit.measure(3,1)
circuit.draw(output = 'mpl')
###Output
_____no_output_____
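###Markdown
As noted earlier, trying a different A/B combination means editing the `circuit.x(...)` lines at the top and re-running everything. The sketch below (not part of the original notebook) wraps the same gates in a helper so all four input combinations can be checked in one loop; the counts keys read as carry then sum (e.g. `10` for 1 + 1):
###Code
# Build and simulate the same half adder for arbitrary inputs a, b (0 or 1)
def half_adder_counts(a, b):
    qc = QuantumCircuit(4, 2)
    if a:
        qc.x(0)
    if b:
        qc.x(1)
    # Sum: after both CNOTs, q2 = a XOR b
    qc.cx(0, 2)
    qc.cx(1, 2)
    # Carry: q3 = a AND b
    qc.ccx(0, 1, 3)
    qc.measure(2, 0)
    qc.measure(3, 1)
    sim = Aer.get_backend('qasm_simulator')
    return execute(qc, backend=sim).result().get_counts()

for a in (0, 1):
    for b in (0, 1):
        print(a, '+', b, '->', half_adder_counts(a, b))
###Output
_____no_output_____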
###Markdown
Results
###Code
nativeSim = Aer.get_backend('qasm_simulator')
nativeResult = execute(circuit, backend = nativeSim).result()
counts = nativeResult.get_counts()
###Output
_____no_output_____
###Markdown
Histogram Visualization
###Code
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
Since q_0 and q_1 are both 1, we get 1 + 1 = 10 (S = 0 and C = 1). We got the correct result. Yayy! Bloch Sphere Visualization
###Code
nativeSVSim = Aer.get_backend('statevector_simulator')
nativeSVResult = execute(circuit, backend = nativeSVSim).result()
stateVector = nativeSVResult.get_statevector()
plot_bloch_multivector(stateVector)
###Output
_____no_output_____ |
course2/C2W3_Assignment.ipynb | ###Markdown
Week 3 Assignment: Data Pipeline Components for Production MLIn this last graded programming exercise of the course, you will put together all the lessons we've covered so far to handle the first three steps of a production machine learning project - Data ingestion, Data Validation, and Data Transformation.Specifically, you will build the production data pipeline by:* Performing feature selection* Ingesting the dataset* Generating the statistics of the dataset* Creating a schema as per the domain knowledge* Creating schema environments* Visualizing the dataset anomalies* Preprocessing, transforming and engineering your features* Tracking the provenance of your data pipeline using ML MetadataMost of these will look familiar already so try your best to do the exercises by recall or browsing the documentation. If you get stuck however, you can review the lessons in class and the ungraded labs. Let's begin! Table of Contents- [1 - Imports](1)- [2 - Load the Dataset](2)- [3 - Feature Selection](4) - [Exercise 1 - Feature Selection](ex-1)- [4 - Data Pipeline](4) - [4.1 - Setup the Interactive Context](4-1) - [4.2 - Generating Examples](4-2) - [Exercise 2 - ExampleGen](ex-2) - [4.3 - Computing Statistics](4-3) - [Exercise 3 - StatisticsGen](ex-3) - [4.4 - Inferring the Schema](4-4) - [Exercise 4 - SchemaGen](ex-4) - [4.5 - Curating the Schema](4-5) - [Exercise 5 - Curating the Schema](ex-5) - [4.6 - Schema Environments](4-6) - [Exercise 6 - Define the serving environment](ex-6) - [4.7 - Generate new statistics using the updated schema](4-7) - [Exercise 7 - ImporterNode](ex-7) - [Exercise 8 - StatisticsGen with the new schema](ex-8) - [4.8 - Check anomalies](4-8) - [Exercise 9 - ExampleValidator](ex-9) - [4.9 - Feature Engineering](4-9) - [Exercise 10 - preprocessing function](ex-10) - [Exercise 11 - Transform](ex-11)- [5 - ML Metadata](5) - [5.1 - Accessing stored artifacts](5-1) - [5.2 - Tracking artifacts](5-2) - [Exercise 12 - Get parent artifacts](ex-12) 1 - Imports
###Code
import tensorflow as tf
import tfx
# TFX components
from tfx.components import CsvExampleGen
from tfx.components import ExampleValidator
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Transform
from tfx.components import ImporterNode
# TFX libraries
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
# For performing feature selection
from sklearn.feature_selection import SelectKBest, f_classif
# For feature visualization
import matplotlib.pyplot as plt
import seaborn as sns
# Utilities
from tensorflow.python.lib.io import file_io
from tensorflow_metadata.proto.v0 import schema_pb2
from google.protobuf.json_format import MessageToDict
from tfx.proto import example_gen_pb2
from tfx.types import standard_artifacts
import os
import pprint
import tempfile
import pandas as pd
# To ignore warnings from TF
tf.get_logger().setLevel('ERROR')
# For formatting print statements
pp = pprint.PrettyPrinter()
# Display versions of TF and TFX related packages
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
print('TensorFlow Data Validation version: {}'.format(tfdv.__version__))
print('TensorFlow Transform version: {}'.format(tft.__version__))
###Output
_____no_output_____
###Markdown
2 - Load the dataset. You are going to use a variant of the [Cover Type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset. This can be used to train a model that predicts the forest cover type based on cartographic variables. You can read more about the *original* dataset [here](https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info) and we've outlined the data columns below:

| Column Name | Variable Type | Units / Range | Description |
| --------- | ------------ | ----- | ------------------- |
| Elevation | quantitative | meters | Elevation in meters |
| Aspect | quantitative | azimuth | Aspect in degrees azimuth |
| Slope | quantitative | degrees | Slope in degrees |
| Horizontal_Distance_To_Hydrology | quantitative | meters | Horz Dist to nearest surface water features |
| Vertical_Distance_To_Hydrology | quantitative | meters | Vert Dist to nearest surface water features |
| Horizontal_Distance_To_Roadways | quantitative | meters | Horz Dist to nearest roadway |
| Hillshade_9am | quantitative | 0 to 255 index | Hillshade index at 9am, summer solstice |
| Hillshade_Noon | quantitative | 0 to 255 index | Hillshade index at noon, summer solstice |
| Hillshade_3pm | quantitative | 0 to 255 index | Hillshade index at 3pm, summer solstice |
| Horizontal_Distance_To_Fire_Points | quantitative | meters | Horz Dist to nearest wildfire ignition points |
| Wilderness_Area (4 binary columns) | qualitative | 0 (absence) or 1 (presence) | Wilderness area designation |
| Soil_Type (40 binary columns) | qualitative | 0 (absence) or 1 (presence) | Soil Type designation |
| Cover_Type (7 types) | integer | 1 to 7 | Forest Cover Type designation |

As you may notice, the qualitative data has already been one-hot encoded (e.g. `Soil_Type` has 40 binary columns where a `1` indicates presence of a feature). For learning, we will use a modified version of this dataset that shows a more raw format. This will let you practice your skills in handling different data types. You can see the code for preparing the dataset [here](https://github.com/GoogleCloudPlatform/mlops-on-gcp/blob/master/datasets/covertype/wrangle/prepare.ipynb) if you want but it is **not required for this assignment**. The main changes include:

* Converting `Wilderness_Area` and `Soil_Type` to strings.
* Converting the `Cover_Type` range to [0, 6]

Run the next cells to load the **modified** dataset to your workspace.
###Code
# # OPTIONAL: Just in case you want to restart the lab workspace *from scratch*, you
# # can uncomment and run this block to delete previously created files and
# # directories.
# !rm -rf pipeline
# !rm -rf data
# Declare paths to the data
DATA_DIR = './data'
TRAINING_DIR = f'{DATA_DIR}/training'
TRAINING_DATA = f'{TRAINING_DIR}/dataset.csv'
# Create the directory
!mkdir -p {TRAINING_DIR}
# download the dataset
!wget -nc https://storage.googleapis.com/workshop-datasets/covertype/full/dataset.csv -P {TRAINING_DIR}
###Output
_____no_output_____
###Markdown
3 - Feature Selection. For your first task, you will reduce the number of features to feed to the model. As mentioned in Week 2, this will help reduce the complexity of your model and save resources while training. Let's assume that you already have a baseline model that is trained on all features and you want to see if reducing the number of features will generate a better model. You will want to select a subset that has strong predictive value for the label (in this case the `Cover_Type`). Let's do that in the following cells.
###Code
# Load the dataset to a dataframe
df = pd.read_csv(TRAINING_DATA)
# Preview the dataset
df.head()
# Show the data type of each column
df.dtypes
###Output
_____no_output_____
###Markdown
Looking at the data types of each column and the dataset description at the start of this notebook, you can see that most of the features are numeric and only two are not. This needs to be taken into account when selecting the subset of features because numeric and categorical features are scored differently. Let's create a temporary dataframe that only contains the numeric features so we can use it in the next sections.
###Code
# Copy original dataset
df_num = df.copy()
# Categorical columns
cat_columns = ['Wilderness_Area', 'Soil_Type']
# Label column
label_column = ['Cover_Type']
# Drop the categorical and label columns
df_num.drop(cat_columns, axis=1, inplace=True)
df_num.drop(label_column, axis=1, inplace=True)
# Preview the results
df_num.head()
###Output
_____no_output_____
###Markdown
You will use scikit-learn's built-in modules to perform [univariate feature selection](https://scikit-learn.org/stable/modules/feature_selection.htmlunivariate-feature-selection) on our dataset's numeric attributes. First, you need to prepare the input and target features:
###Code
# Set the target values
y = df[label_column].values
# Set the input values
X = df_num.values
###Output
_____no_output_____
###Markdown
Afterwards, you will use [SelectKBest](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.htmlsklearn.feature_selection.SelectKBest) to score each input feature against the target variable. Be mindful of the scoring function to pass in and make sure it is appropriate for the input (numeric) and target (categorical) values. Exercise 1: Feature SelectionComplete the code below to select the top 8 features of the numeric columns.
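Before doing the exercise, you may find it helpful to see the raw scores. Below is a small optional sketch (not part of the graded solution) that scores each numeric column directly with `f_classif`; it assumes the `X`, `y`, and `df_num` variables defined above, and a higher F-score means a stronger association with the label.

```python
# Optional sketch: inspect the ANOVA F-score of each numeric feature.
# Assumes X, y, and df_num from the cells above.
from sklearn.feature_selection import f_classif

f_scores, p_values = f_classif(X, y.ravel())
for name, score in zip(df_num.columns, f_scores):
    print(f'{name}: F-score = {score:,.1f}')
```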
###Code
### START CODE HERE ###
# Create SelectKBest object using f_classif (ANOVA F-statistic) to keep the top 8 features
select_k_best = None
# Fit and transform the input data using select_k_best
X_new = None
# Extract the features which are selected using get_support API
features_mask = None
### END CODE HERE ###
# Print the results
reqd_cols = pd.DataFrame({'Columns': df_num.columns, 'Retain': features_mask})
print(reqd_cols)
###Output
_____no_output_____
###Markdown
**Expected Output:**``` Columns Retain0 Elevation True1 Aspect False2 Slope True3 Horizontal_Distance_To_Hydrology True4 Vertical_Distance_To_Hydrology True5 Horizontal_Distance_To_Roadways True6 Hillshade_9am True7 Hillshade_Noon True8 Hillshade_3pm False9 Horizontal_Distance_To_Fire_Points True``` If you got the expected results, you can now select this subset of features from the original dataframe and save it to a new directory in your workspace.
###Code
# Set the paths to the reduced dataset
TRAINING_DIR_FSELECT = f'{TRAINING_DIR}/fselect'
TRAINING_DATA_FSELECT = f'{TRAINING_DIR_FSELECT}/dataset.csv'
# Create the directory
!mkdir -p {TRAINING_DIR_FSELECT}
# Get the feature names from SelectKBest
feature_names = list(df_num.columns[features_mask])
# Append the categorical and label columns
feature_names = feature_names + cat_columns + label_column
# Keep only the selected subset of columns
df_select = df[feature_names]
# Write CSV to the created directory
df_select.to_csv(TRAINING_DATA_FSELECT, index=False)
# Preview the results
df_select.head()
###Output
_____no_output_____
###Markdown
4 - Data PipelineWith the selected subset of features prepared, you can now start building the data pipeline. This involves ingesting, validating, and transforming your data. You will be using the TFX components you've already encountered in the ungraded labs and you can look them up here in the [official documentation](https://www.tensorflow.org/tfx/api_docs/python/tfx/components). 4.1 - Setup the Interactive ContextAs usual, you will first setup the Interactive Context so you can manually execute the pipeline components from the notebook. You will save the sqlite database in a pre-defined directory in your workspace. Please do not modify this path because you will need this in a later exercise involving ML Metadata.
###Code
# Location of the pipeline metadata store
PIPELINE_DIR = './pipeline'
# Declare the InteractiveContext and use a local sqlite file as the metadata store.
context = InteractiveContext(pipeline_root=PIPELINE_DIR)
###Output
_____no_output_____
###Markdown
4.2 - Generating ExamplesThe first step in the pipeline is to ingest the data. Using [ExampleGen](https://www.tensorflow.org/tfx/guide/examplegen), you can convert raw data to TFRecords for faster computation in the later stages of the pipeline. Exercise 2: ExampleGenUse `ExampleGen` to ingest the dataset we loaded earlier. Some things to note:* The input is in CSV format so you will need to use the appropriate type of `ExampleGen` to handle it. * This function accepts a *directory* path to the training data and not the CSV file path itself. This will take a couple of minutes to run.
###Code
# # NOTE: Uncomment and run this if you get an error saying there are different
# # headers in the dataset. This is usually because of the notebook checkpoints saved in
# # that folder.
# !rm -rf {TRAINING_DIR}/.ipynb_checkpoints
# !rm -rf {TRAINING_DIR_FSELECT}/.ipynb_checkpoints
# !rm -rf {SERVING_DIR}/.ipynb_checkpoints
### START CODE HERE
# Instantiate ExampleGen with the input CSV dataset
example_gen = None
# Run the component using the InteractiveContext instance
None
### END CODE HERE
###Output
_____no_output_____
###Markdown
4.3 - Computing StatisticsNext, you will compute the statistics of your data. This will allow you to observe and analyze characteristics of your data through visualizations provided by the integrated [FACETS](https://pair-code.github.io/facets/) library. Exercise 3: StatisticsGenUse [StatisticsGen](https://www.tensorflow.org/tfx/guide/statsgen) to compute the statistics of the output examples of `ExampleGen`.
###Code
### START CODE HERE
# Instantiate StatisticsGen with the ExampleGen ingested dataset
statistics_gen = None
# Run the component
None
### END CODE HERE
# Display the results
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
Once you've loaded the display, you may notice that the `zeros` column for `Cover_Type` is highlighted in red. The visualization is letting us know that this might be a potential issue. In our case though, we know that the `Cover_Type` has a range of [0, 6] so having zeros in this column is something we expect. 4.4 - Inferring the SchemaYou will need to create a schema to validate incoming datasets during training and serving. Fortunately, TFX allows you to infer a first draft of this schema with the [SchemaGen](https://www.tensorflow.org/tfx/guide/schemagen) component. Exercise 4: SchemaGenUse `SchemaGen` to infer a schema based on the computed statistics of `StatisticsGen`.
###Code
### START CODE HERE
# Instantiate SchemaGen with the output statistics from the StatisticsGen
schema_gen = None
# Run the component
None
### END CODE HERE
# Visualize the output
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
4.5 - Curating the schemaYou can see that the inferred schema is able to capture the data types correctly and also able to show the expected values for the qualitative (i.e. string) data. You can still fine-tune this however. For instance, we have features where we expect a certain range:* `Hillshade_9am`: 0 to 255* `Hillshade_Noon`: 0 to 255* `Slope`: 0 to 90* `Cover_Type`: 0 to 6You want to update your schema to take note of these so the pipeline can detect if invalid values are being fed to the model. Exercise 5: Curating the SchemaUse [TFDV](https://www.tensorflow.org/tfx/data_validation/get_started) to update the inferred schema to restrict a range of values to the features mentioned above.Things to note:* You can use [tfdv.set_domain()](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) to define acceptable values for a particular feature.* These should still be INT types after making your changes.* Declare `Cover_Type` as a *categorical* variable. Unlike the other four features, the integers 0 to 6 here correspond to a designated label and not a quantitative measure. You can look at the available flags for `set_domain()` in the official doc to know how to set this.
###Code
try:
# Get the schema uri
schema_uri = schema_gen.outputs['schema']._artifacts[0].uri
# for grading since context.run() does not work outside the notebook
except IndexError:
print("context.run() was no-op")
schema_path = './pipeline/SchemaGen/schema'
dir_id = os.listdir(schema_path)[0]
schema_uri = f'{schema_path}/{dir_id}'
# Get the schema pbtxt file from the SchemaGen output
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))
### START CODE HERE ###
# Set the two `Hillshade` features to have a range of 0 to 255
tfdv.set_domain(None, None, schema_pb2.IntDomain(name='Hillshade_9am', min=None, max=None))
tfdv.set_domain(None, None, schema_pb2.IntDomain(name='Hillshade_Noon', min=None, max=None))
# Set the `Slope` feature to have a range of 0 to 90
tfdv.set_domain(None, None, schema_pb2.IntDomain(name='Slope', min=None, max=None))
# Set `Cover_Type` to categorical having minimum value of 0 and maximum value of 6
tfdv.set_domain(None, None, schema_pb2.IntDomain(name='Cover_Type', min=None, max=None, is_categorical=None))
### END CODE HERE ###
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
You should now see the ranges you declared in the `Domain` column of the schema. 4.6 - Schema EnvironmentsIn supervised learning, we train the model to make predictions by feeding a set of features with its corresponding label. Thus, our training dataset will have both the input features and label, and the schema is configured to detect these. However, after training, when you serve the model for inference, the incoming data will no longer have the label. This will present problems when validating the data using the current version of the schema. Let's demonstrate that in the following cells. You will simulate a serving dataset by getting a subset of the training set and dropping the label column (i.e. `Cover_Type`). Afterwards, you will validate this serving dataset using the schema you curated earlier.
###Code
# Declare paths to the serving data
SERVING_DIR = f'{DATA_DIR}/serving'
SERVING_DATA = f'{SERVING_DIR}/serving_dataset.csv'
# Create the directory
!mkdir -p {SERVING_DIR}
# Read a subset of the training dataset
serving_data = pd.read_csv(TRAINING_DATA, nrows=100)
# Drop the `Cover_Type` column
serving_data.drop(columns='Cover_Type', inplace=True)
# Save the modified dataset
serving_data.to_csv(SERVING_DATA, index=False)
# Delete unneeded variable from memory
del serving_data
# Declare StatsOptions to use the curated schema
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
# Compute the statistics of the serving dataset
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=stats_options)
# Detect anomalies in the serving dataset
anomalies = tfdv.validate_statistics(serving_stats, schema=schema)
# Display the anomalies detected
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
As expected, the missing column is flagged. To fix this, you need to configure the schema to detect when it's being used for training or for inference / serving. You can do this by setting [schema environments](https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basicschema_environments). Exercise 6: Define the serving environmentComplete the code below to ignore the `Cover_Type` feature when validating in the *SERVING* environment.
###Code
schema.default_environment.append('TRAINING')
### START CODE HERE ###
# Hint: Create another default schema environment with name SERVING (pass in a string)
schema.default_environment.append(None)
# Remove Cover_Type feature from SERVING using TFDV
# Hint: Pass in the strings with the name of the feature and environment
tfdv.get_feature(schema, None).not_in_environment.append(None)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
If done correctly, running the cell below should show *No Anomalies*.
###Code
# Validate the serving dataset statistics in the `SERVING` environment
anomalies = tfdv.validate_statistics(serving_stats, schema=schema, environment='SERVING')
# Display the anomalies detected
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
We can now save this curated schema in a local directory so we can import it to our TFX pipeline.
###Code
# Declare the path to the updated schema directory
UPDATED_SCHEMA_DIR = f'{PIPELINE_DIR}/updated_schema'
# Create the said directory
!mkdir -p {UPDATED_SCHEMA_DIR}
# Declare the path to the schema file
schema_file = os.path.join(UPDATED_SCHEMA_DIR, 'schema.pbtxt')
# Save the curated schema to the said file
tfdv.write_schema_text(schema, schema_file)
###Output
_____no_output_____
###Markdown
As a sanity check, let's display the schema we just saved and verify that it contains the changes we introduced. It should still show the ranges in the `Domain` column and there should be two environments available.
###Code
# Load the schema from the directory we just created
new_schema = tfdv.load_schema_text(schema_file)
# Display the schema. Check that the Domain column still contains the ranges.
tfdv.display_schema(schema=new_schema)
# The environment list should show `TRAINING` and `SERVING`.
new_schema.default_environment
###Output
_____no_output_____
###Markdown
4.7 - Generate new statistics using the updated schemaYou will now compute the statistics using the schema you just curated. Remember though that TFX components interact with each other by getting artifact information from the metadata store. So you first have to import the curated schema file into ML Metadata. You will do that by using an [ImporterNode](https://www.tensorflow.org/tfx/guide/statsgenusing_the_statsgen_component_with_a_schema) to create an artifact representing the curated schema. Exercise 7: ImporterNodeComplete the code below to create a `Schema` artifact that points to the curated schema directory. Pass in an `instance_name` as well and name it `import_user_schema`.
###Code
### START CODE HERE ###
# Use an ImporterNode to put the curated schema to ML Metadata
user_schema_importer = None
# Run the component
context.run(None, enable_cache=False)
### END CODE HERE ###
context.show(user_schema_importer.outputs['result'])
###Output
_____no_output_____
###Markdown
With the artifact successfully created, you can now use `StatisticsGen` and pass in a `schema` parameter to use the curated schema. Exercise 8: Statistics with the new schemaUse `StatisticsGen` to compute the statistics with the schema you updated in the previous section.
###Code
### START CODE HERE ###
# Use StatisticsGen to compute the statistics using the curated schema
statistics_gen_updated = None
# Run the component
None
### END CODE HERE ###
context.show(statistics_gen_updated.outputs['statistics'])
###Output
_____no_output_____
###Markdown
The chart will look mostly the same as in the previous runs, but you can see that `Cover_Type` is now listed under the categorical features. That shows that `StatisticsGen` is indeed using the updated schema. 4.8 - Check anomaliesYou will now check if the dataset has any anomalies with respect to the schema. You can do that easily with the [ExampleValidator](https://www.tensorflow.org/tfx/guide/exampleval) component. Exercise 9: ExampleValidatorCheck if there are any anomalies using `ExampleValidator`. You will need to pass in the updated statistics and schema from the previous sections.
###Code
### START CODE HERE ###
example_validator = None
# Run the component.
None
### END CODE HERE ###
# Visualize the results
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
###Markdown
4.9 - Feature EngineeringYou will now proceed to transform your features into a form suitable for training a model. This can include several methods such as scaling and converting strings to vocabulary indices. It is important for these transformations to be consistent across your training data, and also for the serving data when the model is deployed for inference. TFX ensures this by generating a graph that will process incoming data both during training and inference.Let's first declare the constants and utility function you will use for the exercise.
###Code
# Set the constants module filename
_cover_constants_module_file = 'cover_constants.py'
%%writefile {_cover_constants_module_file}
SCALE_MINMAX_FEATURE_KEYS = [
"Horizontal_Distance_To_Hydrology",
"Vertical_Distance_To_Hydrology",
]
SCALE_01_FEATURE_KEYS = [
"Hillshade_9am",
"Hillshade_Noon",
"Horizontal_Distance_To_Fire_Points",
]
SCALE_Z_FEATURE_KEYS = [
"Elevation",
"Slope",
"Horizontal_Distance_To_Roadways",
]
VOCAB_FEATURE_KEYS = ["Wilderness_Area"]
HASH_STRING_FEATURE_KEYS = ["Soil_Type"]
LABEL_KEY = "Cover_Type"
# Utility function for renaming the feature
def transformed_name(key):
return key + '_xf'
###Output
_____no_output_____
###Markdown
Next you will define the `preprocessing_fn` to apply transformations to the features. Exercise 10: Preprocessing functionComplete the module to transform your features. Refer to the code comments to get hints on what operations to perform.Here are some links to the docs of the functions you will need to complete this function:- [`tft.scale_by_min_max`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_by_min_max)- [`tft.scale_to_0_1`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_to_0_1)- [`tft.scale_to_z_score`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_to_z_score)- [`tft.compute_and_apply_vocabulary`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/compute_and_apply_vocabulary)- [`tft.hash_strings`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/hash_strings)
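As a rough reference (a summary of the intended behavior rather than the exact library implementation), the scaling helpers compute the following per feature, using statistics gathered over the full training pass:

$$\text{scale\_to\_0\_1}(x) = \frac{x - \min(x)}{\max(x) - \min(x)} \qquad\qquad \text{scale\_to\_z\_score}(x) = \frac{x - \mu_x}{\sigma_x}$$

`tft.scale_by_min_max` applies the same min-max mapping but lets you choose the output range (by default it is [0, 1]).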
###Code
# Set the transform module filename
_cover_transform_module_file = 'cover_transform.py'
%%writefile {_cover_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import cover_constants
_SCALE_MINMAX_FEATURE_KEYS = cover_constants.SCALE_MINMAX_FEATURE_KEYS
_SCALE_01_FEATURE_KEYS = cover_constants.SCALE_01_FEATURE_KEYS
_SCALE_Z_FEATURE_KEYS = cover_constants.SCALE_Z_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = cover_constants.VOCAB_FEATURE_KEYS
_HASH_STRING_FEATURE_KEYS = cover_constants.HASH_STRING_FEATURE_KEYS
_LABEL_KEY = cover_constants.LABEL_KEY
_transformed_name = cover_constants.transformed_name
def preprocessing_fn(inputs):
features_dict = {}
### START CODE HERE ###
for feature in _SCALE_MINMAX_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling of min_max function
# Hint: Use tft.scale_by_min_max by passing in the respective column
features_dict[_transformed_name(feature)] = None
for feature in _SCALE_01_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling of 0 to 1 function
# Hint: tft.scale_to_0_1
features_dict[_transformed_name(feature)] = None
for feature in _SCALE_Z_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling to z score
# Hint: tft.scale_to_z_score
features_dict[_transformed_name(feature)] = None
for feature in _VOCAB_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using vocabulary available in column
# Hint: Use tft.compute_and_apply_vocabulary
features_dict[_transformed_name(feature)] = None
for feature in _HASH_STRING_FEATURE_KEYS:
data_col = inputs[feature]
# Transform by hashing strings into buckets
# Hint: Use tft.hash_strings with the param hash_buckets set to 10
features_dict[_transformed_name(feature)] = None
### END CODE HERE ###
# No change in the label
features_dict[_LABEL_KEY] = inputs[_LABEL_KEY]
return features_dict
###Output
_____no_output_____
###Markdown
Exercise 11: TransformUse the [TFX Transform component](https://www.tensorflow.org/tfx/api_docs/python/tfx/components/Transform) to perform the transformations and generate the transformation graph. You will need to pass in the dataset examples, *curated* schema, and the module that contains the preprocessing function.
###Code
### START CODE HERE ###
# Instantiate the Transform component
transform = None
### END CODE HERE ###
# Run the component
context.run(transform, enable_cache=False)
###Output
_____no_output_____
###Markdown
Let's inspect a few examples of the transformed dataset to see if the transformations are done correctly.
###Code
try:
transform_uri = transform.outputs['transformed_examples'].get()[0].uri
# for grading since context.run() does not work outside the notebook
except IndexError:
print("context.run() was no-op")
examples_path = './pipeline/Transform/transformed_examples'
dir_id = os.listdir(examples_path)[0]
transform_uri = f'{examples_path}/{dir_id}'
# Get the URI of the output artifact representing the transformed examples
train_uri = os.path.join(transform_uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
transformed_dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# import helper function to get examples from the dataset
from util import get_records
# Get 3 records from the dataset
sample_records_xf = get_records(transformed_dataset, 3)
# Print the output
pp.pprint(sample_records_xf)
###Output
_____no_output_____
###Markdown
5 - ML MetadataTFX uses [ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd) under the hood to keep records of artifacts that each component uses. This makes it easier to track how the pipeline is run so you can troubleshoot if needed or want to reproduce results.In this final section of the assignment, you will demonstrate going through this metadata store to retrieve related artifacts. This skill is useful for when you want to recall which inputs are fed to a particular stage of the pipeline. For example, you can know where to locate the schema used to perform feature transformation, or you can determine which set of examples were used to train a model. You will start by importing the relevant modules and setting up the connection to the metadata store. We have also provided some helper functions for displaying artifact information and you can review its code in the external `util.py` module in your lab workspace.
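The next cell obtains the connection config from the `InteractiveContext`. As an aside, the same store could also be opened directly from the sqlite file, for example from a separate process. A minimal sketch (the file path is assumed from the `InteractiveContext` setup earlier in this notebook):

```python
# Sketch only: open the metadata store straight from the sqlite file.
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2

config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = './pipeline/metadata.sqlite'  # assumed path
config.sqlite.connection_mode = metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE
standalone_store = mlmd.MetadataStore(config)
```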
###Code
# Import mlmd and utilities
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
from util import display_types, display_artifacts, display_properties
# Get the connection config to connect to the metadata store
connection_config = context.metadata_connection_config
# Instantiate a MetadataStore instance with the connection config
store = mlmd.MetadataStore(connection_config)
# Declare the base directory where All TFX artifacts are stored
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
###Output
_____no_output_____
###Markdown
5.1 - Accessing stored artifactsWith the connection setup, you can now interact with the metadata store. For instance, you can retrieve all artifact types stored with the `get_artifact_types()` function. For reference, the API is documented [here](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStore).
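The store exposes similar listing methods for artifacts, executions, and contexts, so you can get a quick bird's-eye view of everything recorded so far. A short sketch (assuming the `store` instance created above):

```python
# Sketch: count everything recorded in the metadata store so far.
print('Artifact types:', len(store.get_artifact_types()))
print('Artifacts     :', len(store.get_artifacts()))
print('Executions    :', len(store.get_executions()))
print('Contexts      :', len(store.get_contexts()))
```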
###Code
# Get the artifact types
types = store.get_artifact_types()
# Display the results
display_types(types)
###Output
_____no_output_____
###Markdown
You can also get a list of artifacts for a particular type to see if there are variations used in the pipeline. For example, you curated a schema in an earlier part of the assignment so this should appear in the records. Running the cell below should show at least two rows: one for the inferred schema, and another for the updated schema. If you ran this notebook before, then you might see more rows because of the different schema artifacts saved under the `./SchemaGen/schema` directory.
###Code
# Retrieve the list of Schema artifacts
schema_list = store.get_artifacts_by_type('Schema')
# Display artifact properties from the results
display_artifacts(store, schema_list, base_dir)
###Output
_____no_output_____
###Markdown
Moreover, you can also get the properties of a particular artifact. TFX declares some properties automatically for each of its components. You will most likely see `name`, `state` and `producer_component` for each artifact type. Additional properties are added where appropriate. For example, a `split_names` property is added in `ExampleStatistics` artifacts to indicate which splits the statistics are generated for.
###Code
# Get the latest ExampleStatistics artifact
statistics_artifact = store.get_artifacts_by_type('ExampleStatistics')[-1]
# Display the properties of the retrieved artifact
display_properties(store, statistics_artifact)
###Output
_____no_output_____
###Markdown
5.2 - Tracking artifactsFor this final exercise, you will build a function to return the parent artifacts of a given one. For example, this should be able to list the artifacts that were used to generate a particular `TransformGraph` instance. Exercise 12: Get parent artifactsComplete the code below to track the inputs of a particular artifact.Tips:* You may find [get_events_by_artifact_ids()](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStoreget_events_by_artifact_ids) and [get_events_by_execution_ids()](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStoreget_executions_by_id) useful here. * Some of the methods of the MetadataStore class (such as the two given above) only accept iterables so remember to convert to a list (or set) if you only have an int (e.g. pass `[x]` instead of `x`).
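Before filling in the function, it can help to see what an event looks like. A small exploratory sketch (it assumes the `store` instance from above; the artifact chosen is arbitrary):

```python
# Sketch: list the events attached to one artifact to see how INPUT/OUTPUT
# events link artifacts to the executions that consumed or produced them.
sample_artifact = store.get_artifacts_by_type('Schema')[0]
for event in store.get_events_by_artifact_ids([sample_artifact.id]):
    print('execution_id:', event.execution_id,
          '| event type:', metadata_store_pb2.Event.Type.Name(event.type))
```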
###Code
def get_parent_artifacts(store, artifact):
### START CODE HERE ###
# Get the artifact id of the input artifact
artifact_id = None
# Get events associated with the artifact id
artifact_id_events = None
# From the `artifact_id_events`, get the execution ids of OUTPUT events.
# Cast to a set to remove duplicates if any.
execution_id = set(
None
for event in None
if None == None
)
# Get the events associated with the execution_id
execution_id_events = None
# From execution_id_events, get the artifact ids of INPUT events.
# Cast to a set to remove duplicates if any.
parent_artifact_ids = set(
event.artifact_id
for event in execution_id_events
if event.type == metadata_store_pb2.Event.INPUT
)
# Get the list of artifacts associated with the parent_artifact_ids
parent_artifact_list = None
### END CODE HERE ###
return parent_artifact_list
# Get an artifact instance from the metadata store
artifact_instance = store.get_artifacts_by_type('TransformGraph')[0]
# Retrieve the parent artifacts of the instance
parent_artifacts = get_parent_artifacts(store, artifact_instance)
# Display the results
display_artifacts(store, parent_artifacts, base_dir)
###Output
_____no_output_____
###Markdown
Week 3 Assignment: Data Pipeline Components for Production MLIn this last graded programming exercise of the course, you will put together all the lessons we've covered so far to handle the first three steps of a production machine learning project - Data ingestion, Data Validation, and Data Transformation.Specifically, you will build the production data pipeline by:* Performing feature selection* Ingesting the dataset* Generating the statistics of the dataset* Creating a schema as per the domain knowledge* Creating schema environments* Visualizing the dataset anomalies* Preprocessing, transforming and engineering your features* Tracking the provenance of your data pipeline using ML MetadataMost of these will look familiar already so try your best to do the exercises by recall or browsing the documentation. If you get stuck however, you can review the lessons in class and the ungraded labs. Let's begin! Table of Contents- [1 - Imports](1)- [2 - Load the Dataset](2)- [3 - Feature Selection](4) - [Exercise 1 - Feature Selection](ex-1)- [4 - Data Pipeline](4) - [4.1 - Setup the Interactive Context](4-1) - [4.2 - Generating Examples](4-2) - [Exercise 2 - ExampleGen](ex-2) - [4.3 - Computing Statistics](4-3) - [Exercise 3 - StatisticsGen](ex-3) - [4.4 - Inferring the Schema](4-4) - [Exercise 4 - SchemaGen](ex-4) - [4.5 - Curating the Schema](4-5) - [Exercise 5 - Curating the Schema](ex-5) - [4.6 - Schema Environments](4-6) - [Exercise 6 - Define the serving environment](ex-6) - [4.7 - Generate new statistics using the updated schema](4-7) - [Exercise 7 - ImporterNode](ex-7) - [Exercise 8 - StatisticsGen with the new schema](ex-8) - [4.8 - Check anomalies](4-8) - [Exercise 9 - ExampleValidator](ex-9) - [4.9 - Feature Engineering](4-9) - [Exercise 10 - preprocessing function](ex-10) - [Exercise 11 - Transform](ex-11)- [5 - ML Metadata](5) - [5.1 - Accessing stored artifacts](5-1) - [5.2 - Tracking artifacts](5-2) - [Exercise 12 - Get parent artifacts](ex-12) 1 - Imports
###Code
import tensorflow as tf
import tfx
# TFX components
from tfx.components import CsvExampleGen
from tfx.components import ExampleValidator
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Transform
from tfx.components import ImporterNode
# TFX libraries
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
# For performing feature selection
from sklearn.feature_selection import SelectKBest, f_classif
# For feature visualization
import matplotlib.pyplot as plt
import seaborn as sns
# Utilities
from tensorflow.python.lib.io import file_io
from tensorflow_metadata.proto.v0 import schema_pb2
from google.protobuf.json_format import MessageToDict
from tfx.proto import example_gen_pb2
from tfx.types import standard_artifacts
import os
import pprint
import tempfile
import pandas as pd
# To ignore warnings from TF
tf.get_logger().setLevel('ERROR')
# For formatting print statements
pp = pprint.PrettyPrinter()
# Display versions of TF and TFX related packages
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
print('TensorFlow Data Validation version: {}'.format(tfdv.__version__))
print('TensorFlow Transform version: {}'.format(tft.__version__))
###Output
TensorFlow version: 2.3.1
TFX version: 0.24.0
TensorFlow Data Validation version: 0.24.1
TensorFlow Transform version: 0.24.1
###Markdown
2 - Load the datasetYou are going to use a variant of the [Cover Type](https://archive.ics.uci.edu/ml/datasets/covertype) dataset. This can be used to train a model that predicts the forest cover type based on cartographic variables. You can read more about the *original* dataset [here](https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info) and we've outlined the data columns below:| Column Name | Variable Type | Units / Range | Description || --------- | ------------ | ----- | ------------------- || Elevation | quantitative | meters | Elevation in meters || Aspect | quantitative | azimuth | Aspect in degrees azimuth || Slope | quantitative | degrees | Slope in degrees || Horizontal_Distance_To_Hydrology | quantitative | meters | Horz Dist to nearest surface water features || Vertical_Distance_To_Hydrology | quantitative | meters | Vert Dist to nearest surface water features || Horizontal_Distance_To_Roadways | quantitative | meters | Horz Dist to nearest roadway || Hillshade_9am | quantitative | 0 to 255 index | Hillshade index at 9am, summer solstice || Hillshade_Noon | quantitative | 0 to 255 index | Hillshade index at noon, summer solstice || Hillshade_3pm | quantitative | 0 to 255 index | Hillshade index at 3pm, summer solstice || Horizontal_Distance_To_Fire_Points | quantitative | meters | Horz Dist to nearest wildfire ignition points || Wilderness_Area (4 binary columns) | qualitative | 0 (absence) or 1 (presence) | Wilderness area designation || Soil_Type (40 binary columns) | qualitative | 0 (absence) or 1 (presence) | Soil Type designation || Cover_Type (7 types) | integer | 1 to 7 | Forest Cover Type designation |As you may notice, the qualitative data has already been one-hot encoded (e.g. `Soil_Type` has 40 binary columns where a `1` indicates presence of a feature). For learning, we will use a modified version of this dataset that shows a more raw format. This will let you practice your skills in handling different data types. You can see the code for preparing the dataset [here](https://github.com/GoogleCloudPlatform/mlops-on-gcp/blob/master/datasets/covertype/wrangle/prepare.ipynb) if you want but it is **not required for this assignment**. The main changes include:* Converting `Wilderness_Area` and `Soil_Type` to strings.* Converting the `Cover_Type` range to [0, 6]Run the next cells to load the **modified** dataset to your workspace.
###Code
# # OPTIONAL: Just in case you want to restart the lab workspace *from scratch*, you
# # can uncomment and run this block to delete previously created files and
# # directories.
# !rm -rf pipeline
# !rm -rf data
# Declare paths to the data
DATA_DIR = './data'
TRAINING_DIR = f'{DATA_DIR}/training'
TRAINING_DATA = f'{TRAINING_DIR}/dataset.csv'
# Create the directory
!mkdir -p {TRAINING_DIR}
# download the dataset
!wget -nc https://storage.googleapis.com/workshop-datasets/covertype/full/dataset.csv -P {TRAINING_DIR}
###Output
File ‘./data/training/dataset.csv’ already there; not retrieving.
###Markdown
3 - Feature SelectionFor your first task, you will reduce the number of features to feed to the model. As mentioned in Week 2, this will help reduce the complexity of your model and save resources while training. Let's assume that you already have a baseline model that is trained on all features and you want to see if reducing the number of features will generate a better model. You will want to select a subset that has great predictive value to the label (in this case the `Cover_Type`). Let's do that in the following cells.
###Code
# Load the dataset to a dataframe
df = pd.read_csv(TRAINING_DATA)
# Preview the dataset
df.head()
# Show the data type of each column
df.dtypes
###Output
_____no_output_____
###Markdown
Looking at the data types of each column and the dataset description at the start of this notebook, you can see that most of the features are numeric and only two are not. This needs to be taken into account when selecting the subset of features because numeric and categorical features are scored differently. Let's create a temporary dataframe that only contains the numeric features so we can use it in the next sections.
###Code
# Copy original dataset
df_num = df.copy()
# Categorical columns
cat_columns = ['Wilderness_Area', 'Soil_Type']
# Label column
label_column = ['Cover_Type']
# Drop the categorical and label columns
df_num.drop(cat_columns, axis=1, inplace=True)
df_num.drop(label_column, axis=1, inplace=True)
# Preview the results
df_num.head()
###Output
_____no_output_____
###Markdown
You will use scikit-learn's built-in modules to perform [univariate feature selection](https://scikit-learn.org/stable/modules/feature_selection.htmlunivariate-feature-selection) on our dataset's numeric attributes. First, you need to prepare the input and target features:
###Code
# Set the target values
y = df[label_column].values
# Set the input values
X = df_num.values
###Output
_____no_output_____
###Markdown
Afterwards, you will use [SelectKBest](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.htmlsklearn.feature_selection.SelectKBest) to score each input feature against the target variable. Be mindful of the scoring function to pass in and make sure it is appropriate for the input (numeric) and target (categorical) values. Exercise 1: Feature SelectionComplete the code below to select the top 8 features of the numeric columns.
###Code
### START CODE HERE ###
# Create SelectKBest object using f_classif (ANOVA F-statistic) to keep the top 8 features
select_k_best = SelectKBest(f_classif, k=8)
# Fit and transform the input data using select_k_best
X_new = select_k_best.fit_transform(X, y)
# Extract the features which are selected using get_support API
features_mask = select_k_best.get_support()
### END CODE HERE ###
# Print the results
reqd_cols = pd.DataFrame({'Columns': df_num.columns, 'Retain': features_mask})
print(reqd_cols)
###Output
Columns Retain
0 Elevation True
1 Aspect False
2 Slope True
3 Horizontal_Distance_To_Hydrology True
4 Vertical_Distance_To_Hydrology True
5 Horizontal_Distance_To_Roadways True
6 Hillshade_9am True
7 Hillshade_Noon True
8 Hillshade_3pm False
9 Horizontal_Distance_To_Fire_Points True
###Markdown
**Expected Output:**``` Columns Retain0 Elevation True1 Aspect False2 Slope True3 Horizontal_Distance_To_Hydrology True4 Vertical_Distance_To_Hydrology True5 Horizontal_Distance_To_Roadways True6 Hillshade_9am True7 Hillshade_Noon True8 Hillshade_3pm False9 Horizontal_Distance_To_Fire_Points True``` If you got the expected results, you can now select this subset of features from the original dataframe and save it to a new directory in your workspace.
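If you would like to visualize how strongly each numeric feature scored, here is a small optional sketch that uses the plotting libraries imported earlier; it assumes the fitted `select_k_best` object from the exercise above.

```python
# Optional sketch: bar plot of the ANOVA F-scores computed by SelectKBest.
scores = pd.Series(select_k_best.scores_, index=df_num.columns).sort_values()
plt.figure(figsize=(8, 4))
sns.barplot(x=scores.values, y=scores.index, color='steelblue')
plt.xlabel('ANOVA F-score')
plt.tight_layout()
plt.show()
```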
###Code
# Set the paths to the reduced dataset
TRAINING_DIR_FSELECT = f'{TRAINING_DIR}/fselect'
TRAINING_DATA_FSELECT = f'{TRAINING_DIR_FSELECT}/dataset.csv'
# Create the directory
!mkdir -p {TRAINING_DIR_FSELECT}
# Get the feature names from SelectKBest
feature_names = list(df_num.columns[features_mask])
# Append the categorical and label columns
feature_names = feature_names + cat_columns + label_column
# Keep only the selected subset of columns
df_select = df[feature_names]
# Write CSV to the created directory
df_select.to_csv(TRAINING_DATA_FSELECT, index=False)
# Preview the results
df_select.head()
###Output
_____no_output_____
###Markdown
4 - Data PipelineWith the selected subset of features prepared, you can now start building the data pipeline. This involves ingesting, validating, and transforming your data. You will be using the TFX components you've already encountered in the ungraded labs and you can look them up here in the [official documentation](https://www.tensorflow.org/tfx/api_docs/python/tfx/components). 4.1 - Setup the Interactive ContextAs usual, you will first setup the Interactive Context so you can manually execute the pipeline components from the notebook. You will save the sqlite database in a pre-defined directory in your workspace. Please do not modify this path because you will need this in a later exercise involving ML Metadata.
###Code
# Location of the pipeline metadata store
PIPELINE_DIR = './pipeline'
# Declare the InteractiveContext and use a local sqlite file as the metadata store.
context = InteractiveContext(pipeline_root=PIPELINE_DIR)
###Output
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at ./pipeline/metadata.sqlite.
###Markdown
4.2 - Generating ExamplesThe first step in the pipeline is to ingest the data. Using [ExampleGen](https://www.tensorflow.org/tfx/guide/examplegen), you can convert raw data to TFRecords for faster computation in the later stages of the pipeline. Exercise 2: ExampleGenUse `ExampleGen` to ingest the dataset we loaded earlier. Some things to note:* The input is in CSV format so you will need to use the appropriate type of `ExampleGen` to handle it. * This function accepts a *directory* path to the training data and not the CSV file path itself. This will take a couple of minutes to run.
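Once the component below has finished, you could optionally peek at the emitted artifact to confirm which splits were produced and where the TFRecords were written. A quick sanity-check sketch (run it after the cell below):

```python
# Optional sketch: inspect the Examples artifact emitted by ExampleGen.
examples_artifact = example_gen.outputs['examples'].get()[0]
print('splits:', examples_artifact.split_names)
print('uri   :', examples_artifact.uri)
```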
###Code
# # NOTE: Uncomment and run this if you get an error saying there are different
# # headers in the dataset. This is usually because of the notebook checkpoints saved in
# # that folder.
# !rm -rf {TRAINING_DIR}/.ipynb_checkpoints
# !rm -rf {TRAINING_DIR_FSELECT}/.ipynb_checkpoints
# !rm -rf {SERVING_DIR}/.ipynb_checkpoints
### START CODE HERE
# Instantiate ExampleGen with the input CSV dataset
example_gen = CsvExampleGen(input_base=TRAINING_DIR_FSELECT)
# Run the component using the InteractiveContext instance
context.run(example_gen)
### END CODE HERE
###Output
_____no_output_____
###Markdown
4.3 - Computing StatisticsNext, you will compute the statistics of your data. This will allow you to observe and analyze characteristics of your data through visualizations provided by the integrated [FACETS](https://pair-code.github.io/facets/) library. Exercise 3: StatisticsGenUse [StatisticsGen](https://www.tensorflow.org/tfx/guide/statsgen) to compute the statistics of the output examples of `ExampleGen`.
###Code
### START CODE HERE
# Instantiate StatisticsGen with the ExampleGen ingested dataset
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
# Run the component
context.run(statistics_gen)
### END CODE HERE
# Display the results
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
Once you've loaded the display, you may notice that the `zeros` column for `Cover_Type` is highlighted in red. The visualization is letting us know that this might be a potential issue. In our case though, we know that the `Cover_Type` has a range of [0, 6] so having zeros in this column is something we expect. 4.4 - Inferring the SchemaYou will need to create a schema to validate incoming datasets during training and serving. Fortunately, TFX allows you to infer a first draft of this schema with the [SchemaGen](https://www.tensorflow.org/tfx/guide/schemagen) component. Exercise 4: SchemaGenUse `SchemaGen` to infer a schema based on the computed statistics of `StatisticsGen`.
###Code
### START CODE HERE
# Instantiate SchemaGen with the output statistics from the StatisticsGen
schema_gen = SchemaGen(statistics=statistics_gen.outputs['statistics'],)
# Run the component
context.run(schema_gen)
### END CODE HERE
# Visualize the output
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
4.5 - Curating the schemaYou can see that the inferred schema is able to capture the data types correctly and also able to show the expected values for the qualitative (i.e. string) data. You can still fine-tune this however. For instance, we have features where we expect a certain range:* `Hillshade_9am`: 0 to 255* `Hillshade_Noon`: 0 to 255* `Slope`: 0 to 90* `Cover_Type`: 0 to 6You want to update your schema to take note of these so the pipeline can detect if invalid values are being fed to the model. Exercise 5: Curating the SchemaUse [TFDV](https://www.tensorflow.org/tfx/data_validation/get_started) to update the inferred schema to restrict a range of values to the features mentioned above.Things to note:* You can use [tfdv.set_domain()](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) to define acceptable values for a particular feature.* These should still be INT types after making your changes.* Declare `Cover_Type` as a *categorical* variable. Unlike the other four features, the integers 0 to 6 here correspond to a designated label and not a quantitative measure. You can look at the available flags for `set_domain()` in the official doc to know how to set this.
###Code
try:
# Get the schema uri
schema_uri = schema_gen.outputs['schema']._artifacts[0].uri
# for grading since context.run() does not work outside the notebook
except IndexError:
print("context.run() was no-op")
schema_path = './pipeline/SchemaGen/schema'
dir_id = os.listdir(schema_path)[0]
schema_uri = f'{schema_path}/{dir_id}'
# Get the schema pbtxt file from the SchemaGen output
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))
### START CODE HERE ###
# Set the two `Hillshade` features to have a range of 0 to 255
tfdv.set_domain(schema, 'Hillshade_9am', schema_pb2.IntDomain(name='Hillshade_9am', min=0, max=255))
tfdv.set_domain(schema, 'Hillshade_Noon', schema_pb2.IntDomain(name='Hillshade_Noon', min=0, max=255))
# Set the `Slope` feature to have a range of 0 to 90
tfdv.set_domain(schema, 'Slope', schema_pb2.IntDomain(name='Slope', min=0, max=90))
# Set `Cover_Type` to categorical having minimum value of 0 and maximum value of 6
tfdv.set_domain(schema, 'Cover_Type', schema_pb2.IntDomain(name='Cover_Type', min=0, max=6, is_categorical=True))
### END CODE HERE ###
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
You should now see the ranges you declared in the `Domain` column of the schema. 4.6 - Schema EnvironmentsIn supervised learning, we train the model to make predictions by feeding a set of features with its corresponding label. Thus, our training dataset will have both the input features and label, and the schema is configured to detect these. However, after training, when you serve the model for inference, the incoming data will no longer have the label. This will present problems when validating the data using the current version of the schema. Let's demonstrate that in the following cells. You will simulate a serving dataset by getting a subset of the training set and dropping the label column (i.e. `Cover_Type`). Afterwards, you will validate this serving dataset using the schema you curated earlier.
###Code
# Declare paths to the serving data
SERVING_DIR = f'{DATA_DIR}/serving'
SERVING_DATA = f'{SERVING_DIR}/serving_dataset.csv'
# Create the directory
!mkdir -p {SERVING_DIR}
# Read a subset of the training dataset
serving_data = pd.read_csv(TRAINING_DATA, nrows=100)
# Drop the `Cover_Type` column
serving_data.drop(columns='Cover_Type', inplace=True)
# Save the modified dataset
serving_data.to_csv(SERVING_DATA, index=False)
# Delete unneeded variable from memory
del serving_data
# Declare StatsOptions to use the curated schema
stats_options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
# Compute the statistics of the serving dataset
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=stats_options)
# Detect anomalies in the serving dataset
anomalies = tfdv.validate_statistics(serving_stats, schema=schema)
# Display the anomalies detected
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
As expected, the missing column is flagged. To fix this, you need to configure the schema to detect when it's being used for training or for inference / serving. You can do this by setting [schema environments](https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basicschema_environments). Exercise 6: Define the serving environmentComplete the code below to ignore the `Cover_Type` feature when validating in the *SERVING* environment.
###Code
schema.default_environment.append('TRAINING')
### START CODE HERE ###
# Hint: Create another default schema environment with name SERVING (pass in a string)
schema.default_environment.append('SERVING')
# Remove Cover_Type feature from SERVING using TFDV
# Hint: Pass in the strings with the name of the feature and environment
tfdv.get_feature(schema, 'Cover_Type').not_in_environment.append('SERVING')
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
If done correctly, running the cell below should show *No Anomalies*.
###Code
# Validate the serving dataset statistics in the `SERVING` environment
anomalies = tfdv.validate_statistics(serving_stats, schema=schema, environment='SERVING')
# Display the anomalies detected
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
We can now save this curated schema in a local directory so we can import it to our TFX pipeline.
###Code
# Declare the path to the updated schema directory
UPDATED_SCHEMA_DIR = f'{PIPELINE_DIR}/updated_schema'
# Create the said directory
!mkdir -p {UPDATED_SCHEMA_DIR}
# Declare the path to the schema file
schema_file = os.path.join(UPDATED_SCHEMA_DIR, 'schema.pbtxt')
# Save the curated schema to the said file
tfdv.write_schema_text(schema, schema_file)
###Output
_____no_output_____
###Markdown
As a sanity check, let's display the schema we just saved and verify that it contains the changes we introduced. It should still show the ranges in the `Domain` column and there should be two environments available.
###Code
# Load the schema from the directory we just created
new_schema = tfdv.load_schema_text(schema_file)
# Display the schema. Check that the Domain column still contains the ranges.
tfdv.display_schema(schema=new_schema)
# The environment list should show `TRAINING` and `SERVING`.
new_schema.default_environment
###Output
_____no_output_____
###Markdown
4.7 - Generate new statistics using the updated schemaYou will now compute the statistics using the schema you just curated. Remember though that TFX components interact with each other by getting artifact information from the metadata store. So you first have to import the curated schema file into ML Metadata. You will do that by using an [ImporterNode](https://www.tensorflow.org/tfx/guide/statsgenusing_the_statsgen_component_with_a_schema) to create an artifact representing the curated schema. Exercise 7: ImporterNodeComplete the code below to create a `Schema` artifact that points to the curated schema directory. Pass in an `instance_name` as well and name it `import_user_schema`.
###Code
### START CODE HERE ###
# Use an ImporterNode to put the curated schema to ML Metadata
user_schema_importer = ImporterNode(
instance_name='import_user_schema',
source_uri=UPDATED_SCHEMA_DIR,
artifact_type=standard_artifacts.Schema
)
# Run the component
context.run(user_schema_importer, enable_cache=False)
### END CODE HERE ###
context.show(user_schema_importer.outputs['result'])
###Output
_____no_output_____
###Markdown
With the artifact successfully created, you can now use `StatisticsGen` and pass in a `schema` parameter to use the curated schema. Exercise 8: Statistics with the new schemaUse `StatisticsGen` to compute the statistics with the schema you updated in the previous section.
###Code
### START CODE HERE ###
# Use StatisticsGen to compute the statistics using the curated schema
statistics_gen_updated = StatisticsGen(
    examples=example_gen.outputs['examples'],
    schema=user_schema_importer.outputs['result']
)
# Run the component
context.run(statistics_gen_updated)
### END CODE HERE ###
context.show(statistics_gen_updated.outputs['statistics'])
###Output
_____no_output_____
###Markdown
The chart will look mostly the same as in the previous runs, but you can see that `Cover_Type` is now listed under the categorical features. That shows that `StatisticsGen` is indeed using the updated schema. 4.8 - Check anomaliesYou will now check if the dataset has any anomalies with respect to the schema. You can do that easily with the [ExampleValidator](https://www.tensorflow.org/tfx/guide/exampleval) component. Exercise 9: ExampleValidatorCheck if there are any anomalies using `ExampleValidator`. You will need to pass in the updated statistics and schema from the previous sections.
###Code
### START CODE HERE ###
example_validator = ExampleValidator(
statistics=statistics_gen_updated.outputs['statistics'],
schema=user_schema_importer.outputs['result'])
# Run the component.
context.run(example_validator)
### END CODE HERE ###
# Visualize the results
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
###Markdown
4.9 - Feature EngineeringYou will now proceed to transform your features into a form suitable for training a model. This can include several methods such as scaling and converting strings to vocabulary indices. It is important for these transformations to be consistent across your training data, and also for the serving data when the model is deployed for inference. TFX ensures this by generating a graph that will process incoming data both during training and inference.Let's first declare the constants and utility function you will use for the exercise.
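As an aside, here is a rough sketch of how that transformation graph can be reused later, once the `Transform` component you will create below has run. The output key and URI access pattern mirror the ones used later in this notebook; treat this as illustrative rather than something you need to run for the assignment.

```python
import tensorflow_transform as tft

# Sketch: load the transform graph produced by the Transform component and
# obtain a Keras layer that replays the exact training-time preprocessing.
transform_graph_uri = transform.outputs['transform_graph'].get()[0].uri
tf_transform_output = tft.TFTransformOutput(transform_graph_uri)
serving_preprocessing_layer = tf_transform_output.transform_features_layer()
```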
###Code
# Set the constants module filename
_cover_constants_module_file = 'cover_constants.py'
%%writefile {_cover_constants_module_file}
SCALE_MINMAX_FEATURE_KEYS = [
"Horizontal_Distance_To_Hydrology",
"Vertical_Distance_To_Hydrology",
]
SCALE_01_FEATURE_KEYS = [
"Hillshade_9am",
"Hillshade_Noon",
"Horizontal_Distance_To_Fire_Points",
]
SCALE_Z_FEATURE_KEYS = [
"Elevation",
"Slope",
"Horizontal_Distance_To_Roadways",
]
VOCAB_FEATURE_KEYS = ["Wilderness_Area"]
HASH_STRING_FEATURE_KEYS = ["Soil_Type"]
LABEL_KEY = "Cover_Type"
# Utility function for renaming the feature
def transformed_name(key):
return key + '_xf'
###Output
Writing cover_constants.py
###Markdown
Next you will define the `preprocessing_fn` to apply transformations to the features. Exercise 10: Preprocessing functionComplete the module to transform your features. Refer to the code comments to get hints on what operations to perform.Here are some links to the docs of the functions you will need to complete this function:- [`tft.scale_by_min_max`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_by_min_max)- [`tft.scale_to_0_1`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_to_0_1)- [`tft.scale_to_z_score`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_to_z_score)- [`tft.compute_and_apply_vocabulary`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/compute_and_apply_vocabulary)- [`tft.hash_strings`](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/hash_strings)
###Code
# Set the transform module filename
_cover_transform_module_file = 'cover_transform.py'
%%writefile {_cover_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import cover_constants
_SCALE_MINMAX_FEATURE_KEYS = cover_constants.SCALE_MINMAX_FEATURE_KEYS
_SCALE_01_FEATURE_KEYS = cover_constants.SCALE_01_FEATURE_KEYS
_SCALE_Z_FEATURE_KEYS = cover_constants.SCALE_Z_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = cover_constants.VOCAB_FEATURE_KEYS
_HASH_STRING_FEATURE_KEYS = cover_constants.HASH_STRING_FEATURE_KEYS
_LABEL_KEY = cover_constants.LABEL_KEY
_transformed_name = cover_constants.transformed_name
def preprocessing_fn(inputs):
features_dict = {}
### START CODE HERE ###
for feature in _SCALE_MINMAX_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling of min_max function
# Hint: Use tft.scale_by_min_max by passing in the respective column
features_dict[_transformed_name(feature)] = tft.scale_by_min_max(data_col)
for feature in _SCALE_01_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling of 0 to 1 function
# Hint: tft.scale_to_0_1
features_dict[_transformed_name(feature)] = tft.scale_to_0_1(data_col)
for feature in _SCALE_Z_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using scaling to z score
# Hint: tft.scale_to_z_score
features_dict[_transformed_name(feature)] = tft.scale_to_z_score(data_col)
for feature in _VOCAB_FEATURE_KEYS:
data_col = inputs[feature]
# Transform using vocabulary available in column
# Hint: Use tft.compute_and_apply_vocabulary
features_dict[_transformed_name(feature)] = tft.compute_and_apply_vocabulary(data_col)
for feature in _HASH_STRING_FEATURE_KEYS:
data_col = inputs[feature]
# Transform by hashing strings into buckets
# Hint: Use tft.hash_strings with the param hash_buckets set to 10
features_dict[_transformed_name(feature)] = tft.hash_strings(data_col, hash_buckets=10)
### END CODE HERE ###
# No change in the label
features_dict[_LABEL_KEY] = inputs[_LABEL_KEY]
return features_dict
###Output
Writing cover_transform.py
###Markdown
Exercise 11: TransformUse the [TFX Transform component](https://www.tensorflow.org/tfx/api_docs/python/tfx/components/Transform) to perform the transformations and generate the transformation graph. You will need to pass in the dataset examples, *curated* schema, and the module that contains the preprocessing function.
###Code
### START CODE HERE ###
# Instantiate the Transform component
transform = Transform(
examples=example_gen.outputs['examples'],
schema=user_schema_importer.outputs['result'],
module_file=os.path.abspath(_cover_transform_module_file))
### END CODE HERE ###
# Run the component
context.run(transform, enable_cache=False)
###Output
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
###Markdown
Let's inspect a few examples of the transformed dataset to see if the transformations are done correctly.
###Code
try:
transform_uri = transform.outputs['transformed_examples'].get()[0].uri
# for grading since context.run() does not work outside the notebook
except IndexError:
print("context.run() was no-op")
examples_path = './pipeline/Transform/transformed_examples'
dir_id = os.listdir(examples_path)[0]
transform_uri = f'{examples_path}/{dir_id}'
# Get the URI of the output artifact representing the transformed examples
train_uri = os.path.join(transform_uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
transformed_dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# import helper function to get examples from the dataset
from util import get_records
# Get 3 records from the dataset
sample_records_xf = get_records(transformed_dataset, 3)
# Print the output
pp.pprint(sample_records_xf)
###Output
[{'features': {'feature': {'Cover_Type': {'int64List': {'value': ['4']}},
'Elevation_xf': {'floatList': {'value': [-1.2982628]}},
'Hillshade_9am_xf': {'floatList': {'value': [0.87007874]}},
'Hillshade_Noon_xf': {'floatList': {'value': [0.9133858]}},
'Horizontal_Distance_To_Fire_Points_xf': {'floatList': {'value': [0.875366]}},
'Horizontal_Distance_To_Hydrology_xf': {'floatList': {'value': [0.18468146]}},
'Horizontal_Distance_To_Roadways_xf': {'floatList': {'value': [-1.1803539]}},
'Slope_xf': {'floatList': {'value': [-1.483387]}},
'Soil_Type_xf': {'int64List': {'value': ['4']}},
'Vertical_Distance_To_Hydrology_xf': {'floatList': {'value': [0.22351421]}},
'Wilderness_Area_xf': {'int64List': {'value': ['0']}}}}},
{'features': {'feature': {'Cover_Type': {'int64List': {'value': ['4']}},
'Elevation_xf': {'floatList': {'value': [-1.3197033]}},
'Hillshade_9am_xf': {'floatList': {'value': [0.86614174]}},
'Hillshade_Noon_xf': {'floatList': {'value': [0.9251968]}},
'Horizontal_Distance_To_Fire_Points_xf': {'floatList': {'value': [0.8678377]}},
'Horizontal_Distance_To_Hydrology_xf': {'floatList': {'value': [0.15175375]}},
'Horizontal_Distance_To_Roadways_xf': {'floatList': {'value': [-1.2572862]}},
'Slope_xf': {'floatList': {'value': [-1.6169325]}},
'Soil_Type_xf': {'int64List': {'value': ['4']}},
'Vertical_Distance_To_Hydrology_xf': {'floatList': {'value': [0.21576227]}},
'Wilderness_Area_xf': {'int64List': {'value': ['0']}}}}},
{'features': {'feature': {'Cover_Type': {'int64List': {'value': ['1']}},
'Elevation_xf': {'floatList': {'value': [-0.5549895]}},
'Hillshade_9am_xf': {'floatList': {'value': [0.9212598]}},
'Hillshade_Noon_xf': {'floatList': {'value': [0.93700784]}},
'Horizontal_Distance_To_Fire_Points_xf': {'floatList': {'value': [0.8533389]}},
'Horizontal_Distance_To_Hydrology_xf': {'floatList': {'value': [0.19183965]}},
'Horizontal_Distance_To_Roadways_xf': {'floatList': {'value': [0.53138816]}},
'Slope_xf': {'floatList': {'value': [-0.6821134]}},
'Soil_Type_xf': {'int64List': {'value': ['4']}},
'Vertical_Distance_To_Hydrology_xf': {'floatList': {'value': [0.30749354]}},
'Wilderness_Area_xf': {'int64List': {'value': ['0']}}}}}]
###Markdown
5 - ML MetadataTFX uses [ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd) under the hood to keep records of artifacts that each component uses. This makes it easier to track how the pipeline is run so you can troubleshoot if needed or want to reproduce results.In this final section of the assignment, you will demonstrate going through this metadata store to retrieve related artifacts. This skill is useful for when you want to recall which inputs are fed to a particular stage of the pipeline. For example, you can know where to locate the schema used to perform feature transformation, or you can determine which set of examples were used to train a model. You will start by importing the relevant modules and setting up the connection to the metadata store. We have also provided some helper functions for displaying artifact information and you can review its code in the external `util.py` module in your lab workspace.
###Code
# Import mlmd and utilities
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
from util import display_types, display_artifacts, display_properties
# Get the connection config to connect to the metadata store
connection_config = context.metadata_connection_config
# Instantiate a MetadataStore instance with the connection config
store = mlmd.MetadataStore(connection_config)
# Declare the base directory where All TFX artifacts are stored
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
###Output
_____no_output_____
###Markdown
5.1 - Accessing stored artifactsWith the connection setup, you can now interact with the metadata store. For instance, you can retrieve all artifact types stored with the `get_artifact_types()` function. For reference, the API is documented [here](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStore).
###Code
# Get the artifact types
types = store.get_artifact_types()
# Display the results
display_types(types)
###Output
_____no_output_____
###Markdown
You can also get a list of artifacts for a particular type to see if there are variations used in the pipeline. For example, you curated a schema in an earlier part of the assignment so this should appear in the records. Running the cell below should show at least two rows: one for the inferred schema, and another for the updated schema. If you ran this notebook before, then you might see more rows because of the different schema artifacts saved under the `./SchemaGen/schema` directory.
###Code
# Retrieve the list of Schema artifacts
schema_list = store.get_artifacts_by_type('Schema')
# Display artifact properties from the results
display_artifacts(store, schema_list, base_dir)
###Output
_____no_output_____
###Markdown
Moreover, you can also get the properties of a particular artifact. TFX declares some properties automatically for each of its components. You will most likely see `name`, `state` and `producer_component` for each artifact type. Additional properties are added where appropriate. For example, a `split_names` property is added in `ExampleStatistics` artifacts to indicate which splits the statistics are generated for.
###Code
# Get the latest ExampleStatistics artifact
statistics_artifact = store.get_artifacts_by_type('ExampleStatistics')[-1]
# Display the properties of the retrieved artifact
display_properties(store, statistics_artifact)
###Output
_____no_output_____
###Markdown
5.2 - Tracking artifactsFor this final exercise, you will build a function to return the parent artifacts of a given one. For example, this should be able to list the artifacts that were used to generate a particular `TransformGraph` instance. Exercise 12: Get parent artifactsComplete the code below to track the inputs of a particular artifact.Tips:* You may find [get_events_by_artifact_ids()](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStoreget_events_by_artifact_ids) and [get_events_by_execution_ids()](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStoreget_executions_by_id) useful here. * Some of the methods of the MetadataStore class (such as the two given above) only accepts iterables so remember to convert to a list (or set) if you only have an int (e.g. pass `[x]` instead of `x`).
###Code
def get_parent_artifacts(store, artifact):
### START CODE HERE ###
# Get the artifact id of the input artifact
artifact_id = artifact.id
# Get events associated with the artifact id
artifact_id_events = store.get_events_by_artifact_ids([artifact_id])
# From the `artifact_id_events`, get the execution ids of OUTPUT events.
# Cast to a set to remove duplicates if any.
execution_id = set(
        event.execution_id
for event in artifact_id_events
if event.type == metadata_store_pb2.Event.OUTPUT
)
# Get the events associated with the execution_id
execution_id_events = store.get_events_by_execution_ids(execution_id)
# From execution_id_events, get the artifact ids of INPUT events.
# Cast to a set to remove duplicates if any.
parent_artifact_ids = set(
event.artifact_id
for event in execution_id_events
if event.type == metadata_store_pb2.Event.INPUT
)
# Get the list of artifacts associated with the parent_artifact_ids
parent_artifact_list = store.get_artifacts_by_id(parent_artifact_ids)
### END CODE HERE ###
return parent_artifact_list
# Get an artifact instance from the metadata store
artifact_instance = store.get_artifacts_by_type('TransformGraph')[0]
# Retrieve the parent artifacts of the instance
parent_artifacts = get_parent_artifacts(store, artifact_instance)
# Display the results
display_artifacts(store, parent_artifacts, base_dir)
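# (Illustrative follow-up, not required by the exercise) The same traversal works for any
# artifact type; for example, trace the inputs that produced the latest ExampleStatistics artifact.
stats_artifact = store.get_artifacts_by_type('ExampleStatistics')[-1]
display_artifacts(store, get_parent_artifacts(store, stats_artifact), base_dir)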
###Output
_____no_output_____ |
23 - Python for Finance/1_Useful Tools/11_Importing and Organizing Data in Python - Part II (7:01)/Importing and Organizing Your Data in Python - Part II - Exercise_Yahoo_Py3.ipynb | ###Markdown
Importing and Organizing Your Data in Python - Part II
###Code
from pandas_datareader import data as wb
F = wb.DataReader('F', data_source='yahoo', start='2005-1-1')
F
###Output
_____no_output_____ |
src/Third Party Modules/python-pptx/python-pptx_tutorial.ipynb | ###Markdown
Python PowerPoint Tutorial https://python-pptx.readthedocs.io/en/latest/user/quickstart.html
###Code
from pptx import Presentation
prs = Presentation()
title_slide_layout = prs.slide_layouts[4]
slide = prs.slides.add_slide(slide_layout=title_slide_layout)
title = slide.shapes.title
subtitle = slide.placeholders[1]
title.text = "Hello, World!"
subtitle.text = "python-pptx was here!"
prs.save('test.pptx')
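# (Illustrative extension, not in the snippet above) Add a bullet slide using the default
# template's "Title and Content" layout; the layout index, the slide text, and the output
# filename below are assumptions chosen just for demonstration.
bullet_slide = prs.slides.add_slide(prs.slide_layouts[1])
bullet_slide.shapes.title.text = 'Adding a Bullet Slide'
body_tf = bullet_slide.placeholders[1].text_frame
body_tf.text = 'First bullet point'
sub_bullet = body_tf.add_paragraph()
sub_bullet.text = 'An indented sub-bullet'
sub_bullet.level = 1
prs.save('test_bullets.pptx')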
from IPython.display import Video
video = Video('Rain Background.mp4')
video
###Output
_____no_output_____ |
notebooks/Demonstrations/tutorials/JupyterTestdrive-Python.ipynb | ###Markdown
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) Basics of PythonIn this notebook, we're going to go over some of the very basics of Python for interested parties. Note that knowledge of Python is *not required* in order to _use_ Jupyter notebooks and participate in Callysto. This section is completely optional and we can create content which doesn't require writing anything in Python. That being said, if you want to introduce Python in the classroom or if a student changes anything, it will be useful to know the basics.
###Code
# Anything in a code cell after a pound sign is a comment!
# You can type anything here and it will not be excecuted
# You can define variables with an equals sign
my_variable = 10 # You cannot put spaces in variable names.
other_variable = "some text" # variables need not be numbers!
# Print will output our variables below the cell
print(my_variable, other_variable)
# Variables are also shared between cells, and print other things too!
print(my_variable, other_variable, "We can print text directly in quotes")
# You can also do mathematical operations in python
x = 10.5
y = 3.14159
multiply = x * y
add = x + y
subtract = x - y
divide = x/y
print(add, subtract, multiply, divide)
###Output
_____no_output_____
###Markdown
--- Exercise 11. In the cell below, assign variable z to your name and run the cell. 2. In the cell below, write a comment on the same line where you define z. Run the cell to make sure the comment is not changing anything.---
###Code
## Enter your code in the line below
z =
##
print(z, "is loving Python!")
###Output
_____no_output_____
###Markdown
Loops in Python
###Code
# You can also write loops with for
for i in range(5):
print(i)
###Output
_____no_output_____
###Markdown
Let's take a moment to digest the loop syntax we saw above ```python We have defined our *iterator* and called it i. This will take values based on the range function that follows it ^ |for i in range(5): <- note the colon print(i) <- note how this line is indented relative to "for" ```To start our loop, we have typed `for i in range(5):`. What this is saying is "Do something for values of `i` that are in values defined by `range(5)`". In this case, `range` creates 5 values by starting at zero and counting up to 4, which is what we saw printed above. Also note how there is a colon after `range` and the print statement is _indented_ relative to `for`. This is because anything that takes place within a specific control structure such as a loop in Python needs to be indented consistently. --- Exercises 21. In the loop in the cell below, get it to loop from 0-9.2. Change the loop below to print out 1-10 instead of 0-9. 3. Execute the loop in the second cell below. Can you get the loop to run without errors?---
###Code
# Exercises 2.1 and 2.2:
## Change the code below for the exercise:
for i in range(5):
print(i)
##
# Exercises 2.3:
sharks = ['hammerhead', 'great white', 'tiger', 'frilled', 'bullhead', 'requiem']
## Change the code below for the exercise:
for shark in sharks
print(shark)
##
###Output
_____no_output_____
###Markdown
If and else statementsLet's build on our previous example and introduce `if` statements
###Code
for i in range(10):
if i == 3:
print(i == 3, i, "is three")
else:
print(i == 3, i, "Is not three")
###Output
_____no_output_____
###Markdown
Notice how again, our `if` and `else` statements are indented relative to `for`, and the print statements inside of our `if` statements are indented even further. What the code above does is check, inside the `if` statement, whether the value of `i` is equal to three using the double equals sign `==`. Should that condition be `True`, we enter the lines of code indented relative to `if`. We then print `i==3` which is `True`, as well as `i` and a string. If `i` is _not_ equal to three, we enter the `else` clause, which the code falls back to whenever our `if` condition is `False`, as we can see in the output of the code above. Python FunctionsYou can also define functions in Python, which can help speed up writing code that performs more complicated calculations. This is done using the `def` keyword in Python, which can be thought of as meaning "define". A simple function is defined below.
###Code
def simple_function(x):
# a double asterix in pyton is an exponent
return x**2
simple_function(10)
###Output
_____no_output_____
###Markdown
In the code above, we're defining a function with `def`, and in this case we chose to give it the name `simple_function`. We also specified which variable it should take as input inside the parentheses, and we _decided_ to call this variable `x` inside the function. We then called the function by its name, with a value for it to use for `x`. We can use functions in combination with other things like loops, as shown below.
###Code
for i in range(5):
new_variable = simple_function(i)
print(new_variable, "Look what we can do!")
###Output
_____no_output_____
###Markdown
Notice how in the loop above, we decided to define a new variable based on the output of our function. We also used our loop iterator `i` as the input to our function to calculate those squares for each value from 0 to 4. --- Exercises 31. In the first cell below, change the name of the defined function to "some_name."2. In the second cell below, change the formula to calculate the area of a rectangle correctly. 3. In the second cell below, change the function to return "The area of the rectangle is" along with the calculated output.---
###Code
# Exercises 3.1:
## Change the code below for the exercise:
def function_name(x):
print('His name is ' + x)
##
(some_name(z))
# Exercises 3.2 & 3.3:
def calculated_area(x, y):
## Change the code below for the exercise:
print()
##
(calculated_area(3, 5))
###Output
_____no_output_____
###Markdown
GraphingWe can also graph our function using Python with the following code
###Code
# this library contains some mathematical functions to use
import numpy as np
# this is the plotting library
import matplotlib.pyplot as plt
# this is a "magic" command which tells jupyter to display our graphs in the notebook
%matplotlib inline
# Here we generate points to plot our function at.
# The first number is the minimum, the second is the maximum, and the third is how many points to generate
values_to_plot = np.linspace(-10,10,100) # this creates a list of 100 points from [-10,10]
# this is how we create a graph. Notice how we're using 'plt' as it is the library we imported
# which contains the plot functions
plt.plot(values_to_plot, simple_function(values_to_plot))
# add some axes labels
plt.xlabel("X values")
plt.ylabel("Y values")
# we can use latex in plots as well!
plt.title("$f(x) = x^2$")
plt.show()
###Output
_____no_output_____
###Markdown
--- Exercises 41. The plot in the first cell below plots the points defined in the lists x & y. Change the numbers in x & y to see how it affects the graph. 2. The plot two cells down has various lines commented out to add different features to the plot. Play around with the plot by adding in various lines. 3. In the plot two cells down, try to add in a third line by defining a function and having it plot the function.---
###Code
# Exercises 4.1:
## Change the code below for the exercise:
x = [0, 2, 4, 6, 8, 10]
y = [0, 4, 8, 12, 16, 20]
##
plt.plot(x,y, marker='o',markersize=8);
# Exercises 4.2 & 4.3:
values_to_plot = np.linspace(0,10,11)
x2 = values_to_plot
y2 = simple_function(values_to_plot)
x3 = values_to_plot
## Change the code below for exercise 4.3:
def new_function(x):
return x
y3 = new_function(values_to_plot)
## Change the code below for the exercise 4.2 & 4.3:
plt.plot(x,y)
plt.plot(x2, y2)
#plt.grid()
#plt.xlabel('x label')
#plt.ylabel('y label')
#plt.title("Simple Plot")
#plt.legend()
##
###Output
_____no_output_____ |
LS_DS_Unit_4_Sprint_Challenge_4.ipynb | ###Markdown
Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Challenge Objectives*You should be able to:** Part 1: Train an RNN classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe the difference between a discriminator and generator in a GAN* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI Part 1 - RNNsUse an RNN to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. - Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
!pip install numpy
import numpy as np
from tensorflow.keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
len(y_train)
import pandas as pd
df = pd.DataFrame({'y_train':y_train})
one_hot = pd.get_dummies(df['y_train'])
y_train_dummies = one_hot.to_numpy()
df = pd.DataFrame({'y_test':y_test})
one_hot = pd.get_dummies(df['y_test'])
y_test_dummies = one_hot.to_numpy()
y_train = y_train_dummies
y_test = y_test_dummies
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 200000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Loading data...')
# (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 80))
model.add(LSTM(80, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(46, activation='softmax'))
# try using different optimizers and different optimizer configs
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=1,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
_____no_output_____
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive model. To *really* improve the model, more playing with parameters would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNs Find the FrogTime to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit":20, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3.PKLS4116_inline.png
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116.png
Completed Image ====> 5.PKLS4116.png
Image URL: https://i.pinimg.com/originals/57/5c/5b/575c5b5c441e27ff04eb50571ee30127.jpg
Completed Image ====> 6.575c5b5c441e27ff04eb50571ee30127.jpg
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 7.alligator-animal-on-pond.jpg
Image URL: https://www.pixoto.com/images-photography/animals/birds/birds-in-a-pond-5986310798966784.jpg
Completed Image ====> 8.birds-in-a-pond-5986310798966784.jpg
Image URL: https://cdn.pixabay.com/photo/2017/04/19/20/37/frog-2243543_960_720.jpg
Completed Image ====> 9.frog-2243543_960_720.jpg
Image URL: https://cdn.pixabay.com/photo/2017/08/17/06/32/goose-2650209_960_720.jpg
Completed Image ====> 10.goose-2650209_960_720.jpg
Image URL: https://img-aws.ehowcdn.com/750x428p/photos.demandstudios.com/getty/article/178/192/87827228_XS.jpg
Completed Image ====> 11.87827228_XS.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Invalid or missing image format. Skipping...
Image URL: https://pondinformer.com/wp-content/uploads/2018/04/pond-fish-that-eat-algae.jpg
Completed Image ====> 12.pond-fish-that-eat-algae.jpg
Image URL: https://i.pinimg.com/736x/dd/69/c9/dd69c94f00312b5c487bf1018f38be58--vocabulary-cards-picture-cards.jpg
Completed Image ====> 13.dd69c94f00312b5c487bf1018f38be58--vocabulary-cards-picture-cards.jpg
Image URL: https://www.scienceabc.com/wp-content/uploads/2016/10/Fishes-in-lake.jpg
Completed Image ====> 14.Fishes-in-lake.jpg
Image URL: https://faaspets.org/wp-content/uploads/HF-2019.jpg
Completed Image ====> 15.HF-2019.jpg
Image URL: https://sterlingshelter-animalshelterinc.netdna-ssl.com/wp-content/uploads/2019/05/koi2.jpg
Completed Image ====> 16.koi2.jpg
Image URL: https://www.pondtrademag.com/wp-content/uploads/1801wildpond001b.jpg
Completed Image ====> 17.1801wildpond001b.jpg
Image URL: https://static.wixstatic.com/media/06af3a_f89e7596d5254e6e8896f054e8c4ea7b~mv2_d_1650_1275_s_2.jpg/v1/fill/w_500,h_500/06af3a_f89e7596d5254e6e8896f054e8c4ea7b~mv2_d_1650_1275_s_2.jpg
Completed Image ====> 18.06af3a_f89e7596d5254e6e8896f054e8c4ea7b~mv2_d_1650_1275_s_2.jpg
Image URL: https://www.denverpost.com/wp-content/uploads/2019/06/fishfloatingeverywheredead1.jpg?w=620
Completed Image ====> 19.fishfloatingeverywheredead1.jpg
Image URL: https://www.wikihow.com/images/b/b3/Clean-a-Koi-Pond-Step-15.jpg
Completed Image ====> 20.Clean-a-Koi-Pond-Step-15.jpg
Errors: 1
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
absolute_image_paths
images = absolute_image_paths[0].get('animal pond')
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
        if entry[1] in ('bullfrog', 'tree_frog', 'tailed_frog'):
return entry[2]
return 0.0
for i in range(len(images)):
img_contains_frog(process_img_path(images[i]))
###Output
W0726 09:48:07.558556 12916 deprecation_wrapper.py:119] From C:\Users\Patrick\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
W0726 09:48:07.730624 12916 deprecation_wrapper.py:119] From C:\Users\Patrick\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
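For the stretch goal mentioned above (checking for fish), a minimal sketch could mirror `img_contains_frog`; the label set below is an assumption covering only a few common ImageNet fish classes (for example `goldfish` and `coho`, which show up in the predictions printed here):
```python
# Stretch-goal sketch: reuse the same preprocessing, but look for fish labels instead.
FISH_LABELS = ('goldfish', 'coho', 'tench', 'barracouta', 'sturgeon', 'gar', 'puffer')

def img_contains_fish(img):
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    model = ResNet50(weights='imagenet')
    results = decode_predictions(model.predict(x), top=3)[0]
    for entry in results:
        if entry[1] in FISH_LABELS:
            return entry[2]  # probability of the fish class
    return 0.0

# e.g. img_contains_fish(process_img_path(images[1]))
```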
###Markdown
Part 3 - Generative Adversarial Networks (GANs)Describe the difference between a discriminator and generator in a GAN in your own words.A discriminator is the checker: its job is to not let anything that doesn't look like the target get through. The generator creates data and it wants to be able to fool the discriminator.At the end of training the discriminator doesn't let subpar data through, and the generator is producing data that looks convincing to the discriminator. Part 4 - More... Answer the following questions, with a target audience of a fellow Data Scientist:- What do you consider your strongest area, as a Data Scientist? - Feature engineering and data analysis; I'd like to improve my NN knowledge for the future. - What area of Data Science would you most like to learn more about, and why? - NN, it is a complex topic and it's super interesting. NNs are very powerful and in the future our problems are only going to get more complicated. - Where do you think Data Science will be in 5 years? - I'm not sure, I expect AI to get better and all the things that we don't exactly have solid rules for will be fleshed out. GPUs will get better so training will be faster; I'm looking forward to that. - What are the threats posed by AI to our society? - AI has the potential to control what people see online, well it's already happening but it will be worse, and the biggest threat at the moment is that AI will take over jobs and those people whose jobs have been automated are going to have to do something else. - How do you think we can counteract those threats? - Though I'm not sure how to prevent AI from filtering what we see online, making programs for those people who will be out of jobs so that they can transition into a different sector of work would help. The best thing would be free education and, if needed, providing necessities. - Do you think achieving General Artificial Intelligence is ever possible? - Maybe, but I think it's more likely that we will form a symbiotic relationship with AI and we all become cyborgs; at that point AGI won't be needed. Everyone's processing power will increase which will be good as a whole for the world. We need to evolve with technology. A few sentences per answer is fine - only elaborate if time allows. Congratulations! Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
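To make the generator/discriminator dynamic described in Part 3 concrete, here is a minimal, illustrative GAN training loop. This is not part of the original challenge; the layer sizes, the number of steps, and the random stand-in "real" data are all assumptions chosen just to show the two training phases.
```python
import numpy as np
from tensorflow.keras import layers, models

latent_dim, data_dim = 16, 2

# Generator: maps random noise to fake samples.
generator = models.Sequential([
    layers.Dense(32, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(data_dim)
])

# Discriminator: outputs the probability that a sample is real.
discriminator = models.Sequential([
    layers.Dense(32, activation='relu', input_shape=(data_dim,)),
    layers.Dense(1, activation='sigmoid')
])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Combined model: the discriminator is frozen here so only the generator updates.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer='adam')

batch = 64
for step in range(100):
    # 1. Train the discriminator: real samples get label 1, generated samples get label 0.
    real = np.random.normal(loc=3.0, size=(batch, data_dim))  # stand-in "real" data
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones((batch, 1)), np.zeros((batch, 1))])
    discriminator.train_on_batch(x, y)

    # 2. Train the generator: it improves when the frozen discriminator labels its fakes as real (1).
    noise = np.random.normal(size=(batch, latent_dim))
    gan.train_on_batch(noise, np.ones((batch, 1)))
```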
###Code
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
###Output
_____no_output_____
###Markdown
Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Challenge Objectives*You should be able to:** Part 1: Train an RNN classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe the difference between a discriminator and generator in a GAN* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI Part 1 - RNNsUse an RNN to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. - Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
!pip install numpy
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from tensorflow.keras.datasets import reuters
import numpy as np
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
len(set(y_train))
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(46, activation='softmax'))
# try using different optimizers and different optimizer configs
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
W0726 16:01:53.860604 139794824738688 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive model. To *really* improve the model, more playing with parameters would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNs Find the FrogTime to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
absolute_image_paths[0]['animal pond'][0], len(absolute_image_paths[0]['animal pond'])
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
        if entry[1] in ('bullfrog', 'tree_frog', 'tailed_frog'):
return entry[2]
return 0.0
for i in range(len(absolute_image_paths[0]['animal pond'])):
img_contains_frog(process_img_path(absolute_image_paths[0]['animal pond'][i]))
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410018), ('n02834397', 'bib', 0.021264324)]
[('n01443537', 'goldfish', 0.8495859), ('n01631663', 'eft', 0.067602046), ('n02536864', 'coho', 0.035163548)]
[('n04243546', 'slot', 0.87124527), ('n04476259', 'tray', 0.049935803), ('n03908618', 'pencil_box', 0.023072328)]
[('n02442845', 'mink', 0.30976582), ('n02363005', 'beaver', 0.23398991), ('n02361337', 'marmot', 0.2079679)]
[('n03485794', 'handkerchief', 0.88227266), ('n02834397', 'bib', 0.022680871), ('n03291819', 'envelope', 0.020095097)]
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
###Output
_____no_output_____
###Markdown
Part 3 - Generative Adversarial Networks (GANs)Describe the difference between a discriminator and generator in a GAN in your own words.__*Your Answer:*__ The generator predicts the features given the label. For example, given an email is marked as spam, predicts (generates) the text of the email. The discriminator tries to predict the label given the features. For example, given the text of an email, predict (discriminate) whether spam or non-spam. Part 4 - More... Answer the following questions, with a target audience of a fellow Data Scientist:- What do you consider your strongest area, as a Data Scientist? My strongest area is quickly picking up and implementing new data science techniques.- What area of Data Science would you most like to learn more about, and why? I would like to learn more about how governments plan to use data science to gain a competitive advantage.- Where do you think Data Science will be in 5 years? Data science will make Universal Basic Income (UBI) a necessity because many white collar jobs will be automated.- What are the threats posed by AI to our society? Lack of privacy and increased income inequality.- How do you think we can counteract those threats? UBI. - Do you think achieving General Artificial Intelligence is ever possible? Although in terms of capability, we are far from achieving artificial general intelligence, the exponential advancement of AI research may possibly culminate into the invention of artificial general intelligence within our lifetime or by the end of this century.A few sentences per answer is fine - only elaborate if time allows. Congratulations! Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
###Code
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish tweets from Austen Allred from tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# TODO - your code
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size=32
austen = [i.text for i in austen_tweets[:180]]
austen
import pandas as pd
import numpy as np
austen_tweets = pd.DataFrame(austen)
austen_tweets.head()
austen_tweets['target'] = 1
austen_tweets = austen_tweets.rename({0: 'tweets'}, axis=1)
austen_tweets.head()
al = [i.text for i in al_tweets[:959]]
al
al_tweets = pd.DataFrame(al)
al_tweets.head()
al_tweets['target'] = 0
al_tweets = al_tweets.rename({0: 'tweets'}, axis=1)
al_tweets.head()
df = pd.concat([austen_tweets, al_tweets])
df.head()
df.shape
from sklearn.utils import shuffle
df= shuffle(df)
df.head()
from sklearn.model_selection import train_test_split
y = df['target']
X = df.drop('target', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
X_train_vectorized = count.fit_transform(X_train)
X_test_vectorized = count.transform(X_test)
# convert the sparse count matrices to dense arrays so pad_sequences can handle them
X_train = sequence.pad_sequences(X_train_vectorized.toarray(), maxlen=maxlen)
X_test = sequence.pad_sequences(X_test_vectorized.toarray(), maxlen=maxlen)
print('x_train shape:', X_train.shape)
print('x_test shape:', X_test.shape)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Train...
Train on 854 samples, validate on 285 samples
Epoch 1/15
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 5. amphibian_bronze-frog_julia-bartosh_400x267.ash
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
# TODO - your code!
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps # https://pillow.readthedocs.io/en/stable/
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
image_path_list = absolute_image_paths['animal pond']
def resize_image(filename, new_width=256, new_height=256):
pil_image = Image.open(filename)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert('RGB')
pil_image_rgb.save(filename, format='JPEG', quality=90)
for path in image_path_list:
resize_image(path, 224, 224)
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def interpret(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
return results
all_predictions = []
for i, image_path in enumerate(image_path_list):
results = interpret(process_img_path(image_path))
all_predictions.append(results)
print(results)
###Output
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5
102858752/102853048 [==============================] - 11s 0us/step
Downloading data from https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
40960/35363 [==================================] - 0s 1us/step
[('n06359193', 'web_site', 0.8918389), ('n04404412', 'television', 0.08547714), ('n04152593', 'screen', 0.006043785)]
[('n01443537', 'goldfish', 0.57714343), ('n02536864', 'coho', 0.3193861), ('n01630670', 'common_newt', 0.020654997)]
[('n04243546', 'slot', 0.9574489), ('n04476259', 'tray', 0.014238177), ('n03908618', 'pencil_box', 0.007384028)]
[('n01698640', 'American_alligator', 0.59087384), ('n01737021', 'water_snake', 0.13598265), ('n01697457', 'African_crocodile', 0.07479092)]
[('n01641577', 'bullfrog', 0.66036487), ('n01644900', 'tailed_frog', 0.3108625), ('n01630670', 'common_newt', 0.012726643)]
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
# TODO - your code!
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor
housing = pd.read_csv('kc_house_data.csv', parse_dates=['date'], na_values=['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A','N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', '?'])
housing.head()
housing.isna().sum()
housing.columns
housing = housing.dropna()
housing.dtypes
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', -1)
y = housing['price'].values
X = housing.drop(['price', 'id','date'], axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=0.25)
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
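# Optional follow-up: TPOT can export the best pipeline it found as a standalone
# Python script (the filename below is just an example).
tpot.export('tpot_kc_housing_pipeline.py')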
###Output
_____no_output_____
###Markdown
Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Challenge Objectives*You should be able to:** Part 1: Train an RNN classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe the difference between a discriminator and generator in a GAN* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI Part 1 - RNNsUse an RNN to fit a multi-class classification model on reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. - Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
!pip install numpy==1.16.1
from tensorflow.keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
word_index
char_to_int = dict((c, i) for i, c in enumerate(word_index)) # "enumerate" returns index and value. Convert it to dictionary
test = char_to_int["the"]
test
# TODO - your code!
import numpy as np
num_words = len(word_index) # the number of unique characters
print("num words : ", num_words)
'''Trains and evaluate a simple MLP
on the Reuters newswire topic classification task.
'''
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import reuters
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
max_words = 1000
batch_size = 32
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words,
test_split=0.2)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
num_classes = np.max(y_train) + 1
print(num_classes, 'classes')
print('Vectorizing sequence data...')
tokenizer = Tokenizer(num_words=max_words)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Convert class vector to binary class matrix '
'(for use with categorical_crossentropy)')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)
print('Building model...')
model = Sequential()
model.add(Embedding(max_words, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_split=0.1)
score = model.evaluate(x_test, y_test,
batch_size=batch_size, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=1000)
#text = np.array(['this is just some random, stupid text'])
#pred = tokenizer.sequences_to_matrix(text, mode='binary')
#model.predict(pred)
#prediction = model.predict(np.array(tokenizer.texts_to_sequences(text)))
#print(prediction)
phrase = "the"
tokens = tokenizer.texts_to_matrix([phrase])
model.predict(np.array(tokens))
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm
def create_model():
# create model
model = Sequential()
    model.add(Embedding(max_words, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
    model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load dataset
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words,
test_split=0.2)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
num_classes = np.max(y_train) + 1
print(num_classes, 'classes')
print('Vectorizing sequence data...')
tokenizer = Tokenizer(num_words=max_words)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Convert class vector to binary class matrix '
'(for use with categorical_crossentropy)')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# create model
model = KerasClassifier(build_fn=create_model, verbose=0)
# define the grid search parameters
batch_size = [10, 20, 40, 60, 80, 100]
epochs = [2, 3]
param_grid = dict(batch_size=batch_size, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(x_train, y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
###Output
8982 train sequences
2246 test sequences
46 classes
Vectorizing sequence data...
x_train shape: (8982, 1000)
x_test shape: (2246, 1000)
Convert class vector to binary class matrix (for use with categorical_crossentropy)
###Markdown
Conclusion - the classifier runs, and gives a pretty decent improvement over a naive model. To *really* improve it, more playing with parameters would help. A recurrent model over the raw word sequences may also do better than this bag-of-words MLP, but this is at least a valid approach. Part 2 - CNNs: Find the Frog. Time to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 15, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3.PKLS4116_inline.png
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116.png
Completed Image ====> 5.PKLS4116.png
Image URL: https://i.pinimg.com/originals/57/5c/5b/575c5b5c441e27ff04eb50571ee30127.jpg
Completed Image ====> 6.575c5b5c441e27ff04eb50571ee30127.jpg
Image URL: https://cdn.pixabay.com/photo/2018/04/11/23/05/frog-3312038__340.jpg
Completed Image ====> 7.frog-3312038__340.jpg
Image URL: https://i.pinimg.com/originals/08/32/22/0832222ee0338d16abf7345aa10bd59f.jpg
Completed Image ====> 8.0832222ee0338d16abf7345aa10bd59f.jpg
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 9.alligator-animal-on-pond.jpg
Image URL: https://www.pixoto.com/images-photography/animals/birds/birds-in-a-pond-5986310798966784.jpg
Completed Image ====> 10.birds-in-a-pond-5986310798966784.jpg
Image URL: https://cdn.pixabay.com/photo/2017/04/19/20/37/frog-2243543_960_720.jpg
Completed Image ====> 11.frog-2243543_960_720.jpg
Image URL: https://img-aws.ehowcdn.com/750x428p/photos.demandstudios.com/getty/article/178/192/87827228_XS.jpg
Completed Image ====> 12.87827228_XS.jpg
Image URL: https://a5.mzstatic.com/us/r30/Purple4/v4/48/05/96/480596cb-892e-df83-830e-76d623bd29fa/screen480x480.jpeg
Completed Image ====> 13.screen480x480.jpeg
Image URL: http://extension.msstate.edu/sites/default/files/news/extension-outdoors/2016/eo20151113_turtle300.jpg
Completed Image ====> 14.eo20151113_turtle300.jpg
Image URL: https://cdn.pixabay.com/photo/2017/08/17/06/32/goose-2650209_960_720.jpg
Completed Image ====> 15.goose-2650209_960_720.jpg
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
# TODO - your code!
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import requests
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
    for entry in results:
        # decode_predictions returns underscored class names, e.g. 'tree_frog', 'tailed_frog'
        if entry[1] in ('bullfrog', 'tree_frog', 'tailed_frog'):
            return entry[2]
    return 0.0
for path in absolute_image_paths[0]['animal pond']:
img_contains_frog(process_img_path(path))
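# Optional efficiency sketch (my assumption, not part of the original notebook):
# img_contains_frog above rebuilds ResNet50 on every call, so loading the weights once
# and reusing the model is considerably faster when scoring many images.
resnet_model = ResNet50(weights='imagenet')

def top_predictions(img, model=resnet_model, top=3):
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return decode_predictions(model.predict(x), top=top)[0]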
# TODO - your code!
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import requests
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_fish(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if 'fish' in entry[1]:
return entry[2]
return 0.0
for path in absolute_image_paths[0]['animal pond'][:5]:
img_contains_fish(process_img_path(path))
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680324), ('n06359193', 'web_site', 0.064099774), ('n02834397', 'bib', 0.021264149)]
[('n01443537', 'goldfish', 0.8495884), ('n01631663', 'eft', 0.067601345), ('n02536864', 'coho', 0.03516318)]
[('n04243546', 'slot', 0.8712438), ('n04476259', 'tray', 0.04993648), ('n03908618', 'pencil_box', 0.023072599)]
[('n02442845', 'mink', 0.30976573), ('n02363005', 'beaver', 0.2339894), ('n02361337', 'marmot', 0.20796825)]
[('n03485794', 'handkerchief', 0.88227284), ('n02834397', 'bib', 0.02268081), ('n03291819', 'envelope', 0.020095214)]
###Markdown
Part 3 - Generative Adversarial Networks (GANs) Describe the difference between a discriminator and generator in a GAN in your own words. The job of the generator is to generate an image that looks real. The job of the discriminator is to give a probability that the generated image is real. The discriminator learns to do this well by being trained on both real and fake images. As it grows more adept at identifying fakes, the generator grows more adept at generating them more realistically. A well-trained GAN finds the generator and discriminator in a Nash equilibrium. Part 4 - More... Answer the following questions, with a target audience of a fellow Data Scientist: - What do you consider your strongest area, as a Data Scientist? I've been told several times that I'm good at explaining technical things to non-technical people. This is a good skill for a data scientist, as we will often be required to explain analyses of data to non-technical people in a non-technical but detailed manner. - What area of Data Science would you most like to learn more about, and why? I'd like to learn more about dealing with "big data", particularly when that data is unstructured. Data in the real world is often messy, but valuable. I think my skills need sharpening in this area. Thankfully, we'll be having more instruction on this. - Where do you think Data Science will be in 5 years? It looks as though AutoML will be very effective in five years or so. Google is making very nice progress here. I expect that more people will have a decent understanding of what exactly it is that data scientists do and when to hire them. Perhaps at this point we will see more technical advancements? After I have more experience "in the field" the answer to this question should be clearer to me. - What are the threats posed by AI to our society? Job displacement is the most immediate threat. One could easily imagine mass outrage leading to politicians campaigning on AI regulation. If we don't accidentally create harmful AGI and manage to create it in a manner that allows it to augment human intelligence, society will fundamentally change on a massive scale. This is increasingly a bad thing. However, this is a distant threat and I rarely think of it. Too many unknowns. - How do you think we can counteract those threats? Regulation is a line of dominoes and China would be the only winner in this case. Don't let the government have any say in who gets to develop AI. Focus more on AI-security startups. - Do you think achieving General Artificial Intelligence is ever possible? Yes, but probably not before 2050. There's not a strong commercial incentive behind AGI research right now and consciousness seems to become more mystifying the more we learn about it. A few sentences per answer is fine - only elaborate if time allows. Congratulations! Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
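A minimal Keras sketch of the two halves described above (added here for illustration; it is not part of the original answer, and the layer sizes, latent_dim and data_dim are arbitrary assumptions):
###Code
from keras.models import Sequential, Model
from keras.layers import Dense, Input

latent_dim, data_dim = 16, 2  # arbitrary illustrative sizes

# Generator: maps random noise vectors to fake samples.
generator = Sequential([Dense(32, activation='relu', input_dim=latent_dim),
                        Dense(data_dim)])

# Discriminator: outputs the probability that a sample is real.
discriminator = Sequential([Dense(32, activation='relu', input_dim=data_dim),
                            Dense(1, activation='sigmoid')])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Combined model used to train the generator: freeze the discriminator so that,
# when generated samples are labelled "real", only the generator's weights move.
discriminator.trainable = False
z = Input(shape=(latent_dim,))
gan = Model(z, discriminator(generator(z)))
gan.compile(loss='binary_crossentropy', optimizer='adam')
###Output
_____no_output_____
###Markdown
In a full training loop the two are updated alternately: the discriminator on a batch of real plus generated samples, then the generator through the combined model above.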
###Code
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish from tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# TODO - your code!
aa_text = []
for i in austen_tweets:
aa_text.append(i.text)
al_text = []
for i in al_tweets:
al_text.append(i.text)
tweet_list = aa_text+al_text
chars=''.join(tweet_list)
chars = list(set(chars))
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
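# Round-trip sanity check (added for illustration): encoding a tweet with char_to_int and
# decoding it with int_to_char should reproduce the original text exactly.
sample_tweet = tweet_list[0]
roundtrip = ''.join(int_to_char[i] for i in [char_to_int[c] for c in sample_tweet])
assert roundtrip == sample_tweet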
#lol = list of lists
aa_char_lol = [ list(i) for i in aa_text]
al_char_lol = [list(i) for i in al_text]
aa_char = [[char_to_int[c] for c in i] for i in aa_text]  # each tweet as a list of character indices
aa_text = np.asarray(aa_text)
al_text = np.asarray(al_text)
aa_text.shape[0]
aa_text[-1]
x = np.concatenate((aa_text, al_text))
y = np.concatenate((np.ones(aa_text.shape),np.zeros(al_text.shape)))
def str_to_int(string):
l = list(string)
return [char_to_int[i] for i in l]
x = list(map(str_to_int, x))
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y)
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
score
acc
###Output
_____no_output_____
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2 - CNNs. Time to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 5. amphibian_bronze-frog_julia-bartosh_400x267.ash
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
import os
from IPython.display import Image, display  # needed before the display() call below
files = os.listdir('downloads/animal-pond')
files[0]
display(Image(filename='downloads/animal-pond/'+files[0]))
# TODO - your code!
import numpy as np
from IPython.display import Image
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
    # check every top-3 label, not just the first, for fish/frog class names
    labels = [entry[1] for entry in results]
    if any("fish" in label for label in labels):
        return "fish"
    if any("frog" in label for label in labels):
        return "frog"
    print(results)
[img_contains_frog(process_img_path('downloads/animal-pond/'+i)) for i in files]
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410024), ('n02834397', 'bib', 0.021264283)]
[('n04243546', 'slot', 0.8712451), ('n04476259', 'tray', 0.04993579), ('n03908618', 'pencil_box', 0.023072347)]
[('n01698640', 'American_alligator', 0.96394104), ('n01697457', 'African_crocodile', 0.026759878), ('n01737021', 'water_snake', 0.005964658)]
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
'head' is not recognized as an internal or external command,
operable program or batch file.
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
# TODO - your code!
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
import pandas as pd
pd.set_option("display.max_columns",21)
import numpy as np
df = pd.read_csv("kc_house_data.csv")
df.head()
x = df['date'][0]
int(x[6:8])
def get_timestamp(x):
return pd.Timestamp(year=int(x[0:4]),month=int(x[4:6]), day = int(x[6:8]))
get_timestamp(df['date'][0])
#df["date"] = df["date"].apply(get_timestamp)
df["date"]= df["date"].apply(lambda x: x.value)
X= df.drop(columns=["price"],axis=1)
y= np.log(df["price"])
from sklearn.model_selection import train_test_split
X_train,X_test, y_train, y_test = train_test_split(X,y)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
# TPOTRegressor's default scoring is negative MSE, so negate it and take the square
# root to get RMSE (here in log-price units, since y = log(price) above)
np.sqrt(tpot.score(X_test, y_test)*-1)
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach!
###Code
import matplotlib.pyplot as plt
import matplotlib.style as style
import numpy as np
import pandas as pd
style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish from tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[180].text.split(" ")
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[959].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
austen_text = ''
for tweet in austen_tweets:
try:
# article.download()
# article.parse()
austen_text += '\n\n' + tweet.text
except:
print('Failed: ' + tweet.url)
austen_text = austen_text.split('\n\n')[1:]
# austen_text = austen_text.split(' ')
print(austen_text)
al_text = ''
for tweet in al_tweets:
try:
# article.download()
# article.parse()
al_text += '\n\n' + tweet.text
except:
print('Failed: ' + tweet.url)
al_text = al_text.split('\n\n')[1:]
print(al_text)
# build the character vocabulary from the text of both accounts
chars = sorted(set(''.join(austen_text) + ''.join(al_text)))  # unique characters, as a sorted list
num_chars = len(chars) # the number of unique characters
txt_data_size = len(austen_text)
print("unique characters : ", num_chars)
print("txt_data_size : ", txt_data_size)
# integer encode (a dictionary in each direction)
char_to_int = dict((c, i) for i, c in enumerate(chars)) # "enumerate" returns index and value. Convert it to dictionary
int_to_char = dict((i, c) for i, c in enumerate(chars))
print(char_to_int)
print("----------------------------------------------------")
print(int_to_char)
print("----------------------------------------------------")
# integer encode input data: each tweet becomes a list of character indices
integer_encoded = [[char_to_int[c] for c in tweet] for tweet in austen_text]
print(integer_encoded)
print("----------------------------------------------------")
print("data length : ", len(integer_encoded))
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
_____no_output_____
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2 - CNNs. Time to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
image_list = list(absolute_image_paths.values())
image_list = image_list[0]
image_list
###Output
_____no_output_____
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
# TODO - your code!
from IPython.display import Image
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
  frog_list = ['bullfrog', 'tree_frog', 'tailed_frog']  # decode_predictions uses underscored class names
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if entry[1] in frog_list:
print(f'\nThis image has a similarity score to a frog of {entry[2]}')
return 0.0
for path in image_list:
print(img_contains_frog(process_img_path(path)))
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410024), ('n02834397', 'bib', 0.021264324)]
0.0
[('n01443537', 'goldfish', 0.8495859), ('n01631663', 'eft', 0.06760218), ('n02536864', 'coho', 0.035163548)]
0.0
[('n04243546', 'slot', 0.8712449), ('n04476259', 'tray', 0.04993588), ('n03908618', 'pencil_box', 0.023072386)]
0.0
[('n01698640', 'American_alligator', 0.96394104), ('n01697457', 'African_crocodile', 0.026759902), ('n01737021', 'water_snake', 0.005964664)]
0.0
[('n01641577', 'bullfrog', 0.95048445), ('n01644900', 'tailed_frog', 0.04144713), ('n01667114', 'mud_turtle', 0.0026259099)]
This image has a similarity score to a frog of 0.9504844546318054
0.0
###Markdown
As expected, the only image with a similarity to a frog is **5. amphibian_bronze-frog_julia-bartosh_400x267.ash**
###Code
image_num = 5
Image(filename=image_list[image_num-1], width=600)
###Output
_____no_output_____
###Markdown
**Stretch - Fish**
###Code
def img_contains_fish(img):
fish_list = ['goldfish', 'fish', 'coho']
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if entry[1] in fish_list:
print(f'\nThis image has a similarity score to a fish of {entry[2]}')
return 0.0
for path in image_list:
print(img_contains_fish(process_img_path(path)))
image_num = 2
Image(filename=image_list[image_num-1], width=600)
###Output
_____no_output_____
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
from tpot import TPOTRegressor
df = pd.read_csv('kc_house_data.csv')
df.head()
df.dtypes
df.isnull().sum()
from sklearn.model_selection import train_test_split
y = df['price'].values
cols_drop = ['id', 'date', 'price']
X = df.drop(cols_drop, axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, test_size=0.25)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# TODO - your code!
%%time
tpot = TPOTRegressor(generations=1, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.predict(X_test)
###Output
_____no_output_____
###Markdown
**Variable Transform**
###Code
df.columns
import seaborn as sns
sns.set(style="ticks", color_codes=True)
x_cols = ['bedrooms', 'bathrooms', 'sqft_living',
'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode',
'lat', 'long', 'sqft_living15', 'sqft_lot15']
# Only plot the scatterplot of x variables with our y variable
sns.pairplot(data = df, y_vars= 'price', x_vars=x_cols)
###Output
_____no_output_____
###Markdown
**Some variables have nonlinear relationships with price, so we consider a log transformation of price.**
###Code
df['log_price'] = np.log(df['price'])
x_cols = ['bedrooms', 'bathrooms', 'sqft_living',
'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade',
'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode',
'lat', 'long', 'sqft_living15', 'sqft_lot15']
# Only plot the scatterplot of x variables with our y variable
sns.pairplot(data = df, y_vars= 'log_price', x_vars=x_cols)
y = df['log_price'].values
cols_drop = ['id', 'date', 'price', 'log_price']
X = df.drop(cols_drop, axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, test_size=0.25)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
%%time
tpot = TPOTRegressor(generations=1, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.predict(X_test)
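# Follow-up sketch (assumes the log-price TPOT run above completed): put the predictions
# back on the original dollar scale so the two TPOT runs can be compared directly.
from sklearn.metrics import mean_squared_error

dollar_preds = np.exp(tpot.predict(X_test))
dollar_rmse = np.sqrt(mean_squared_error(np.exp(y_test), dollar_preds))
print('RMSE in dollars:', dollar_rmse)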
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish from tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
# Stretch Goal - Data for Austen is very less so fetching more tweets
austen_tweets = query_tweets('from:austen', 3000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# Generic imports
import pandas as pd
import numpy as np
# TODO - your code!
tweets_len = []
for i in range(len(austen_tweets)):
tweet_len = len(austen_tweets[i].text)
tweets_len.append(tweet_len)
print('Austen tweet lengths:')
print(f'max: {max(tweets_len)}, min: {min(tweets_len)}')
tweets_len = []
for i in range(len(al_tweets)):
tweet_len = len(al_tweets[i].text)
tweets_len.append(tweet_len)
print('AlYankovic tweet lengths:')
print(f'max: {max(tweets_len)}, min: {min(tweets_len)}')
import re
# Encode the characters to a sequence of integers for the model
def convert_to_ascii(text):
ascii_list = [ord(char) for char in text]
return ascii_list
def remove_url_from_text(text):
return re.sub(r'^https?:\/\/.*[\r\n]*', '', text, flags=re.MULTILINE)
# output = convert_to_ascii('hello')
# print(output)
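# Quick illustrative check of the helper above (example string is my own): the regex is
# anchored with '^' and uses re.MULTILINE, so only URLs at the start of a line are stripped.
print(remove_url_from_text('https://t.co/abc123\nBig announcement today'))  # -> 'Big announcement today'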
austen_tweets_updated = []
for i in range(len(austen_tweets)):
text_without_url = remove_url_from_text(austen_tweets[i].text)
updated_text = convert_to_ascii(text_without_url)
austen_tweets_updated.append(updated_text)
al_tweets_updated = []
for i in range(len(al_tweets)):
text_without_url = remove_url_from_text(al_tweets[i].text)
updated_text = convert_to_ascii(text_without_url)
al_tweets_updated.append(updated_text)
print('Initial tweet counts')
print(len(austen_tweets))
print(len(al_tweets))
print('Updated tweet counts')
print(len(austen_tweets_updated))
print(len(al_tweets_updated))
# Get the data into the appropriate shape/format, including labels
# and a train/test split
combined_updated_tweets = austen_tweets_updated + al_tweets_updated
print('Combined tweet counts')
print(len(combined_updated_tweets))
print(combined_updated_tweets)
X = np.array(combined_updated_tweets)
# print(X)
austen_tweets_label = np.ones((len(austen_tweets_updated),), dtype=np.int)
al_tweets_lable = np.zeros((len(al_tweets_updated),), dtype=np.int)
y = np.concatenate((austen_tweets_label,al_tweets_lable), axis=0)
# Use Keras to fit a predictive model, classifying tweets as being
# from Austen versus Weird Al
from sklearn.model_selection import train_test_split
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 370
batch_size = 32
print('Loading data...')
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# print(x_train)
# print(y_train)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
# Report your overall score and accuracy
print('Test score:', score)
print('Test accuracy:', acc)
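# Usage sketch (illustrative; assumes the model above finished training, and the example
# tweet text is arbitrary): score a single new tweet string. Outputs near 1 correspond to
# the Austen label assigned above, outputs near 0 to Weird Al.
def predict_author_probability(tweet_text):
    encoded = convert_to_ascii(remove_url_from_text(tweet_text))
    padded = sequence.pad_sequences([encoded], maxlen=maxlen)
    return float(model.predict(padded)[0][0])

print(predict_author_probability('Lambda School is hiring!'))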
###Output
Test score: 0.3344605711329616
Test accuracy: 0.838337182585692
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2 - CNNs. Time to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animalpond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animalpond
Evaluating...
Starting Download...
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 1. alligator-animal-on-pond.jpg
Image URL: https://c7.alamy.com/comp/X98MX9/nesting-trumpeter-swan-cygnus-buccinator-near-beaver-creek-yukon-swan-bird-animal-pond-nest-usa-north-america-americ-X98MX9.jpg
Completed Image ====> 2. nesting-trumpeter-swan-cygnus-buccinator-near-beaver-creek-yukon-swan-bird-animal-pond-nest-usa-north-america-americ-x98mx9.jpg
Image URL: https://c8.alamy.com/comp/X285B8/moose-alces-alces-along-denali-highway-alaska-north-america-usa-animal-pond-drinking-X285B8.jpg
Completed Image ====> 3. moose-alces-alces-along-denali-highway-alaska-north-america-usa-animal-pond-drinking-x285b8.jpg
Image URL: https://cdn.pixabay.com/photo/2019/01/18/11/37/animal-3939592_960_720.jpg
Completed Image ====> 4. animal-3939592_960_720.jpg
Image URL: https://get.pxhere.com/photo/water-bird-animal-pond-wildlife-fauna-duck-vertebrate-ducks-waterfowl-water-bird-mallard-canard-duck-bird-ducks-geese-and-swans-seaduck-1348242.jpg
Completed Image ====> 5. water-bird-animal-pond-wildlife-fauna-duck-vertebrate-ducks-waterfowl-water-bird-mallard-canard-duck-bird-ducks-geese-and-swans-seaduck-1348242.jpg
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
# TODO - your code!
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains(img, findstr='frog'):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
#for entry in results:
# print(f'Found {entry[1]} with prediction score {entry[2]}')
for entry in results:
entry_key = entry[1]
if entry_key.find(findstr) != -1:
return entry[2]
return 0.0
# Check the download path
absolute_image_paths
# For each image check the presense of frog or fish
image_path_list = absolute_image_paths['animalpond']
for i, image_path in enumerate(image_path_list):
print(image_path)
processed_image = process_img_path(image_path)
results = img_contains(processed_image, 'frog')
print(f'Prediction for frog in the picture is {results}\n')
results = img_contains(processed_image, 'fish')
print(f'Prediction for fish in the picture is {results}\n')
###Output
/content/downloads/animalpond/1. alligator-animal-on-pond.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animalpond/2. nesting-trumpeter-swan-cygnus-buccinator-near-beaver-creek-yukon-swan-bird-animal-pond-nest-usa-north-america-americ-x98mx9.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animalpond/3. moose-alces-alces-along-denali-highway-alaska-north-america-usa-animal-pond-drinking-x285b8.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animalpond/4. animal-3939592_960_720.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animalpond/5. water-bird-animal-pond-wildlife-fauna-duck-vertebrate-ducks-waterfowl-water-bird-mallard-canard-duck-bird-ducks-geese-and-swans-seaduck-1348242.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
# TODO - your code!
import pandas as pd
from tpot import TPOTRegressor
df = pd.read_csv('kc_house_data.csv')
df.head()
# Feature Engineering
df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day_of_month'] = df['date'].dt.day
df['day_of_week'] = df['date'].dt.weekday
df.dtypes
X = df.drop(columns=['price','date'])
y = df['price']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, test_size=0.25)
%%time
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
y_test_predict = tpot.predict(X_test)
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np
MSE = mean_squared_error(y_test, y_test_predict)
RMSE = (np.sqrt(MSE))
print('MSE is {}'.format(MSE))
print('RMSE is {}'.format(RMSE))
R2 = r2_score(y_test, y_test_predict)
print('R^2 is {}'.format(R2))
###Output
MSE is 23915052667.10568
RMSE is 154644.92447896788
R^2 is 0.8258978365532139
###Markdown
*Data Science Unit 4 Sprint 4* Sprint Challenge RNNs, CNNs, GANs, and AutoML. In this Sprint Challenge, you'll explore some of the cutting edge of Data Science. *Caution* - these approaches can be pretty heavy computationally. All problems are designed to be completed within 5-10 minutes of run time on most machines. If your approach takes longer, please double check your work. Part 1 - RNNs. Use an RNN to fit a classification model on tweets to distinguish between tweets from any two accounts. The following code sample illustrates how to access data from an account (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
###Output
_____no_output_____
###Markdown
Your Tasks:* Select two twitter accounts to gather data from* Use twitterscraper to get ~1,000 tweets from each account* Encode the characters to a sequence of integers for the model* Get the data into the appropriate shape/format, including labels and a train/test split* Use Keras to fit a predictive model, classifying tweets as being from one account or the other* Report your overall score and accuracyFor reference, the [Keras IMDB classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.Note - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Fit a baseline model based on tweet text. Only revisit and push accuracy or incorporate additional features if you get everything else done!
###Code
elon_tweets = query_tweets('from:elonmusk', 1000)
len(elon_tweets)
for i, j in enumerate(austen_tweets):
if i < 10:
print(austen_tweets[i].text)
print("-"*100)
for i, j in enumerate(elon_tweets):
if i < 10:
print(elon_tweets[i].text)
###Output
I love love love working with great people.pic.twitter.com/fCKOm6Vl
Today for all-hands we watched a video of Aaron, a 47-year-old military chaplain now turned software engineer.
The company that hired him less than a year ago has now hired 8 more Lambda School students.
Other people who were creating wealth. And we exchange wealth with trade.
Sounds like by definition it's not a relative measure then
Is the world more wealthy than it was 10 million years ago? Obviously, yes.
How is that possible?
I'd love to see data around what robocalling has done to phone pickups generally.
I just never answer the phone anymore. Telemarketing must be getting hammered.
The tweet wasn’t about Lambda School
You’re assuming I’m talking about myself
This ^
False
----------------------------------------------------------------------------------------------------
Please ignore prior tweets, as that was someone pretending to be me :) This is actually me.
I love the Internet. Comments had me literally ROFL. No, it wasn't intentional. Glad I didn't mention the other letter!
About time to unveil the D and something elsepic.twitter.com/qp23yi59i6
“@TheDailyShow: The House Science, Space and Technology Committee hearing on global warming. http://on.cc.com/1vcjuzt ”
@TalulahRiley Good suggestion :)
Calendar app w tap to nav & traffic predictor in Tesla V6.0 release will radically improve how the car adapts to the owner over time
Would also like to congratulate @Boeing, fellow winner of the @NASA commercial crew program
This is the Crew Dragon spacecraft design that we unveiled earlier this year:http://www.youtube.com/watch?v=yEQrmDoIRO8 …
Deeply honored and appreciative of the trust that @NASA has placed in @SpaceX for the future of human spaceflight
Official Gigafactory address to be: Electric Avenue, McCarran, Nevada
###Markdown
Encode to integers
###Code
# get all tweet texts
both_tweets = ''
for i in austen_tweets:
both_tweets = both_tweets + i.text
for i in elon_tweets:
both_tweets = both_tweets + i.text
# Convert all tweet texts to numeric
chars = list(set(both_tweets))
char_indices = dict((c, i) for i, c in enumerate(chars))
# Convert austen tweet to numeric set based on all tweets
austen_tweets_num = []
for i, j in enumerate(austen_tweets):
num_list = [char_indices[char] for char in j.text]
austen_tweets_num.append(num_list)
# Convert elon tweet to numeric set based on all tweets
elon_tweets_num = []
for i, j in enumerate(elon_tweets):
num_list = [char_indices[char] for char in j.text]
elon_tweets_num.append(num_list)
print(len(austen_tweets_num), len(elon_tweets_num))
print("-"*100)
print(austen_tweets_num[1:3])
print("-"*100)
print(elon_tweets_num[1:3])
###Output
181 721
----------------------------------------------------------------------------------------------------
[[117, 115, 25, 79, 45, 94, 123, 115, 13, 94, 79, 90, 90, 46, 61, 79, 40, 25, 93, 94, 63, 32, 94, 63, 79, 110, 102, 61, 32, 25, 94, 79, 94, 16, 122, 25, 32, 115, 94, 115, 123, 94, 24, 79, 13, 115, 40, 30, 94, 79, 94, 109, 42, 46, 45, 32, 79, 13, 46, 115, 90, 25, 94, 38, 122, 90, 122, 110, 79, 13, 45, 94, 102, 61, 79, 116, 90, 79, 122, 40, 94, 40, 115, 63, 94, 110, 12, 13, 40, 32, 25, 94, 93, 115, 123, 110, 63, 79, 13, 32, 94, 32, 40, 21, 122, 40, 32, 32, 13, 22, 52, 52, 117, 61, 32, 94, 102, 115, 38, 116, 79, 40, 45, 94, 110, 61, 79, 110, 94, 61, 122, 13, 32, 25, 94, 61, 122, 38, 94, 90, 32, 93, 93, 94, 110, 61, 79, 40, 94, 79, 94, 45, 32, 79, 13, 94, 79, 21, 115, 94, 61, 79, 93, 94, 40, 115, 63, 94, 61, 122, 13, 32, 25, 94, 68, 94, 38, 115, 13, 32, 94, 15, 79, 38, 57, 25, 79, 94, 35, 102, 61, 115, 115, 90, 94, 93, 110, 12, 25, 32, 40, 110, 93, 22], [8, 110, 61, 32, 13, 94, 116, 32, 115, 116, 90, 32, 94, 63, 61, 115, 94, 63, 32, 13, 32, 94, 102, 13, 32, 79, 110, 122, 40, 21, 94, 63, 32, 79, 90, 110, 61, 22, 94, 24, 40, 25, 94, 63, 32, 94, 32, 67, 102, 61, 79, 40, 21, 32, 94, 63, 32, 79, 90, 110, 61, 94, 63, 122, 110, 61, 94, 110, 13, 79, 25, 32, 22]]
----------------------------------------------------------------------------------------------------
[[91, 94, 90, 115, 16, 32, 94, 110, 61, 32, 94, 91, 40, 110, 32, 13, 40, 32, 110, 22, 94, 23, 115, 38, 38, 32, 40, 110, 93, 94, 61, 79, 25, 94, 38, 32, 94, 90, 122, 110, 32, 13, 79, 90, 90, 45, 94, 48, 8, 85, 15, 22, 94, 104, 115, 30, 94, 122, 110, 94, 63, 79, 93, 40, 29, 110, 94, 122, 40, 110, 32, 40, 110, 122, 115, 40, 79, 90, 22, 94, 77, 90, 79, 25, 94, 91, 94, 25, 122, 25, 40, 29, 110, 94, 38, 32, 40, 110, 122, 115, 40, 94, 110, 61, 32, 94, 115, 110, 61, 32, 13, 94, 90, 32, 110, 110, 32, 13, 111], [24, 57, 115, 12, 110, 94, 110, 122, 38, 32, 94, 110, 115, 94, 12, 40, 16, 32, 122, 90, 94, 110, 61, 32, 94, 120, 94, 79, 40, 25, 94, 93, 115, 38, 32, 110, 61, 122, 40, 21, 94, 32, 90, 93, 32, 116, 122, 102, 22, 110, 63, 122, 110, 110, 32, 13, 22, 102, 115, 38, 100, 95, 116, 18, 75, 45, 122, 101, 70, 122, 27]]
###Markdown
Run Model
###Code
# Convert to np array for machine learning
X = np.array(austen_tweets_num + elon_tweets_num)
austen_y = np.zeros((len(austen_tweets_num),), dtype=np.int)
elon_y = np.ones((len(elon_tweets_num),), dtype=np.int)
y = np.concatenate((austen_y,elon_y), axis=0)
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
max_features = 2000
maxlen = 80
epochs = 10
batch_size = 20
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('x_train shape:', X_train.shape)
print('x_test shape:', X_test.shape)
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
###Output
Using TensorFlow backend.
###Markdown
Result
###Code
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=batch_size)
###Output
WARNING:tensorflow:From /home/superio/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 721 samples, validate on 181 samples
Epoch 1/10
721/721 [==============================] - 21s 29ms/step - loss: 0.5311 - acc: 0.8086 - val_loss: 0.5593 - val_acc: 0.7569
Epoch 2/10
721/721 [==============================] - 14s 20ms/step - loss: 0.4708 - acc: 0.8100 - val_loss: 0.5344 - val_acc: 0.7569
Epoch 3/10
721/721 [==============================] - 13s 18ms/step - loss: 0.4480 - acc: 0.8100 - val_loss: 0.5231 - val_acc: 0.7569
Epoch 4/10
721/721 [==============================] - 13s 18ms/step - loss: 0.4286 - acc: 0.8100 - val_loss: 0.5050 - val_acc: 0.7680
Epoch 5/10
721/721 [==============================] - 14s 19ms/step - loss: 0.4209 - acc: 0.8128 - val_loss: 0.5056 - val_acc: 0.7790
Epoch 6/10
721/721 [==============================] - 14s 20ms/step - loss: 0.4222 - acc: 0.8252 - val_loss: 0.5022 - val_acc: 0.7735
Epoch 7/10
721/721 [==============================] - 16s 22ms/step - loss: 0.4113 - acc: 0.8280 - val_loss: 0.4942 - val_acc: 0.7845
Epoch 8/10
721/721 [==============================] - 27s 37ms/step - loss: 0.4094 - acc: 0.8308 - val_loss: 0.5187 - val_acc: 0.7735
Epoch 9/10
721/721 [==============================] - 16s 22ms/step - loss: 0.3994 - acc: 0.8280 - val_loss: 0.4854 - val_acc: 0.7901
Epoch 10/10
721/721 [==============================] - 15s 20ms/step - loss: 0.3923 - acc: 0.8197 - val_loss: 0.4980 - val_acc: 0.7956
###Markdown
Part 2 - CNNs. Time to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {'keywords': "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3.PKLS4116_inline.png
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: http://images.animalpicturesociety.com/images/5d/alligator_animal_on_pond.jpg
Completed Image ====> 5.alligator_animal_on_pond.jpg
Errors: 0
###Markdown
At the time of writing at least a few do, but since the internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model. *Hint:* ResNet 50 doesn't just return "frog". The three labels it has for frogs are bullfrog, tree frog, and tailed frog. Stretch goal - also check for fish.
###Code
for i in absolute_image_paths[0]['animal pond']:
print(i.split("/")[-1:])
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
    return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    model = ResNet50(weights='imagenet')
    features = model.predict(x)
    results = decode_predictions(features, top=3)[0]
    print(results)
    # check all top-3 predictions before concluding there is no frog
    for entry in results:
        if 'frog' in entry[1]:
            return entry[2], 'Frog is in this picture'
    return 'Frog is not in this picture'
for i in absolute_image_paths[0]['animal pond']:
print(i.split("/")[-1:])
print(img_contains_frog(process_img_path(i)))
print("-"*50)
###Output
['1.Pondanimals.GIF']
[('n03598930', 'jigsaw_puzzle', 0.8680317), ('n06359193', 'web_site', 0.064100206), ('n02834397', 'bib', 0.021264251)]
Frog is not in this picture
--------------------------------------------------
['2.hqdefault.jpg']
[('n01443537', 'goldfish', 0.8495909), ('n01631663', 'eft', 0.067602254), ('n02536864', 'coho', 0.03516372)]
Frog is not in this picture
--------------------------------------------------
['3.PKLS4116_inline.png']
[('n04243546', 'slot', 0.8712449), ('n04476259', 'tray', 0.049936026), ('n03908618', 'pencil_box', 0.023072386)]
Frog is not in this picture
--------------------------------------------------
['4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg']
[('n02442845', 'mink', 0.30976456), ('n02363005', 'beaver', 0.23399007), ('n02361337', 'marmot', 0.20796947)]
Frog is not in this picture
--------------------------------------------------
['5.alligator_animal_on_pond.jpg']
[('n01698640', 'American_alligator', 0.963947), ('n01697457', 'African_crocodile', 0.026759941), ('n01737021', 'water_snake', 0.005964684)]
Frog is not in this picture
--------------------------------------------------
###Markdown
Part 3 - AutoMLUse [TPOT](https://epistasislab.github.io/tpot/) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
import pandas as pd
url = "https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv"
df = pd.read_csv(url)
df.head()
print(df.shape)
df.isnull().sum()
###Output
(21613, 21)
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters, so your pipeline runs efficiently. You will want to be able to iterate and test. *Hint:* You will have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running - as long as you still get a valid model with reasonable predictive power.
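One way to handle the *type coerce* part of the hint, instead of dropping `date` outright as the next cell does, is sketched below; it assumes the `YYYYMMDDT000000` date format used in this CSV.
###Code
# Sketch: coerce the string 'date' column into numeric features TPOT can use.
# Assumes kc_house_data date strings look like '20141013T000000'.
dates = pd.to_datetime(df['date'], format='%Y%m%dT%H%M%S')
df['sale_year'] = dates.dt.year
df['sale_month'] = dates.dt.month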
###Code
from tpot import TPOTRegressor
from sklearn.model_selection import train_test_split
X = df.drop(['price','id','date'], axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, df['price'].values, test_size=0.2)
###Output
_____no_output_____
###Markdown
Result
###Code
tpot = TPOTRegressor(generations=2, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install category_encoders
from __future__ import print_function
import math
import time
from datetime import datetime
from functools import reduce
import imageio
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import regex
import seaborn as sns
import statsmodels.api as s
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import mode
from skimage import color
from skimage.exposure import rescale_intensity
import category_encoders as ce
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image, sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
type(austen_tweets)
max_length = 100
austin_text = ' '.join([t.text for t in austen_tweets])
al_text = ' '.join([t.text for t in al_tweets])
text = austin_text + al_text
chars = list(set(text)) # unique characters in the combined text, as a list
# integer-encode: map each character to an integer and back
char_to_int = dict((c, i) for i, c in enumerate(chars)) # "enumerate" returns index and value; build a lookup dictionary
int_to_char = dict((i, c) for i, c in enumerate(chars))
# max_length = 100
# [t.text for t in austen_tweets])
# integer_encoded_austin = [char_to_int[i] for i in austin_text]
# integer_encoded_al = [char_to_int[i] for i in al_text]
def text_to_nums(text,austen):
d = {'y': austen}
for i in range(max_length):
d[f"{i}"] = char_to_int[text[i]] if i < len(text) else -1
return d
a = [text_to_nums(t.text, 0) for t in austen_tweets]
a.extend([text_to_nums(t.text, 1) for t in al_tweets])
df = pd.DataFrame(a)
df.head()
X = df.drop('y', axis=1)
y = df['y']
X_train, X_test, y_train, y_test = train_test_split(
X, y.values, train_size=0.75, test_size=0.25)
max_features = len(chars)  # vocabulary size: number of unique characters (used by the Embedding layer)
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
batch_size = 32
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Train...
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 855 samples, validate on 286 samples
Epoch 1/15
855/855 [==============================] - 41s 47ms/step - loss: 0.5397 - acc: 0.8222 - val_loss: 0.4473 - val_acc: 0.8357
Epoch 2/15
855/855 [==============================] - 6s 7ms/step - loss: 0.4329 - acc: 0.8433 - val_loss: 0.4287 - val_acc: 0.8357
Epoch 3/15
855/855 [==============================] - 6s 7ms/step - loss: 0.4179 - acc: 0.8433 - val_loss: 0.4354 - val_acc: 0.8357
Epoch 4/15
855/855 [==============================] - 6s 7ms/step - loss: 0.4158 - acc: 0.8433 - val_loss: 0.4245 - val_acc: 0.8357
Epoch 5/15
855/855 [==============================] - 6s 7ms/step - loss: 0.3868 - acc: 0.8444 - val_loss: 0.3786 - val_acc: 0.8322
Epoch 6/15
855/855 [==============================] - 6s 7ms/step - loss: 0.3532 - acc: 0.8421 - val_loss: 0.3884 - val_acc: 0.8322
Epoch 7/15
855/855 [==============================] - 6s 7ms/step - loss: 0.3405 - acc: 0.8655 - val_loss: 0.4534 - val_acc: 0.8531
Epoch 8/15
855/855 [==============================] - 6s 7ms/step - loss: 0.3454 - acc: 0.8807 - val_loss: 0.3047 - val_acc: 0.8986
Epoch 9/15
855/855 [==============================] - 6s 7ms/step - loss: 0.3031 - acc: 0.8982 - val_loss: 0.3572 - val_acc: 0.8776
Epoch 10/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2756 - acc: 0.9018 - val_loss: 0.3009 - val_acc: 0.9021
Epoch 11/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2778 - acc: 0.8936 - val_loss: 0.4185 - val_acc: 0.7902
Epoch 12/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2809 - acc: 0.8994 - val_loss: 0.3115 - val_acc: 0.8951
Epoch 13/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2788 - acc: 0.9111 - val_loss: 0.3495 - val_acc: 0.8846
Epoch 14/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2825 - acc: 0.9053 - val_loss: 0.3227 - val_acc: 0.8951
Epoch 15/15
855/855 [==============================] - 6s 7ms/step - loss: 0.2648 - acc: 0.9099 - val_loss: 0.2946 - val_acc: 0.8846
286/286 [==============================] - 0s 2ms/step
Test score: 0.2945976211474492
Test accuracy: 0.8846153829481218
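###Markdown
For reference, here is a minimal sketch (not part of the original notebook) of the naive "It's Al!" baseline that the conclusion below compares against, computed from the test labels built above.
###Code
# Sketch: accuracy of always predicting the majority class ("It's Al!").
# Assumes the label convention above: y == 1 for Weird Al, y == 0 for Austen.
naive_accuracy = (y_test == 1).mean()
print('Naive "It\'s Al!" baseline accuracy:', naive_accuracy)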
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
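As a quick sanity check on the character encoding built above, the (otherwise unused) `int_to_char` lookup can decode an encoded row back into text; this is a sketch, not part of the original solution.
###Code
# Sketch: round-trip the first encoded tweet back to characters using int_to_char.
# Assumes df, int_to_char, max_length and the -1 padding convention defined in the cells above.
encoded_cols = [str(i) for i in range(max_length)]
encoded_row = df.loc[0, encoded_cols].tolist()
decoded = ''.join(int_to_char[int(i)] for i in encoded_row if i != -1)
print(decoded[:100])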
###Code
"""Prediction of Users based on Tweet embeddings."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from .models import User
from .twitter import BASILICA
def predict_user(user1_name, user2_name, tweet_text):
"""Determine and return which user is more likely to say a given Tweet."""
user1 = User.query.filter(User.name == user1_name).one()
user2 = User.query.filter(User.name == user2_name).one()
user1_embeddings = np.array([tweet.embedding for tweet in user1.tweets])
user2_embeddings = np.array([tweet.embedding for tweet in user2.tweets])
embeddings = np.vstack([user1_embeddings, user2_embeddings])
labels = np.concatenate([np.ones(len(user1.tweets)),
np.zeros(len(user2.tweets))])
log_reg = LogisticRegression().fit(embeddings, labels)
tweet_embedding = BASILICA.embed_sentence(tweet_text, model='twitter')
return log_reg.predict(np.array(tweet_embedding).reshape(1, -1))
# hypothetical example call: the third argument is a placeholder tweet added so the call parses
predict_user('Austen Allred', 'Weird Al Yankovic',
             'Check out this new data science program')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 15, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://vetstreet-brightspot.s3.amazonaws.com/8d/ac/377fecad46d8820697c26efacc32/koi-pond-thinkstock-153560141-335sm61313.jpg
Completed Image ====> 4. koi-pond-thinkstock-153560141-335sm61313.jpg
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 5. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 6. amphibian_bronze-frog_julia-bartosh_400x267.ash
Image URL: https://cdn.pixabay.com/photo/2017/08/17/06/32/goose-2650209_960_720.jpg
Completed Image ====> 7. goose-2650209_960_720.jpg
Image URL: https://cdn.pixabay.com/photo/2017/04/19/20/37/frog-2243543_960_720.jpg
Completed Image ====> 8. frog-2243543_960_720.jpg
Image URL: https://www.pixoto.com/images-photography/animals/birds/birds-in-a-pond-5986310798966784.jpg
Completed Image ====> 9. birds-in-a-pond-5986310798966784.jpg
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 10. water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116.png
Completed Image ====> 11. pkls4116.png
Image URL: https://i.pinimg.com/originals/12/ae/e2/12aee2aa186a7b69a66563f138bba822.jpg
Completed Image ====> 12. 12aee2aa186a7b69a66563f138bba822.jpg
Image URL: https://i.ytimg.com/vi/uHdZb0LJZNQ/maxresdefault.jpg
Completed Image ====> 13. maxresdefault.jpg
Image URL: https://i.pinimg.com/originals/dd/69/c9/dd69c94f00312b5c487bf1018f38be58.png
Completed Image ====> 14. dd69c94f00312b5c487bf1018f38be58.png
Image URL: https://cdn.pixabay.com/photo/2018/04/11/23/05/frog-3312038__340.jpg
Completed Image ====> 15. frog-3312038__340.jpg
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
absolute_image_paths
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
    for entry in results:
        if entry[1] in ['bullfrog', 'tree_frog', 'tailed_frog']:  # decode_predictions labels use underscores
            return entry[2]
    return 0.0
for p in absolute_image_paths['animal pond']:
if img_contains_frog(process_img_path(p)) != 0.0:
print(f"image {p} contains frog")
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410024), ('n02834397', 'bib', 0.021264324)]
[('n01443537', 'goldfish', 0.8495859), ('n01631663', 'eft', 0.06760218), ('n02536864', 'coho', 0.035163548)]
[('n04243546', 'slot', 0.8712449), ('n04476259', 'tray', 0.04993588), ('n03908618', 'pencil_box', 0.023072386)]
[('n01443537', 'goldfish', 0.98815376), ('n09256479', 'coral_reef', 0.006681344), ('n12985857', 'coral_fungus', 0.00260608)]
[('n01698640', 'American_alligator', 0.96394104), ('n01697457', 'African_crocodile', 0.026759902), ('n01737021', 'water_snake', 0.005964664)]
[('n01641577', 'bullfrog', 0.95048445), ('n01644900', 'tailed_frog', 0.04144713), ('n01667114', 'mud_turtle', 0.0026259099)]
image /content/downloads/animal pond/6. amphibian_bronze-frog_julia-bartosh_400x267.ash contains frog
[('n01860187', 'black_swan', 0.8796092), ('n01847000', 'drake', 0.033985015), ('n01855032', 'red-breasted_merganser', 0.028971087)]
[('n01641577', 'bullfrog', 0.9223301), ('n01644900', 'tailed_frog', 0.07364703), ('n01644373', 'tree_frog', 0.001178118)]
image /content/downloads/animal pond/8. frog-2243543_960_720.jpg contains frog
[('n02009912', 'American_egret', 0.7822415), ('n02012849', 'crane', 0.14339209), ('n02009229', 'little_blue_heron', 0.021143455)]
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
from tpot import TPOTRegressor
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
!ls
###Output
downloads kc_house_data.csv sample_data
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
df = pd.read_csv('kc_house_data.csv')
df.head()
df.dtypes
df['date_timestamp'] = df.date.apply(
    lambda d: datetime.strptime(d, "%Y%m%dT000000").timestamp() if isinstance(d, str) else 0)
X = df.drop(['date','price'], axis=1)
y = df.price
X_train, X_test, y_train, y_test = train_test_split(
X, y.values, train_size=0.75, test_size=0.25)
%%time
tpot = TPOTRegressor(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
py = tpot.predict(X_test)
mean_squared_error(y_test,py), r2_score(y_test,py)
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
al_tweets[0].fullname
austen_tweets[0].fullname
tweets = austen_tweets + al_tweets
len(tweets)
import pandas as pd
pd.set_option('max_colwidth', 75)
data = [[x.fullname, x.text] for x in tweets]
df = pd.DataFrame(data=data, columns=["Name", "Text"])
df.head()
df.tail()
austen_tweets = [x.text for x in austen_tweets]
austen_tweets[:5]
# al_tweets = [x.text for x in al_tweets]
al_tweets[:5]
austen_tweets = ' '.join(austen_tweets)
austen_tweets[:150]
tweets = [x.text for x in tweets]
tweets[:5]
tweets = ' '.join(tweets)
tweets[:150]
chars = list(set(tweets)) # unique characters in the combined tweets, as a list.
num_chars = len(chars) # the number of unique characters
txt_data_size = len(tweets)
print("unique characters : ", num_chars)
print("txt_data_size : ", txt_data_size)
print('All characters: \n', [x for x in chars])
# integer-encode
char_to_int = dict((c, i) for i, c in enumerate(chars)) # "enumerate" returns index and value; build a lookup dictionary
int_to_char = dict((i, c) for i, c in enumerate(chars))
print(char_to_int)
print("----------------------------------------------------")
print(int_to_char)
print("----------------------------------------------------")
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from sklearn.model_selection import train_test_split
max_features = num_chars  # vocabulary size: number of unique characters
# cut tweets after this number of characters
maxlen = 80
batch_size = 32
print('Loading data...')
# integer-encode each tweet at the character level
X = [[char_to_int[c] for c in text] for text in df.Text]
# binary labels: 1 for Austen, 0 for Weird Al
y = (df.Name != al_tweets[0].fullname).astype(int).values
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Loading data...
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://www.featurepics.com/StockImage/20100713/pond-animals-stock-illustration-1611527.jpg
Completed Image ====> 3. pond-animals-stock-illustration-1611527.jpg
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 5. pkls4116_inline.png
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
absolute_image_paths
# Imports
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps # https://pillow.readthedocs.io/en/stable/
import tensorflow as tf
import tensorflow_hub as hub
image_path_list = absolute_image_paths['animal pond']
# Show images
def show_images(image_path_list):
plt.figure();
for i, image_path in enumerate(image_path_list):
plt.subplot(5,5, i+1)
plt.imshow(np.asarray(Image.open(image_path)))
# plt.title(image_path)
plt.grid(False)
plt.yticks([])
plt.xticks([])
plt.show()
show_images(image_path_list)
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
    for entry in results:
        if 'frog' in entry[1]:  # matches bullfrog, tree_frog, tailed_frog
            return entry[2]
    return 0.0
all_predictions = []
for i, image_path in enumerate(image_path_list):
results = img_contains_frog(process_img_path(image_path))
all_predictions.append(results)
print(results)
###Output
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410024), ('n02834397', 'bib', 0.021264324)]
0.8680313
[('n01443537', 'goldfish', 0.8495859), ('n01631663', 'eft', 0.06760218), ('n02536864', 'coho', 0.035163548)]
0.8495859
[('n02002556', 'white_stork', 0.7059624), ('n02012849', 'crane', 0.09389288), ('n02013706', 'limpkin', 0.034540534)]
0.7059624
[('n01698640', 'American_alligator', 0.96394104), ('n01697457', 'African_crocodile', 0.026759902), ('n01737021', 'water_snake', 0.005964664)]
0.96394104
[('n04243546', 'slot', 0.8712449), ('n04476259', 'tray', 0.04993588), ('n03908618', 'pencil_box', 0.023072386)]
0.8712449
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
import pandas as pd
pd.options.display.max_columns = 50
df = pd.read_csv('kc_house_data.csv')
df.head()
df = df.drop('id', axis=1)
df.head()
df.isna().sum()
df.describe()
df = df.drop('date', axis=1)
df.head()
from sklearn.model_selection import train_test_split
X = df.drop('price', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, df['price'].values, train_size=0.75, test_size=0.25)
%%time
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper -q
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# encode characters as sequence of integers
import pandas as pd
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
from sklearn.model_selection import train_test_split
tweets_text = ''
for twt in austen_tweets:
tweets_text += twt.text
for twt in al_tweets:
tweets_text += twt.text
chars = list(set(tweets_text))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
austen_encode = []
for twt in austen_tweets:
austen_encode.append([char_indices[char] for char in twt.text])
al_encode = []
for twt in al_tweets:
al_encode.append([char_indices[char] for char in twt.text])
# get data into proper format (positive class = austen's tweets)
df_austen = pd.DataFrame({'enc' : [e for e in austen_encode], 'target' : [1] * len(austen_tweets)})
df_al = pd.DataFrame({'enc' : [e for e in al_encode], 'target' : [0] * len(al_tweets)})
df = pd.concat([df_austen, df_al], ignore_index=True)
# train test split
max_tweet_length = 280
X = sequence.pad_sequences(df.enc, maxlen=max_tweet_length)
y = df.target
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2,
stratify=y)
# keras predictive model
model = Sequential()
model.add(Embedding(len(chars), 30, input_length=max_tweet_length))  # input_dim = number of unique characters
model.add(LSTM(50))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=100)
# overall score and accuracy
scores = model.evaluate(X_test, y_test, verbose=0)
print("Test Accuracy: %.2f%%" % (scores[1]*100))
###Output
Test Accuracy: 90.83%
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download -q
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 5. amphibian_bronze-frog_julia-bartosh_400x267.ash
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
import numpy as np
import matplotlib.pyplot as plt
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

def check_pond_animals(img_paths, confidence=0.1):
"""Checks for frogs and fish"""
# load resnet50
model = ResNet50(weights='imagenet')
fig, ax = plt.subplots(nrows=len(img_paths), figsize=(10*len(img_paths), 7))
for i, img_path in enumerate(img_paths):
frogs, fish = False, False
# load the image
img = image.load_img(img_path, target_size=(224, 224))
# preprocess image for prediction
X = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = decode_predictions(model.predict(X), top=3)[0]
# see if images have fish, frogs or both
if any(['fish' in pred[1] for pred in preds]):
fish = True
if any(['frog' in pred[1] for pred in preds]):
frogs = True
if fish and frogs:
ax[i].set_title('Contains Fish and Frogs')
elif fish:
ax[i].set_title('Contains Fish')
elif frogs:
ax[i].set_title('Contains Frogs')
else:
ax[i].set_title('No Fish or Frogs')
ax[i].imshow(img)
ax[i].axis('off')
plt.tight_layout()
plt.show()
check_pond_animals(absolute_image_paths['animal pond'])
###Output
_____no_output_____
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot -q
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT and successfully run and report error at the end. Also, in the interest of time, feel free to choose small `generation=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
data = pd.read_csv('kc_house_data.csv')
X = data.drop(columns=['id', 'date', 'price'])
y = data.price
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2)
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=5, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print('Test Score %.2f' % tpot.score(X_test, y_test))
###Output
Test Score -13920826116.80
###Markdown
Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Data Science. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Generative Adversarial Networks (GANs). In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, doublecheck your approach! Challenge Objectives*You should be able to:** Part 1: Train an RNN classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe the difference between a discriminator and generator in a GAN* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI Part 1 - RNNsUse an RNN to fit a multi-class classification model on Reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an RNN model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. - Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.__*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
from tensorflow.keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
max([len(word) for word in word_index])
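# Sketch (not part of the original solution): invert the word index to decode an
# encoded article back into words. load_data above used index_from=3, start_char=1,
# oov_char=2, so indices 0-2 are reserved and real word indices are shifted by 3.
index_to_word = {v + 3: k for k, v in word_index.items()}
index_to_word.update({0: '<pad>', 1: '<start>', 2: '<oov>'})
print(' '.join(index_to_word.get(i, '<?>') for i in x_train[0][:20]))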
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Dropout, SimpleRNN
max_features = 20000
maxlen = 50
batch_size = 64
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
x_train.shape, x_test.shape
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train, 46)
y_test = np_utils.to_categorical(y_test, 46)
model = Sequential()
model.add(Embedding(max_features, 64))
model.add(SimpleRNN(64, return_sequences=True))
model.add(Dropout(0.2))
model.add(SimpleRNN(64, return_sequences=True))
model.add(Dropout(0.2))
model.add(SimpleRNN(64, return_sequences=True))
model.add(Dropout(0.2))
model.add(SimpleRNN(64))
model.add(Dense(46, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=batch_size,
validation_data=(x_test, y_test))
###Output
Train on 8982 samples, validate on 2246 samples
Epoch 1/10
8982/8982 [==============================] - 26s 3ms/step - loss: 2.2640 - acc: 0.4431 - val_loss: 2.0236 - val_acc: 0.4911
Epoch 2/10
8982/8982 [==============================] - 10s 1ms/step - loss: 1.8861 - acc: 0.5190 - val_loss: 1.8025 - val_acc: 0.5303
Epoch 3/10
8982/8982 [==============================] - 10s 1ms/step - loss: 1.6138 - acc: 0.5828 - val_loss: 1.9079 - val_acc: 0.5223
Epoch 4/10
8982/8982 [==============================] - 10s 1ms/step - loss: 1.3220 - acc: 0.6641 - val_loss: 1.8618 - val_acc: 0.5606
Epoch 5/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.9901 - acc: 0.7531 - val_loss: 1.9170 - val_acc: 0.5681
Epoch 6/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.7266 - acc: 0.8198 - val_loss: 2.1229 - val_acc: 0.5539
Epoch 7/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.5437 - acc: 0.8673 - val_loss: 2.1082 - val_acc: 0.5748
Epoch 8/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.4258 - acc: 0.8995 - val_loss: 2.1999 - val_acc: 0.5686
Epoch 9/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.3250 - acc: 0.9257 - val_loss: 2.2242 - val_acc: 0.5801
Epoch 10/10
8982/8982 [==============================] - 10s 1ms/step - loss: 0.2804 - acc: 0.9349 - val_loss: 2.2876 - val_acc: 0.5654
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive model. To *really* improve the model, more playing with parameters would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNs Find the FrogTime to play "find the frog!" Use Keras and ResNet50 (pre-trained) to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3.PKLS4116_inline.png
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
URLError on an image...trying next one... Error: HTTP Error 503: Service Temporarily Unavailable
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116.png
Completed Image ====> 4.PKLS4116.png
Image URL: https://i.pinimg.com/originals/57/5c/5b/575c5b5c441e27ff04eb50571ee30127.jpg
Completed Image ====> 5.575c5b5c441e27ff04eb50571ee30127.jpg
Errors: 1
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
    for entry in results:
        if entry[1] in ('bullfrog', 'tree_frog', 'tailed_frog'):  # decode_predictions labels use underscores
            return entry[2]
    return 0.0
img_urls = ["./downloads/animal pond/5.575c5b5c441e27ff04eb50571ee30127.jpg",
"./downloads/animal pond/4.PKLS4116.png",
"./downloads/animal pond/3.PKLS4116_inline.png",
"./downloads/animal pond/2.hqdefault.jpg",
"./downloads/animal pond/1.Pondanimals.GIF"]
for i in img_urls:
img_contains_frog(process_img_path(i))
###Output
[('n02116738', 'African_hunting_dog', 0.6101571), ('n02105162', 'malinois', 0.19866791), ('n02114712', 'red_wolf', 0.051153056)]
[('n03485794', 'handkerchief', 0.8822726), ('n02834397', 'bib', 0.022680892), ('n03291819', 'envelope', 0.020095171)]
[('n04243546', 'slot', 0.8712449), ('n04476259', 'tray', 0.04993588), ('n03908618', 'pencil_box', 0.023072386)]
[('n01443537', 'goldfish', 0.8495859), ('n01631663', 'eft', 0.06760218), ('n02536864', 'coho', 0.035163548)]
[('n03598930', 'jigsaw_puzzle', 0.8680313), ('n06359193', 'web_site', 0.06410024), ('n02834397', 'bib', 0.021264324)]
###Markdown
* ResNet50 has a hard time evaluating cartoon images. As seen above, it misclassified every single image that was a cartoon. This makes sense, as the training data is largely composed of real images, and the network has a hard time adjusting to previously unseen forms of a frog or a fish (i.e. cartoon forms). The model instead generalizes the pictures into something it does recognize, like 'slot' or 'pencil box'. Part 3 - Generative Adversarial Networks (GANS)Describe the difference between a discriminator and generator in a GAN in your own words.__*Your Answer:*__ In a GAN, a generator and discriminator are working towards completely opposite goals. The discriminator in a GAN, in the case of an image, is trying to determine if an image is real or fake. The discriminator is able to estimate the likelihood of this using some training data that was fed into its network. So if the discriminator is trained using a lot of Picasso paintings, it should be able to discriminate between a fake and a real Picasso. Its goal is to reach 100% accuracy. The generator, on the other hand, has the goal of tricking the discriminator into thinking that its outputs are 100% real. The generator starts off with completely random outputs. The discriminator naturally is able to tell that the generator's initial outputs are fake. The trick of the generator is that it then uses the feedback from the discriminator to change its weights in order to try and make something that is more likely to pass the test. After enough rejections, the generator will get to the point that it is generating something that can pass the discriminator test, but is not quite an exact replica of an already existing Picasso (a minimal code sketch of this setup follows at the end of this section). Part 4 - More... Answer the following questions, with a target audience of a fellow Data Scientist:- What do you consider your strongest area, as a Data Scientist? * I have the most fun in the predictive modeling part of Data Science. This, along with the storytelling aspect of data science, I feel is my strongest suit in the field. I really want to expand my knowledge of best practices when it comes to creating models.- What area of Data Science would you most like to learn more about, and why? * I would like to improve upon my data engineering and pipeline creation skills. I have the least exposure to these skills, and feel like they are important not only to add to my tools, but to really differentiate myself from others that do not appreciate the entire process of crunching data.- Where do you think Data Science will be in 5 years? * I think data science might become a little more organized or regulated than it is today. There are so many tools to use, most of which are not fully taken advantage of. I can see how that in itself can halt a lot of organizations from getting started in building their own pipelines and organizing their data properly. With a more regularized framework of what data science should look like, I feel like it will unlock the benefits to many more.- What are the threats posed by AI to our society? * I honestly do not know what the actual threats AI poses to society from what I have learned so far. Even though it is true that the decisions of a lot of Neural Networks people are building are hazy, I feel like we still understand the limitations of what neural networks can do. The biggest threat I think is ourselves.
AI, as far as I understand, can only go wrong if the creators are not thoughtful about the biases and information they feed into a system.- How do you think we can counteract those threats? * Having a regularized approach to the creation and research of Neural Networks will be incredibly important. If there is a uniform way that people are building networks and testing them, just like any other research, others will be able to replicate and confirm findings (or prove them wrong). With a uniform framework, understanding can spread faster, and there is less chance of the technology running away from us.- Do you think achieving General Artificial Intelligence is ever possible? * I do not know enough about current research, or about how general intelligence works in the first place, to give a great answer. But I think it is possible. It is not artificial in the way we describe it, but general intelligence has already occurred with biology, naturally. It took a few billion years, but why would it be harder when there is directed energy and intelligence towards the effort? It may take a few thousand or a few million years, but I see no reason, as we continue to learn about the brain and the universe around us and improve upon the technology we currently have at the rate it has been improving, to doubt that at some point we will have the ability to create general intelligence. Congratulations! Thank you for your hard work, and congratulations! You've learned a lot, and you should proudly call yourself a Data Scientist.
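Below is a minimal, hedged sketch (not part of the original challenge or answer) of how the generator/discriminator pairing described in Part 3 can be wired together in Keras; the layer sizes, the 784-value flattened image shape, and the 100-dimensional noise input are illustrative assumptions.
###Code
# Minimal GAN wiring sketch (illustrative only; sizes and noise dimension are assumptions).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

noise_dim = 100  # assumed size of the generator's random input

# Generator: maps random noise to a fake flattened 28x28 image (784 values).
generator = Sequential([
    Dense(128, activation='relu', input_dim=noise_dim),
    Dense(784, activation='tanh'),
])

# Discriminator: maps an image to the probability that it is real.
discriminator = Sequential([
    Dense(128, activation='relu', input_dim=784),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Combined model used to train the generator: the discriminator is frozen here so
# only the generator's weights move when it is pushed to make the discriminator say "real".
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer='adam')

# One illustrative training step on stand-in data:
real_images = np.random.rand(32, 784)            # placeholder for a batch of real images
noise = np.random.normal(size=(32, noise_dim))
fake_images = generator.predict(noise)
discriminator.train_on_batch(np.vstack([real_images, fake_images]),
                             np.concatenate([np.ones(32), np.zeros(32)]))
gan.train_on_batch(noise, np.ones(32))           # generator is rewarded for being labelled "real"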
###Code
from IPython.display import HTML
HTML("""<iframe src="https://giphy.com/embed/26xivLqkv86uJzqWk" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/mumm-champagne-saber-26xivLqkv86uJzqWk">via GIPHY</a></p>""")
###Output
_____no_output_____
###Markdown
*Data Science Unit 4 Sprint 4* Sprint Challenge RNNs, CNNs, GANS, and AutoMLIn this Sprint Challenge, you'll explore some of the cutting edge of Data Science. *Caution* - these approaches can be pretty heavy computationally. All problems are designed to be completed within 5-10 minutes of run time on most machines. If your approach takes longer, please double check your work. Part 1 - RNNsUse an RNN to fit a classification model on tweets to distinguish tweets from any two accounts. The following code sample illustrates how to access data from an account (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen',1000)
len(austen_tweets)
ryan_tweets = query_tweets('from:rrherr', 1000)
len(ryan_tweets)
musk_tweets = query_tweets('from:elonmusk', 1000)
len(musk_tweets)
len(austen_tweets + ryan_tweets + musk_tweets)
tweets_text = ''
for tweet in austen_tweets:
tweets_text += tweet.text
for tweet in ryan_tweets:
tweets_text += tweet.text
for tweet in musk_tweets:
tweets_text += tweet.text
chars = list(set(tweets_text))
char_indexs = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
austen_encoded = []
for tweet in austen_tweets:
austen_encoded.append([char_indexs[char] for char in tweet.text])
ryan_encoded = []
for tweet in ryan_tweets:
ryan_encoded.append([char_indexs[char] for char in tweet.text])
musk_encoded = []
for tweet in musk_tweets:
musk_encoded.append([char_indexs[char] for char in tweet.text])
austen_encoded
import pandas as pd
###Output
_____no_output_____
###Markdown
Your Tasks:* Select two twitter accounts to gather data from* Use twitterscraper to get ~1,000 tweets from each account* Encode the characters to a sequence of integers for the model* Get the data into the appropriate shape/format, including labels and a train/test split* Use Keras to fit a predictive model, classifying tweets as being from one account or the other* Report your overall score and accuracyFor reference, the [Keras IMDB classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.Note - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Fit a baseline model based on tweet text. Only revisit and push accuracy or incorporate additional features if you get everything else done!
###Code
df_austen = pd.DataFrame({'enc' : [e for e in austen_encoded], 'target' : [1] * len(austen_tweets)})
df_ryan = pd.DataFrame({'enc' : [e for e in ryan_encoded], 'target' : [0] * len(ryan_tweets)})
df_musk = pd.DataFrame({'enc' : [e for e in musk_encoded], 'target' : [0] * len(musk_tweets)})
df = pd.concat([df_austen, df_ryan, df_musk], ignore_index=True)
df.head()
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.callbacks import LambdaCallback, ModelCheckpoint
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
max_tweet_length = 280
X = sequence.pad_sequences(df.enc, maxlen=max_tweet_length)
y = df.target
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2,
stratify=y)
model = Sequential()
model.add(Embedding(len(chars), 64, input_length=max_tweet_length))  # input_dim = character vocabulary size
model.add(LSTM(50))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=100)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Test Accuracy: %.2f%%" % (scores[1]*100))
###Output
Test Accuracy: 89.19%
###Markdown
Part 2 - CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {'keywords': "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3.PKLS4116_inline.png
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: http://images.animalpicturesociety.com/images/5d/alligator_animal_on_pond.jpg
Completed Image ====> 5.alligator_animal_on_pond.jpg
Errors: 0
###Markdown
At the time of writing at least a few do, but since the internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model. *Hint:* ResNet 50 doesn't just return "frog". The three labels it has for frogs are bullfrog, tree frog, and tailed frog.Stretch goal - also check for fish.
###Code
import matplotlib.pyplot as plt
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains(img, findstr='frog'):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
#for entry in results:
# print(f'Found {entry[1]} with prediction score {entry[2]}')
for entry in results:
entry_key = entry[1]
if entry_key.find(findstr) != -1:
return entry[2]
return 0.0
absolute_image_paths
image_path_list = absolute_image_paths[0]['animal pond']
image_path_list
for i, image_path in enumerate(image_path_list):
print(image_path)
processed_image = process_img_path(image_path)
results = img_contains(processed_image, 'frog')
print(f'Prediction for frog in the picture is {results}\n')
results = img_contains(processed_image, 'fish')
print(f'Prediction for fish in the picture is {results}\n')
###Output
/content/downloads/animal pond/1.Pondanimals.GIF
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5
102858752/102853048 [==============================] - 1s 0us/step
Downloading data from https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
40960/35363 [==================================] - 0s 0us/step
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animal pond/2.hqdefault.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.8495879769325256
/content/downloads/animal pond/3.PKLS4116_inline.png
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animal pond/4.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
/content/downloads/animal pond/5.alligator_animal_on_pond.jpg
Prediction for frog in the picture is 0.0
Prediction for fish in the picture is 0.0
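###Markdown
The `img_contains` helper above rebuilds ResNet50 on every call, so each image pays the full model-loading cost. Below is a minimal sketch (not the original solution) of the same check with the network instantiated once and reused; the `keyword_score` helper name is my own, and it assumes the `image_path_list` variable defined above while keeping the same 'frog'/'fish' substring matching.
###Code
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

resnet = ResNet50(weights='imagenet')  # load the weights a single time

def keyword_score(img_path, keyword, model=resnet):
    # Illustrative helper (not part of the original notebook):
    # return the best top-3 score whose label contains `keyword`, else 0.0
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    results = decode_predictions(model.predict(x), top=3)[0]
    return max((score for _, label, score in results if keyword in label), default=0.0)

for path in image_path_list:  # image_path_list as defined above
    print(path, 'frog:', keyword_score(path, 'frog'), 'fish:', keyword_score(path, 'fish'))
###Output
_____no_output_____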
###Markdown
Part 3 - AutoMLUse [TPOT](https://epistasislab.github.io/tpot/) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
import pandas as pd
url = "https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv"
df = pd.read_csv(url)
df.head()
###Output
_____no_output_____
###Markdown
As with previous questions, your goal is to run TPOT successfully and report error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters, so your pipeline runs efficiently. You will want to be able to iterate and test. *Hint:* You will have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running - as long as you still get a valid model with reasonable predictive power.
###Code
df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day_of_month'] = df['date'].dt.day
df['day_of_week'] = df['date'].dt.weekday
X = df.drop(columns=['price', 'date'])
y = df['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=0.25)
%%time
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
y_test_predict = tpot.predict(X_test)
from sklearn.metrics import mean_squared_error, r2_score
MSE = mean_squared_error(y_test, y_test_predict)
RMSE = (np.sqrt(MSE))
print('MSE is {}'.format(MSE))
print('RMSE is {}'.format(RMSE))
R2 = r2_score(y_test, y_test_predict)
print('R^2 is {}'.format(R2))
###Output
R^2 is 0.8868944135068089
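###Markdown
TPOT can also write out the winning pipeline as a standalone script, which is handy once the search above has finished. A minimal sketch, assuming the fitted `tpot` object from the previous cell; the output filename is just a placeholder.
###Code
# Export the best pipeline found by TPOT as plain scikit-learn code
tpot.export('tpot_kc_house_pipeline.py')  # placeholder filename

# The fitted pipeline is also available directly as a scikit-learn estimator
print(tpot.fitted_pipeline_)
###Output
_____no_output_____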
###Markdown
*Data Science Unit 4 Sprint 4* Sprint Challenge RNNs, CNNs, GANs, and AutoMLIn this Sprint Challenge, you'll explore some of the cutting edge of Data Science. *Caution* - these approaches can be pretty heavy computationally. All problems are designed to be completed with 5-10 minutes of run time on most machines. If your approach takes longer, please double check your work. Part 1 - RNNsUse an RNN to fit a classification model on tweets to distinguish tweets from any two accounts. The following code sample illustrates how to access data from an account (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)): This code produced a result in Domino but had a problem in Colab.
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen',1000)
len(austen_tweets)
austen_tweets[0].text
###Output
_____no_output_____
###Markdown
Your Tasks:* Select two twitter accounts to gather data from* Use twitterscraper to get ~1,000 tweets from each account* Encode the characters to a sequence of integers for the model* Get the data into the appropriate shape/format, including labels and a train/test split* Use Keras to fit a predictive model, classifying tweets as being from one account or the other* Report your overall score and accuracyFor reference, the [Keras IMDB classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.Note - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Fit a baseline model based on tweet text. Only revisit and push accuracy or incorporate additional features if you get everything else done!
###Code
tommy_tweets = query_tweets('from:tommycollison',1000)
patrick_tweets = query_tweets('from:patrickc', 1000)
len(tommy_tweets)
len(patrick_tweets)
type(tommy_tweets[0])
tommy_tweets[0]
type(patrick_tweets[0])
patrick_tweets[0]
#convert tweet objects to plain text strings
tommy_tweets = [tweet.text for tweet in tommy_tweets]
tommy_tweets[0]
#convert tweet objects to plain text strings
patrick_tweets = [tweet.text for tweet in patrick_tweets]
patrick_tweets[0]
#get a string with all text
tommy_tweets_string = " ".join(tommy_tweets)
patrick_tweets_string = " ".join(patrick_tweets)
all_text = " ".join([tommy_tweets_string, patrick_tweets_string])
#make a dictionary with all text
characters = list(set(all_text))
num_characters = len(characters)
text_length = len(all_text)
char_to_int = dict((c, i) for i, c in enumerate(characters))
int_to_char = dict((i, c) for i, c in enumerate(characters))
num_characters
#encode each tweet as a sequence of integer character indices
tommy_tweets_encoded = [[char_to_int[char] for char in tweet] for tweet in tommy_tweets]
tommy_tweets_encoded[0]
#encode each tweet as a sequence of integer character indices
patrick_tweets_encoded = [[char_to_int[char] for char in tweet] for tweet in patrick_tweets]
patrick_tweets_encoded[0]
len(tommy_tweets_encoded)
len(patrick_tweets_encoded)
#label tweets
import numpy as np
tommy_label = np.ones(len(tommy_tweets_encoded)).tolist()
patrick_label = np.zeros(len(patrick_tweets_encoded)).tolist()
print(len(tommy_label), len(patrick_label))
#combine the arrays
encoded_tweets = tommy_tweets_encoded + patrick_tweets_encoded
labels = tommy_label + patrick_label
print(len(encoded_tweets), len(labels))
import pandas as pd
data = {'label': labels, 'tweet': encoded_tweets}
df = pd.DataFrame(data = data)
df.head(5)
X = df['tweet'].values
y = df['label'].values
print(X.shape, y.shape)
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y,
test_size = 0.2,
random_state = 0)
print(ytest[0]) #shape of a y value
Xtest[0] #shape of an X value
max_features = num_characters #number of character mappings in dict
maxlen = 240 #pad/truncate each tweet to at most 240 characters
batch_size = 32
Xtrain = sequence.pad_sequences(Xtrain, maxlen=maxlen)
Xtest = sequence.pad_sequences(Xtest, maxlen=maxlen)
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(Xtrain, ytrain, batch_size=32, epochs=15,
validation_data=(Xtest, ytest))
score, acc = model.evaluate(Xtest, ytest,
batch_size=32)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
_____no_output_____
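###Markdown
The `int_to_char` dictionary built earlier is never used; here is a quick sketch of how it can sanity-check the encoding by decoding one padded sequence back to text (it assumes the variables from the cells above are still in scope).
###Code
# Decode one padded test sequence back into characters.
# pad_sequences pads with 0, which here is also a real character index,
# so the left padding decodes to repeats of whichever character maps to 0.
decoded = ''.join(int_to_char[int(i)] for i in Xtest[0])
print(decoded[-100:])  # look at the (mostly unpadded) tail of the sequence
###Output
_____no_output_____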
###Markdown
Part 2 - CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {'keywords': "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1.Pondanimals.GIF
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2.hqdefault.jpg
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 3.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 4.PKLS4116_inline.png
Image URL: http://images.animalpicturesociety.com/images/5d/alligator_animal_on_pond.jpg
Completed Image ====> 5.alligator_animal_on_pond.jpg
Errors: 0
###Markdown
At the time of writing at least a few do, but since the internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model. *Hint:* ResNet 50 doesn't just return "frog". The three labels it has for frogs are bullfrog, tree frog, and tailed frog.Stretch goal - also check for fish.
###Code
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import os
animals = os.listdir('downloads/animal pond')
animals
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def find_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if entry[1] == 'bullfrog' or entry[1] == 'tree frog' or entry[1] == 'tailed frog':
return entry[2]
for pic in animals:
path = 'downloads/animal pond/'+str(pic)
print(path)
view = image.load_img(path, target_size=(224, 224, 3))
print(find_frog(view))
###Output
downloads/animal pond/2.hqdefault.jpg
[('n01443537', 'goldfish', 0.849588), ('n01631663', 'eft', 0.06760151), ('n02536864', 'coho', 0.03516323)]
None
downloads/animal pond/3.water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
[('n02442845', 'mink', 0.30976495), ('n02363005', 'beaver', 0.23399015), ('n02361337', 'marmot', 0.20796911)]
None
downloads/animal pond/4.PKLS4116_inline.png
[('n04243546', 'slot', 0.8712441), ('n04476259', 'tray', 0.04993626), ('n03908618', 'pencil_box', 0.023072539)]
None
downloads/animal pond/1.Pondanimals.GIF
[('n03598930', 'jigsaw_puzzle', 0.86803216), ('n06359193', 'web_site', 0.06409976), ('n02834397', 'bib', 0.021264203)]
None
downloads/animal pond/5.alligator_animal_on_pond.jpg
[('n01698640', 'American_alligator', 0.96394134), ('n01697457', 'African_crocodile', 0.026759887), ('n01737021', 'water_snake', 0.0059646657)]
None
###Markdown
Part 3 - AutoMLUse [TPOT](https://epistasislab.github.io/tpot/) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
import pandas as pd
url = "https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv"
df = pd.read_csv(url)
df.head()
df.nunique()
df.dtypes
df = df.drop(columns = ['id'])
import datetime
df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df.head()
df = df.drop(columns = 'date')
X = df.drop(columns = 'price').values
y = df['price'].values
###Output
_____no_output_____
###Markdown
As with previous questions, your goal is to run TPOT successfully and report error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters, so your pipeline runs efficiently. You will want to be able to iterate and test. *Hint:* You will have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running - as long as you still get a valid model with reasonable predictive power.
###Code
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2,
random_state = 0)
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=1, population_size=10, cv=5, verbosity=2)
tpot.fit(Xtrain, ytrain)
print(tpot.score(Xtest, ytest))
###Output
_____no_output_____
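###Markdown
`tpot.score` reports TPOT's internal metric (negative MSE for regressors), which can be hard to read on its own. A minimal sketch that also reports RMSE and R² on the held-out split, assuming the fitted `tpot` object and the `Xtrain`/`Xtest`/`ytrain`/`ytest` arrays from the cell above.
###Code
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_pred = tpot.predict(Xtest)
rmse = np.sqrt(mean_squared_error(ytest, y_pred))
print('RMSE:', rmse)
print('R^2 :', r2_score(ytest, y_pred))
###Output
_____no_output_____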
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, double-check your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish between tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# Imports
import numpy as np
# from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from random import sample
from sklearn.model_selection import train_test_split
# Preprocess data: encoding, shaping, train/test split
# encoding
austen_text = ''
al_text = ''
def process_tweets(text, tweets):
for tweet in tweets: # austen_tweets, al_tweets
try:
text += '\n\n' + tweet.text # austen_text, al_text
except:
print('Failed: ' + tweet.text)
text = text.split('\n\n')
return text
def encode_tweets(text):
chars = list(set(text)) # split and remove duplicate characters. convert to list.
num_chars = len(chars) # the number of unique characters
txt_data_size = len(text)
print("unique characters : ", num_chars)
print("txt_data_size : ", txt_data_size)
# one hot encode
char_to_int = dict((c, i) for i, c in enumerate(chars)) # "enumerate" retruns index and value. Convert it to dictionary
int_to_char = dict((i, c) for i, c in enumerate(chars))
# print(char_to_int)
# print("----------------------------------------------------")
# print(int_to_char)
# print("----------------------------------------------------")
# integer encode input data
integer_encoded = [char_to_int[i] for i in text] # "integer_encoded" is a list which has a sequence converted from an original data to integers.
print(integer_encoded)
print("----------------------------------------------------")
print("data length : ", len(integer_encoded))
return integer_encoded
austen_text_procd = encode_tweets(process_tweets(austen_text, austen_tweets))
al_text_procd = encode_tweets(process_tweets(al_text, al_tweets))
x_train, x_test, y_train, y_test = train_test_split(austen_text_procd, sample(al_text_procd, 227),
test_size=0.25, random_state=7)
# Fit keras model and report score, accuracy
'''
Proviso: The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
**Notes**
- Choice of batch size is important; choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
max_features = 2000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
# print('Loading data...')
# print(len(x_train), 'train sequences')
# print(len(x_test), 'test sequences')
# print('Pad sequences (samples x time)')
# x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
# x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
# print('x_train shape:', x_train.shape)
# print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Build model...
WARNING:tensorflow:From C:\Users\jhump\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\jhump\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Train...
WARNING:tensorflow:From C:\Users\jhump\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 170 samples, validate on 57 samples
Epoch 1/15
170/170 [==============================] - 3s 17ms/step - loss: -3.8416 - acc: 0.0059 - val_loss: -11.7103 - val_acc: 0.0000e+00
Epoch 2/15
170/170 [==============================] - 0s 515us/step - loss: -19.5642 - acc: 0.0059 - val_loss: -25.4821 - val_acc: 0.0000e+00
Epoch 3/15
170/170 [==============================] - 0s 474us/step - loss: -38.6183 - acc: 0.0059 - val_loss: -41.3300 - val_acc: 0.0000e+00
Epoch 4/15
170/170 [==============================] - 0s 597us/step - loss: -63.3961 - acc: 0.0059 - val_loss: -59.2663 - val_acc: 0.0000e+00
Epoch 5/15
170/170 [==============================] - 0s 526us/step - loss: -96.7799 - acc: 0.0059 - val_loss: -80.1463 - val_acc: 0.0000e+00
Epoch 6/15
170/170 [==============================] - 0s 589us/step - loss: -144.0230 - acc: 0.0059 - val_loss: -104.1273 - val_acc: 0.0000e+00
Epoch 7/15
170/170 [==============================] - 0s 418us/step - loss: -214.1653 - acc: 0.0118 - val_loss: -132.1498 - val_acc: 0.0000e+00
Epoch 8/15
170/170 [==============================] - 0s 500us/step - loss: -300.0669 - acc: 0.0118 - val_loss: -165.8483 - val_acc: 0.0000e+00
Epoch 9/15
170/170 [==============================] - 0s 653us/step - loss: -430.4064 - acc: 0.0118 - val_loss: -206.0182 - val_acc: 0.0000e+00
Epoch 10/15
170/170 [==============================] - 0s 515us/step - loss: -611.5220 - acc: 0.0118 - val_loss: -253.8431 - val_acc: 0.0000e+00
Epoch 11/15
170/170 [==============================] - 0s 506us/step - loss: -854.1509 - acc: 0.0118 - val_loss: -311.8501 - val_acc: 0.0000e+00
Epoch 12/15
170/170 [==============================] - 0s 1ms/step - loss: -1173.5253 - acc: 0.0118 - val_loss: -380.8584 - val_acc: 0.0000e+00
Epoch 13/15
170/170 [==============================] - 0s 577us/step - loss: -1605.3596 - acc: 0.0118 - val_loss: -461.4919 - val_acc: 0.0000e+00
Epoch 14/15
170/170 [==============================] - 0s 500us/step - loss: -2133.9971 - acc: 0.0118 - val_loss: -557.1449 - val_acc: 0.0000e+00
Epoch 15/15
170/170 [==============================] - 0s 444us/step - loss: -2808.6257 - acc: 0.0118 - val_loss: -665.2370 - val_acc: 0.0000e+00
57/57 [==============================] - 0s 140us/step
Test score: -665.2370369894463
Test accuracy: 0.0
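###Markdown
The proviso in the cell above notes that simpler methods such as TF-IDF + LogReg can do well on a dataset this small. Below is a minimal sketch of that baseline for comparison, assuming the `austen_tweets` and `al_tweets` lists scraped at the top of this notebook are still in memory; the label convention (0 = Austen, 1 = Weird Al) is my own choice for the sketch.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [t.text for t in austen_tweets] + [t.text for t in al_tweets]
labels = [0] * len(austen_tweets) + [1] * len(al_tweets)  # 0 = Austen, 1 = Weird Al (sketch convention)

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                          random_state=42, stratify=labels)

# Character n-gram TF-IDF features feeding a logistic regression classifier
baseline = make_pipeline(TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4)),
                         LogisticRegression())
baseline.fit(X_tr, y_tr)
print('Baseline accuracy:', baseline.score(X_te, y_te))
###Output
_____no_output_____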
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://vetstreet-brightspot.s3.amazonaws.com/8d/ac/377fecad46d8820697c26efacc32/koi-pond-thinkstock-153560141-335sm61313.jpg
Completed Image ====> 3. koi-pond-thinkstock-153560141-335sm61313.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 4. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 5. alligator-animal-on-pond.jpg
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
from IPython.display import Image
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if entry[1] in ('bullfrog', 'tree frog', 'tailed frog'):
return entry[2]
return 0.0
absolute_image_paths
procd_images = []
for path in absolute_image_paths['animal pond']:
    img = process_img_path(path)  # avoid shadowing the keras `image` module
    procd_images.append(path)
    if img_contains_frog(img):
        print(path, 'has a frog in it.')
import os
os.chdir('downloads\\animal pond')
os.listdir()
Image(filename='1. pondanimals.gif', width=600)
for img_path in procd_images:
    print(img_path, img_contains_frog(process_img_path(img_path)))
###Output
_____no_output_____
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!pip install wget
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
_____no_output_____
###Markdown
As with previous questions, your goal is to run TPOT successfully and report error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
# Imports
import pandas as pd
from tpot import TPOTRegressor
kc_house_prices = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv')
df = kc_house_prices.copy()
# shape is (21613, 21)
kc_house_prices.columns
# kc_house_prices = kc_house_prices.drop(['id', 'date', 'grade', 'lat', 'long'], axis=1)
kc_house_prices = kc_house_prices.drop('id', axis=1)
kc_house_prices.isna().sum()
kc_house_prices.date = kc_house_prices['date'].str.replace('-', '').astype(int)
# test_df['Date'].str.replace("-","").astype(int)
kc_house_prices.info()
X = kc_house_prices.drop('price', axis=1)
y = kc_house_prices.price
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# Fit model
tpot = TPOTRegressor(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
###Output
Warning: xgboost.XGBRegressor is not available and will not be used by TPOT.
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, double-check your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish between tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.layers import Flatten
from sklearn.model_selection import train_test_split
import numpy as np
def preprocessing(tweets):
'''
Pre-processes a corpus
'''
fulltext = ''
for twt in tweets:
fulltext += twt.text
chars = list(set(fulltext)) # create set to remove duplicate chars
num_chars = len(chars)
txt_data_size = len(fulltext)
# Integer-encoded dictionaries
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
# Turn each tweet into a list of integers representing the characters
encoded_tweets = []
for twt in tweets:
enc_tweet = [char_to_int[ch] for ch in twt.text]
encoded_tweets.append(enc_tweet)
# Pad each tweet to 280 characters
padded_tweets = sequence.pad_sequences(encoded_tweets, maxlen=280)
print("Unique characters : ", num_chars)
print("Size of tweet library (char) : ", txt_data_size)
print('All characters: \n', [x for x in chars])
print('Character dictionary: \n', char_to_int)
print('Example processed tweet: \n', padded_tweets[0])
print('Shape of tweet library: ', padded_tweets.shape)
return padded_tweets, chars
austen_tweets2, austen_chars = preprocessing(austen_tweets)
al_tweets2, al_chars = preprocessing(al_tweets)
vocabulary = set(austen_chars + al_chars)
X = np.concatenate((austen_tweets2, al_tweets2), axis=0)
y = np.concatenate((np.zeros((austen_tweets2.shape[0],1)),
np.ones((al_tweets2.shape[0],1))), axis=0)
X.shape, y.shape
# Divide into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25, random_state=42)
# Create the Keras LSTM RNN
model = Sequential()
model.add(Embedding(input_dim=len(vocabulary), output_dim=128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
batch_size = 16
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=20,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print(f'Test accuracy: {acc*100:.2f}%')
###Output
286/286 [==============================] - 2s 8ms/step
Test score: 0.00011179090817003036
Test accuracy: 100.00%
###Markdown
Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 10, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 5. amphibian_bronze-frog_julia-bartosh_400x267.ash
Image URL: https://cdn.pixabay.com/photo/2017/08/17/06/32/goose-2650209_960_720.jpg
Completed Image ====> 6. goose-2650209_960_720.jpg
Image URL: https://www.pixoto.com/images-photography/animals/birds/birds-in-a-pond-5986310798966784.jpg
Completed Image ====> 7. birds-in-a-pond-5986310798966784.jpg
Image URL: https://cdn.pixabay.com/photo/2017/04/19/20/37/frog-2243543_960_720.jpg
Completed Image ====> 8. frog-2243543_960_720.jpg
Image URL: https://www.nycgovparks.org/pagefiles/88/urban-wildlife-morningside-red-ear-slider-lg.jpg
Completed Image ====> 9. urban-wildlife-morningside-red-ear-slider-lg.jpg
Image URL: https://get.pxhere.com/photo/water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Completed Image ====> 10. water-animal-pond-wildlife-mammal-fish-eat-fauna-whiskers-vertebrate-otter-mink-marmot-sea-otter-mustelidae-1383482.jpg
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if entry[1] in ('bullfrog', 'tree frog', 'tailed frog'):
return entry[2]
return 0.0
from IPython.display import Image
Image(filename='/content/downloads/animal pond/8. frog-2243543_960_720.jpg', width=600)
img_contains_frog(process_img_path('/content/downloads/animal pond/8. frog-2243543_960_720.jpg'))
from IPython.display import Image
Image(filename='/content/downloads/animal pond/6. goose-2650209_960_720.jpg', width=600)
img_contains_frog(process_img_path('/content/downloads/animal pond/6. goose-2650209_960_720.jpg'))
###Output
[('n01860187', 'black_swan', 0.8796092), ('n01847000', 'drake', 0.033985015), ('n01855032', 'red-breasted_merganser', 0.028971087)]
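###Markdown
Rather than checking files one at a time, here is a short sketch that runs the same check over everything in the download folder. It assumes the Colab path used in the cells above and that every file there is an image PIL can open.
###Code
import os

download_dir = '/content/downloads/animal pond'  # path used in the cells above
for fname in sorted(os.listdir(download_dir)):
    img_path = os.path.join(download_dir, fname)
    score = img_contains_frog(process_img_path(img_path))
    print(fname, '-> frog score:', score)
###Output
_____no_output_____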
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT successfully and report error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
import pandas as pd
pd.set_option('display.max_columns', 500)
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv')
df.head()
df = df.drop(columns=['id','date','lat','long'], axis=1)
from tpot import TPOTRegressor
from sklearn.model_selection import train_test_split
X = df.drop('price', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, df['price'].values, train_size=0.75, test_size=0.25)
%%time
tpot = TPOTRegressor(generations=1, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Lambda School Data Science Unit 4 Sprint Challenge 4 RNNs, CNNs, AutoML, and more...In this sprint challenge, you'll explore some of the cutting edge of Data Science.*Caution* - these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime on Colab or a comparable environment. If something is running longer, double-check your approach! Part 1 - RNNsUse an RNN to fit a simple classification model on tweets to distinguish between tweets from Austen Allred and tweets from Weird Al Yankovic.Following is code to scrape the needed data (no API auth needed, uses [twitterscraper](https://github.com/taspinar/twitterscraper)):
###Code
!pip install twitterscraper
from twitterscraper import query_tweets
austen_tweets = query_tweets('from:austen', 1000)
len(austen_tweets)
austen_tweets[0].text
al_tweets = query_tweets('from:AlYankovic', 1000)
len(al_tweets)
al_tweets[0].text
len(austen_tweets + al_tweets)
###Output
_____no_output_____
###Markdown
Your tasks:- Encode the characters to a sequence of integers for the model- Get the data into the appropriate shape/format, including labels and a train/test split- Use Keras to fit a predictive model, classifying tweets as being from Austen versus Weird Al- Report your overall score and accuracyFor reference, the [Keras IMDB sentiment classification example](https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py) will be useful, as well as the RNN code we used in class.*Note* - focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done! We have 181 tweets from Austen, and 960 from Weird Al. We should re-sample them to make them 50-50. Re-sampling will re-shuffle them, so we don't need to do that before passing them to the RNN. We should mark Austen's tweets as 1 in a separate column, and Weird Al's as 0. We should create X_train, X_test, y_train, y_test using train_test_split
###Code
import pandas as pd
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
a_tweets = list(map(lambda x: x.text, austen_tweets))
al_tweets = list(map(lambda x: x.text, al_tweets))
df1 = pd.DataFrame(data=a_tweets, columns=['tweet'])
df1['austens_tweet'] = np.ones((df1.shape[0], 1))
df1.head()
df2 = pd.DataFrame(data=al_tweets, columns=['tweet'])
df2['austens_tweet'] = np.zeros((df2.shape[0], 1))
df2.head()
df = pd.concat([df1, df2], ignore_index=True)
# df.austens_tweet = df.austens_tweet.astype(int)
print(df.shape)
df.head()
from imblearn.over_sampling import RandomOverSampler
X = df.drop(['austens_tweet'], axis = 1)
y = df.austens_tweet
rus = RandomOverSampler(sampling_strategy=1.0)
X_res, y_res = rus.fit_resample(X, y)
print(X_res.shape, y_res.shape)
print(pd.value_counts(y_res))
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
print(x_train.shape)
x_train.values[0]
# one hot encode
article_text = ''.join(df.tweet.values)
chars = list(set(article_text)) # split and remove duplicate characters. convert to list.
num_chars = len(chars) # the number of unique characters
txt_data_size = len(article_text)
print("unique characters : ", num_chars)
print("txt_data_size : ", txt_data_size)
char_to_int = dict((c, i) for i, c in enumerate(chars)) # "enumerate" retruns index and value. Convert it to dictionary
int_to_char = dict((i, c) for i, c in enumerate(chars))
print(char_to_int)
print("----------------------------------------------------")
print(int_to_char)
print("----------------------------------------------------")
# integer encode input data: one integer sequence per tweet
x_train_res = []
for t in x_train.tweet.values:
    integer_encoded = [char_to_int[i] for i in t]  # sequence of character indices for this tweet
    x_train_res.append(integer_encoded)
x_test_res = []
for t in x_test.tweet.values:
    integer_encoded = [char_to_int[i] for i in t]
    x_test_res.append(integer_encoded)
# print(x_train[:5])
# print(x_test[:5])
x_train = x_train_res
x_test = x_test_res
x_test = x_test[:len(x_train)] # Keep input samples same as test samples
y_test = y_test[:len(x_train)]
print(len(x_train))
print(len(x_test))
print(len(y_train))
print(len(y_test))
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen) # https://github.com/keras-team/keras-preprocessing/blob/master/keras_preprocessing/sequence.py
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128)) # https://keras.io/layers/embeddings/
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2)) # https://keras.io/layers/recurrent/#lstm
model.add(Dense(1, activation='sigmoid')) # https://keras.io/layers/core/#dense
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('len(x_train):', len(x_train), 'len(y_train):', len(y_train))
print('len(x_test):', len(x_test), 'len(y_test):', len(y_test))
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
###Output
Pad sequences (samples x time)
x_train shape: (570, 80)
x_test shape: (570, 80)
Build model...
len(x_train): 570 len(y_train): 570
len(x_test): 570 len(y_test): 570
Train...
WARNING:tensorflow:From /anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 570 samples, validate on 570 samples
Epoch 1/15
570/570 [==============================] - 7s 13ms/step - loss: 0.5308 - acc: 0.8316 - val_loss: 0.5164 - val_acc: 0.8263
Epoch 2/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4199 - acc: 0.8561 - val_loss: 0.5124 - val_acc: 0.8263
Epoch 3/15
570/570 [==============================] - 2s 4ms/step - loss: 0.4108 - acc: 0.8561 - val_loss: 0.5010 - val_acc: 0.8263
Epoch 4/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4146 - acc: 0.8561 - val_loss: 0.5297 - val_acc: 0.8263
Epoch 5/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4169 - acc: 0.8561 - val_loss: 0.5420 - val_acc: 0.8263
Epoch 6/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4202 - acc: 0.8561 - val_loss: 0.5337 - val_acc: 0.8263
Epoch 7/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4185 - acc: 0.8561 - val_loss: 0.5584 - val_acc: 0.8263
Epoch 8/15
570/570 [==============================] - 3s 5ms/step - loss: 0.4125 - acc: 0.8561 - val_loss: 0.5575 - val_acc: 0.8263
Epoch 9/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4154 - acc: 0.8561 - val_loss: 0.5688 - val_acc: 0.8263
Epoch 10/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4179 - acc: 0.8561 - val_loss: 0.5514 - val_acc: 0.8263
Epoch 11/15
570/570 [==============================] - 2s 4ms/step - loss: 0.4181 - acc: 0.8561 - val_loss: 0.5698 - val_acc: 0.8263
Epoch 12/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4150 - acc: 0.8561 - val_loss: 0.5605 - val_acc: 0.8263
Epoch 13/15
570/570 [==============================] - 3s 5ms/step - loss: 0.4180 - acc: 0.8561 - val_loss: 0.5702 - val_acc: 0.8263
Epoch 14/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4145 - acc: 0.8561 - val_loss: 0.5654 - val_acc: 0.8263
Epoch 15/15
570/570 [==============================] - 3s 4ms/step - loss: 0.4161 - acc: 0.8561 - val_loss: 0.5649 - val_acc: 0.8263
570/570 [==============================] - 0s 626us/step
Test score: 0.5649347370130974
Test accuracy: 0.8263157896828233
###Markdown
Conclusion - RNN runs, and gives pretty decent improvement over a naive "It's Al!" model. To *really* improve the model, more playing with parameters, and just getting more data (particularly Austen tweets), would help. Also - RNN may well not be the best approach here, but it is at least a valid one. Part 2- CNNsTime to play "find the frog!" Use Keras and ResNet50 to detect which of the following images contain frogs:
###Code
!pip install google_images_download
from google_images_download import google_images_download
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "animal pond", "limit": 5, "print_urls": True}
absolute_image_paths = response.download(arguments)
###Output
Item no.: 1 --> Item name = animal pond
Evaluating...
Starting Download...
Image URL: https://www.enchantedlearning.com/pgifs/Pondanimals.GIF
Completed Image ====> 1. pondanimals.gif
Image URL: https://i.ytimg.com/vi/NCbu0TND9vE/hqdefault.jpg
Completed Image ====> 2. hqdefault.jpg
Image URL: https://pklifescience.com/staticfiles/articles/images/PKLS4116_inline.png
Completed Image ====> 3. pkls4116_inline.png
Image URL: https://pixnio.com/free-images/fauna-animals/reptiles-and-amphibians/alligators-and-crocodiles-pictures/alligator-animal-on-pond.jpg
Completed Image ====> 4. alligator-animal-on-pond.jpg
Image URL: https://www.nwf.org/-/media/NEW-WEBSITE/Programs/Garden-for-Wildlife/amphibian_bronze-frog_Julia-Bartosh_400x267.ashx
Completed Image ====> 5. amphibian_bronze-frog_julia-bartosh_400x267.ash
Errors: 0
###Markdown
At time of writing at least a few do, but since the Internet changes - it is possible your 5 won't. You can easily verify yourself, and (once you have working code) increase the number of images you pull to be more sure of getting a frog. Your goal is to validly run ResNet50 on the input images - don't worry about tuning or improving the model.*Hint* - ResNet 50 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`*Stretch goal* - also check for fish.
###Code
import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_contains_frog_fish(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
model = ResNet50(weights='imagenet')
features = model.predict(x)
results = decode_predictions(features, top=3)[0]
print(results)
for entry in results:
if ('frog' in entry[1]) or ('fish' in entry[1]):
return entry[2]
return 0.0
from IPython.display import Image
Image(filename='downloads/animal pond/1. pondanimals.gif', width=600)
img_contains_frog_fish(process_img_path('downloads/animal pond/1. pondanimals.gif'))
from IPython.display import Image
Image(filename='downloads/animal pond/2. hqdefault.jpg', width=600)
img_contains_frog_fish(process_img_path('downloads/animal pond/2. hqdefault.jpg'))
from IPython.display import Image
Image(filename='downloads/animal pond/3. pkls4116_inline.png', width=600)
img_contains_frog_fish(process_img_path('downloads/animal pond/3. pkls4116_inline.png'))
###Output
[('n04243546', 'slot', 0.87124443), ('n04476259', 'tray', 0.04993609), ('n03908618', 'pencil_box', 0.02307246)]
###Markdown
Part 3 - AutoMLUse [TPOT](https://github.com/EpistasisLab/tpot) to fit a predictive model for the King County housing data, with `price` as the target output variable.
###Code
!pip install tpot
!wget https://raw.githubusercontent.com/ryanleeallred/datasets/master/kc_house_data.csv
!head kc_house_data.csv
###Output
id,date,price,bedrooms,bathrooms,sqft_living,sqft_lot,floors,waterfront,view,condition,grade,sqft_above,sqft_basement,yr_built,yr_renovated,zipcode,lat,long,sqft_living15,sqft_lot15
"7129300520","20141013T000000",221900,3,1,1180,5650,"1",0,0,3,7,1180,0,1955,0,"98178",47.5112,-122.257,1340,5650
"6414100192","20141209T000000",538000,3,2.25,2570,7242,"2",0,0,3,7,2170,400,1951,1991,"98125",47.721,-122.319,1690,7639
"5631500400","20150225T000000",180000,2,1,770,10000,"1",0,0,3,6,770,0,1933,0,"98028",47.7379,-122.233,2720,8062
"2487200875","20141209T000000",604000,4,3,1960,5000,"1",0,0,5,7,1050,910,1965,0,"98136",47.5208,-122.393,1360,5000
"1954400510","20150218T000000",510000,3,2,1680,8080,"1",0,0,3,8,1680,0,1987,0,"98074",47.6168,-122.045,1800,7503
"7237550310","20140512T000000",1.225e+006,4,4.5,5420,101930,"1",0,0,3,11,3890,1530,2001,0,"98053",47.6561,-122.005,4760,101930
"1321400060","20140627T000000",257500,3,2.25,1715,6819,"2",0,0,3,7,1715,0,1995,0,"98003",47.3097,-122.327,2238,6819
"2008000270","20150115T000000",291850,3,1.5,1060,9711,"1",0,0,3,7,1060,0,1963,0,"98198",47.4095,-122.315,1650,9711
"2414600126","20150415T000000",229500,3,1,1780,7470,"1",0,0,3,7,1050,730,1960,0,"98146",47.5123,-122.337,1780,8113
###Markdown
As with previous questions, your goal is to run TPOT successfully and report error at the end. Also, in the interest of time, feel free to choose small `generations=1` and `population_size=10` parameters so your pipeline runs efficiently and you are able to iterate and test.*Hint* - you'll have to drop and/or type coerce at least a few variables to get things working. It's fine to err on the side of dropping to get things running, as long as you still get a valid model with reasonable predictive power.
###Code
housing = pd.read_csv('kc_house_data.csv', header=0)
print(housing.shape)
housing.head()
housing.dtypes
for col in housing.select_dtypes(include='number'):
housing[col] = housing[col].astype(float)
housing.dtypes
housing['year'] = housing.date.apply(lambda x: int(x[:4])).astype(float)
housing['month'] = housing.date.apply(lambda x: int(x[4:6])).astype(float)
housing = housing.drop(columns=['date'], axis=1)
from tpot import TPOTRegressor
X = housing.drop('price', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, housing['price'].values, train_size=0.75, test_size=0.25)
%%time
tpot = TPOTRegressor(generations=5, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
###Output
_____no_output_____
scripts_and_notebooks/notebooks/analyzed/fractopo_network_Getaberget_20m_1_3_area.ipynb | ###Markdown
Fractopo – Fracture Network Analysis
###Code
import warnings
warnings.filterwarnings("ignore")
# Cell contents only required for development env runs
from importlib.util import find_spec
if find_spec("fractopo") is None:
import sys
sys.path.append("..")
from fractopo.analysis.network import Network
import fractopo.contour_grid as contour_grid
import matplotlib.pyplot as plt
import geopandas as gpd
plt.close()
###Output
_____no_output_____
###Markdown
Data Trace and target area data are required. The paths can be URLs to GeoJSON files or local file paths to spatial file types (e.g. shapefile, geopackage). The name is used in plot labels and titles.
###Code
trace_data = ""
area_data = ""
name = ""
# Parameters
trace_data = "/mnt/d/Data/trace_repo/ahvenanmaa/traces/20m/Getaberget_20m_1_traces.gpkg"
area_data = "/mnt/d/Data/trace_repo/ahvenanmaa/areas/20m/Getaberget_20m_1_3_area.gpkg"
name = "Getaberget_20m_1_3_area"
###Output
_____no_output_____
###Markdown
The defaults in the next cell are only applied if no parameters are given to the above cell.
###Code
if len(trace_data) == 0:
# Set defaults
# Trace and target area data available on GitHub
trace_data = "https://raw.githubusercontent.com/nialov/fractopo/master/tests/sample_data/KB11/KB11_traces.geojson"
area_data = "https://raw.githubusercontent.com/nialov/fractopo/master/tests/sample_data/KB11/KB11_area.geojson"
# Name the dataset
name = "KB11"
# Use geopandas to load data from urls
traces = gpd.read_file(trace_data)
area = gpd.read_file(area_data)
area.total_bounds
def focus_plot_to_bounds(ax, total_bounds):
xmin, ymin, xmax, ymax = total_bounds
extend_x = (xmax - xmin) * 0.05
extend_y = (ymax - ymin) * 0.05
ax.set_xlim(xmin - extend_x, xmax + extend_x)
ax.set_ylim(ymin - extend_y, ymax + extend_y)
return ax
###Output
_____no_output_____
###Markdown
Visualizing trace map data
###Code
fix, ax = plt.subplots(figsize=(9, 9))
traces.plot(ax=ax, color="blue")
area.boundary.plot(ax=ax, color="red")
ax = focus_plot_to_bounds(ax, area.total_bounds)
###Output
_____no_output_____
###Markdown
Create Network
###Code
# Create Network and automatically determine branches and nodes
network = Network(
traces, area, name=name, determine_branches_nodes=True, snap_threshold=0.001
)
###Output
_____no_output_____
###Markdown
Visualizing branches and nodes
###Code
from fractopo.general import CC_branch, CI_branch, II_branch, X_node, Y_node, I_node
# Function to determine color for each branch and node type
def assign_colors(feature_type: str):
if feature_type in (CC_branch, X_node):
return "green"
if feature_type in (CI_branch, Y_node):
return "blue"
if feature_type in (II_branch, I_node):
return "black"
return "red"
###Output
_____no_output_____
###Markdown
| Branch or Node Type | Color |
|---------------------|-------|
| C - C, X            | Green |
| C - I, Y            | Blue  |
| I - I, I            | Black |
| Other               | Red   |

Branches
###Code
fix, ax = plt.subplots(figsize=(9, 9))
network.branch_gdf.plot(
colors=[assign_colors(bt) for bt in network.branch_types], ax=ax
)
area.boundary.plot(ax=ax, color="red")
###Output
_____no_output_____
###Markdown
Nodes
###Code
fix, ax = plt.subplots(figsize=(9, 9))
# Traces
network.trace_gdf.plot(ax=ax, linewidth=0.5)
# Nodes
network.node_gdf.plot(
c=[assign_colors(bt) for bt in network.node_types], ax=ax, markersize=10
)
area.boundary.plot(ax=ax, color="red")
ax = focus_plot_to_bounds(ax, area.total_bounds)
###Output
_____no_output_____
###Markdown
Rose plots
###Code
# Plot azimuth rose plot of fracture traces
azimuth_bin_dict, fig, ax = network.plot_trace_azimuth()
###Output
_____no_output_____
###Markdown
Length distributions Trace length distribution
###Code
# Fit for traces
fit_traces = network.trace_lengths_powerlaw_fit()
# Plot length distribution fits (powerlaw, exponential and lognormal) of fracture traces
fit, fig, ax = network.plot_trace_lengths()
# Fit properties
print(f"Automatically determined powerlaw cut-off: {fit_traces.xmin}")
print(f"Powerlaw exponent: {fit_traces.alpha - 1}")
print(
f"Proportion of data cut off by cut off: {network.trace_lengths_cut_off_proportion()}"
)
print(
f"Compare powerlaw fit to lognormal: R, p = {fit_traces.distribution_compare('power_law', 'lognormal')}"
)
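# Note (added): distribution_compare from the powerlaw package returns (R, p);
# R > 0 favors the first candidate ('power_law'), R < 0 the second ('lognormal'),
# and p indicates how significant that preference is. The same reading applies to
# the branch-length comparison further below.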
###Output
Automatically determined powerlaw cut-off: 1.2244547113982434
Powerlaw exponent: 1.6662419280649168
Proportion of data cut off by cut off: 0.7063492063492064
Compare powerlaw fit to lognormal: R, p = (-0.8403166756843783, 0.5373503631362355)
###Markdown
Branch length distribution
###Code
# Fit for branches
fit_branches = network.branch_lengths_powerlaw_fit()
# Plot length distribution fits (powerlaw, exponential and lognormal) of fracture branches
fit, fig, ax = network.plot_branch_lengths()
# Fit properties
print(f"Automatically determined powerlaw cut-off: {fit_branches.xmin}")
print(f"Powerlaw exponent: {fit_branches.alpha - 1}")
print(
f"Proportion of data cut off by cut off: {network.branch_lengths_cut_off_proportion()}"
)
print(
f"Compare powerlaw fit to lognormal: R, p = {fit_branches.distribution_compare('power_law', 'lognormal')}"
)
###Output
Automatically determined powerlaw cut-off: 1.172237842499404
Powerlaw exponent: 2.5789798572615226
Proportion of data cut off by cut off: 0.8278688524590164
Compare powerlaw fit to lognormal: R, p = (-1.219409596170895, 0.28340166873354256)
###Markdown
Crosscutting and abutting relationships
###Code
# Sets are defaults
print(f"Azimuth set names: {network.azimuth_set_names}")
print(f"Azimuth set ranges: {network.azimuth_set_ranges}")
# Plot crosscutting and abutting relationships between azimuth sets
figs, fig_axes = network.plot_azimuth_crosscut_abutting_relationships()
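# --- Hedged note (an assumption about the fractopo API, not from the original
# notebook): if the Network constructor in your fractopo version accepts custom
# set definitions, the default sets printed above could be overridden when the
# network is built, roughly like this:
# network_custom = Network(
#     traces, area, name=name, determine_branches_nodes=True, snap_threshold=0.001,
#     azimuth_set_names=("set A", "set B"),
#     azimuth_set_ranges=((0, 90), (90, 180)),
# )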
###Output
_____no_output_____
###Markdown
Node and branch proportions
###Code
network.node_counts
# Plot ternary XYI-node proportion plot
fig, ax, tax = network.plot_xyi()
network.branch_counts
# Plot ternary branch (C-C, C-I, I-I) proportion plot
fig, ax, tax = network.plot_branch()
###Output
_____no_output_____
###Markdown
General topological and geometric parameters
###Code
network.parameters
###Output
_____no_output_____
###Markdown
Contour grids for target area
###Code
sampled_grid = contour_grid.run_grid_sampling(
traces=network.trace_gdf,
branches=network.branch_gdf,
nodes=network.node_gdf,
cell_width=1.0,
snap_threshold=0.01,
)
sampled_grid.columns
# From https://geopandas.org/mapping.html
from mpl_toolkits.axes_grid1 import make_axes_locatable
def plot_contour(column: str):
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
sampled_grid.plot(
column=column, legend=True, cax=cax, ax=ax, legend_kwds={"label": column}
)
plot_contour("Fracture Intensity P21")
plot_contour("Connections per Branch")
###Output
_____no_output_____ |
Applied Text Mining in Python/Module+2+(Python+3).ipynb | ###Markdown
Module 2 (Python 3) Basic NLP Tasks with NLTK
###Code
import nltk
nltk.download("book")
from nltk.book import *
###Output
[nltk_data] Downloading collection 'book'
[nltk_data] |
[nltk_data] | Downloading package abc to /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/abc.zip.
[nltk_data] | Downloading package brown to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/brown.zip.
[nltk_data] | Downloading package chat80 to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/chat80.zip.
[nltk_data] | Downloading package cmudict to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/cmudict.zip.
[nltk_data] | Downloading package conll2000 to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/conll2000.zip.
[nltk_data] | Downloading package conll2002 to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/conll2002.zip.
[nltk_data] | Downloading package dependency_treebank to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/dependency_treebank.zip.
[nltk_data] | Downloading package genesis to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/genesis.zip.
[nltk_data] | Downloading package gutenberg to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/gutenberg.zip.
[nltk_data] | Downloading package ieer to /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/ieer.zip.
[nltk_data] | Downloading package inaugural to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/inaugural.zip.
[nltk_data] | Downloading package movie_reviews to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/movie_reviews.zip.
[nltk_data] | Downloading package nps_chat to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/nps_chat.zip.
[nltk_data] | Downloading package names to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/names.zip.
[nltk_data] | Downloading package ppattach to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/ppattach.zip.
[nltk_data] | Downloading package reuters to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Downloading package senseval to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/senseval.zip.
[nltk_data] | Downloading package state_union to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/state_union.zip.
[nltk_data] | Downloading package stopwords to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/stopwords.zip.
[nltk_data] | Downloading package swadesh to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/swadesh.zip.
[nltk_data] | Downloading package timit to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/timit.zip.
[nltk_data] | Downloading package treebank to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/treebank.zip.
[nltk_data] | Downloading package toolbox to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/toolbox.zip.
[nltk_data] | Downloading package udhr to /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/udhr.zip.
[nltk_data] | Downloading package udhr2 to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/udhr2.zip.
[nltk_data] | Downloading package unicode_samples to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/unicode_samples.zip.
[nltk_data] | Downloading package webtext to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/webtext.zip.
[nltk_data] | Downloading package wordnet to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/wordnet.zip.
[nltk_data] | Downloading package wordnet_ic to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/wordnet_ic.zip.
[nltk_data] | Downloading package words to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/words.zip.
[nltk_data] | Downloading package maxent_treebank_pos_tagger to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping taggers/maxent_treebank_pos_tagger.zip.
[nltk_data] | Downloading package maxent_ne_chunker to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping chunkers/maxent_ne_chunker.zip.
[nltk_data] | Downloading package universal_tagset to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping taggers/universal_tagset.zip.
[nltk_data] | Downloading package punkt to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping tokenizers/punkt.zip.
[nltk_data] | Downloading package book_grammars to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping grammars/book_grammars.zip.
[nltk_data] | Downloading package city_database to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/city_database.zip.
[nltk_data] | Downloading package tagsets to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping help/tagsets.zip.
[nltk_data] | Downloading package panlex_swadesh to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Downloading package averaged_perceptron_tagger to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] |
[nltk_data] Done downloading collection book
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
###Markdown
Counting vocabulary of words
###Code
text7
sent7
len(sent7)
len(text7)
len(set(text7))
list(set(text7))[:10]
###Output
_____no_output_____
###Markdown
Frequency of words
###Code
dist = FreqDist(text7)
len(dist)
vocab1 = dist.keys()
#vocab1[:10]
# In Python 3 dict.keys() returns an iterable view instead of a list
list(vocab1)[:10]
dist['four']
freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100]
freqwords
###Output
_____no_output_____
###Markdown
Normalization and stemming
###Code
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
udhr = nltk.corpus.udhr.words('English-Latin1')
udhr[:20]
[porter.stem(t) for t in udhr[:20]] # Porter stemming shown for comparison (stemming, not true lemmatization)
WNlemma = nltk.WordNetLemmatizer()
[WNlemma.lemmatize(t) for t in udhr[:20]]
###Output
_____no_output_____
###Markdown
Tokenization
###Code
text11 = "Children shouldn't drink a sugary drink before bed."
text11.split(' ')
nltk.word_tokenize(text11)
text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text12)
len(sentences)
sentences
###Output
_____no_output_____
###Markdown
Advanced NLP Tasks with NLTK POS tagging
###Code
nltk.help.upenn_tagset('MD')
text13 = nltk.word_tokenize(text11)
nltk.pos_tag(text13)
text14 = nltk.word_tokenize("Visiting aunts can be a nuisance")
nltk.pos_tag(text14)
# Parsing sentence structure
text15 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text15)
for tree in trees:
print(tree)
text16 = nltk.word_tokenize("I saw the man with a telescope")
grammar1 = nltk.data.load('mygrammar.cfg')
grammar1
parser = nltk.ChartParser(grammar1)
trees = parser.parse_all(text16)
for tree in trees:
print(tree)
from nltk.corpus import treebank
text17 = treebank.parsed_sents('wsj_0001.mrg')[0]
print(text17)
###Output
(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(, ,)
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))
(VP
(MD will)
(VP
(VB join)
(NP (DT the) (NN board))
(PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director)))
(NP-TMP (NNP Nov.) (CD 29))))
(. .))
###Markdown
POS tagging and parsing ambiguity
###Code
text18 = nltk.word_tokenize("The old man the boat")
nltk.pos_tag(text18)
text19 = nltk.word_tokenize("Colorless green ideas sleep furiously")
nltk.pos_tag(text19)
###Output
_____no_output_____
###Markdown
Module 2 (Python 3) Basic NLP Tasks with NLTK
###Code
import nltk
nltk.download('popular')
nltk.download('nps_chat')
nltk.download('webtext')
from nltk.book import *
###Output
[nltk_data] Downloading collection 'popular'
[nltk_data] |
[nltk_data] | Downloading package cmudict to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/cmudict.zip.
[nltk_data] | Downloading package gazetteers to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/gazetteers.zip.
[nltk_data] | Downloading package genesis to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/genesis.zip.
[nltk_data] | Downloading package gutenberg to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/gutenberg.zip.
[nltk_data] | Downloading package inaugural to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/inaugural.zip.
[nltk_data] | Downloading package movie_reviews to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/movie_reviews.zip.
[nltk_data] | Downloading package names to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/names.zip.
[nltk_data] | Downloading package shakespeare to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/shakespeare.zip.
[nltk_data] | Downloading package stopwords to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/stopwords.zip.
[nltk_data] | Downloading package treebank to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/treebank.zip.
[nltk_data] | Downloading package twitter_samples to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/twitter_samples.zip.
[nltk_data] | Downloading package omw to /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/omw.zip.
[nltk_data] | Downloading package wordnet to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/wordnet.zip.
[nltk_data] | Downloading package wordnet_ic to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/wordnet_ic.zip.
[nltk_data] | Downloading package words to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping corpora/words.zip.
[nltk_data] | Downloading package maxent_ne_chunker to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping chunkers/maxent_ne_chunker.zip.
[nltk_data] | Downloading package punkt to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping tokenizers/punkt.zip.
[nltk_data] | Downloading package snowball_data to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Downloading package averaged_perceptron_tagger to
[nltk_data] | /home/jovyan/nltk_data...
[nltk_data] | Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] |
[nltk_data] Done downloading collection popular
[nltk_data] Downloading package nps_chat to /home/jovyan/nltk_data...
[nltk_data] Unzipping corpora/nps_chat.zip.
[nltk_data] Downloading package webtext to /home/jovyan/nltk_data...
[nltk_data] Unzipping corpora/webtext.zip.
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
###Markdown
Counting vocabulary of words
###Code
text7
sent7
len(sent7)
len(text7)
len(set(text7))
list(set(text7))[:10]
###Output
_____no_output_____
###Markdown
Frequency of words
###Code
dist = FreqDist(text7)
len(dist)
vocab1 = dist.keys()
#vocab1[:10]
# In Python 3 dict.keys() returns an iterable view instead of a list
list(vocab1)[:10]
dist['four']
freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100]
freqwords
###Output
_____no_output_____
###Markdown
Normalization and stemming
###Code
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
nltk.download('udhr')
udhr = nltk.corpus.udhr.words('English-Latin1')
udhr[:20]
[porter.stem(t) for t in udhr[:20]] # Porter stemming shown for comparison (stemming, not true lemmatization)
WNlemma = nltk.WordNetLemmatizer()
[WNlemma.lemmatize(t) for t in udhr[:20]]
###Output
_____no_output_____
###Markdown
Tokenization
###Code
text11 = "Children shouldn't drink a sugary drink before bed."
text11.split(' ')
nltk.word_tokenize(text11)
text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text12)
len(sentences)
sentences
###Output
_____no_output_____
###Markdown
Advanced NLP Tasks with NLTK POS tagging
###Code
nltk.download('tagsets')
nltk.help.upenn_tagset('MD')
text13 = nltk.word_tokenize(text11)
nltk.pos_tag(text13)
text14 = nltk.word_tokenize("Visiting aunts can be a nuisance")
nltk.pos_tag(text14)
# Parsing sentence structure
text15 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text15)
for tree in trees:
print(tree)
text16 = nltk.word_tokenize("I saw the man with a telescope")
grammar1 = nltk.data.load('mygrammar.cfg')
grammar1
parser = nltk.ChartParser(grammar1)
trees = parser.parse_all(text16)
for tree in trees:
print(tree)
from nltk.corpus import treebank
text17 = treebank.parsed_sents('wsj_0001.mrg')[0]
print(text17)
###Output
(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(, ,)
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))
(VP
(MD will)
(VP
(VB join)
(NP (DT the) (NN board))
(PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director)))
(NP-TMP (NNP Nov.) (CD 29))))
(. .))
###Markdown
POS tagging and parsing ambiguity
###Code
text18 = nltk.word_tokenize("The old man the boat")
nltk.pos_tag(text18)
text19 = nltk.word_tokenize("Colorless green ideas sleep furiously")
nltk.pos_tag(text19)
###Output
_____no_output_____
###Markdown
Module 2 (Python 3) Basic NLP Tasks with NLTK
###Code
import nltk
from nltk.book import *
###Output
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
###Markdown
Counting vocabulary of words
###Code
text7
sent7
len(sent7)
len(text7)
len(set(text7))
list(set(text7))[:10]
###Output
_____no_output_____
###Markdown
Frequency of words
###Code
dist = FreqDist(text7)
len(dist)
vocab1 = dist.keys()
#vocab1[:10]
# In Python 3 dict.keys() returns an iterable view instead of a list
list(vocab1)[:10]
dist['four']
freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100]
freqwords
###Output
_____no_output_____
###Markdown
Normalization and stemming
###Code
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
udhr = nltk.corpus.udhr.words('English-Latin1')
udhr[:20]
[porter.stem(t) for t in udhr[:20]] # Porter stemming shown for comparison (stemming, not true lemmatization)
WNlemma = nltk.WordNetLemmatizer()
[WNlemma.lemmatize(t) for t in udhr[:20]]
###Output
_____no_output_____
###Markdown
Tokenization
###Code
text11 = "Children shouldn't drink a sugary drink before bed."
text11.split(' ')
nltk.word_tokenize(text11)
text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text12)
len(sentences)
sentences
###Output
_____no_output_____
###Markdown
Advanced NLP Tasks with NLTK POS tagging
###Code
nltk.help.upenn_tagset('MD')
text13 = nltk.word_tokenize(text11)
nltk.pos_tag(text13)
text14 = nltk.word_tokenize("Visiting aunts can be a nuisance")
nltk.pos_tag(text14)
# Parsing sentence structure
text15 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text15)
for tree in trees:
print(tree)
text16 = nltk.word_tokenize("I saw the man with a telescope")
grammar1 = nltk.data.load('mygrammar.cfg')
grammar1
parser = nltk.ChartParser(grammar1)
trees = parser.parse_all(text16)
for tree in trees:
print(tree)
from nltk.corpus import treebank
text17 = treebank.parsed_sents('wsj_0001.mrg')[0]
print(text17)
###Output
(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(, ,)
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))
(VP
(MD will)
(VP
(VB join)
(NP (DT the) (NN board))
(PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director)))
(NP-TMP (NNP Nov.) (CD 29))))
(. .))
###Markdown
POS tagging and parsing ambiguity
###Code
text18 = nltk.word_tokenize("The old man the boat")
nltk.pos_tag(text18)
text19 = nltk.word_tokenize("Colorless green ideas sleep furiously")
nltk.pos_tag(text19)
###Output
_____no_output_____ |
Notebook/Day2.ipynb | ###Markdown
**Note: Since the 'nbconvert' module cannot convert notebooks containing Chinese characters, notes are going to be written in English from now on.**

Step 1: Preprocess the data
- import modules
- import the dataset
- replace missing data with the mean or max/min
- split the dataset
- feature scaling

(A hedged sketch of the imputation and scaling steps, which this particular dataset does not actually need, is included at the end of the next code cell.)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# show figures in current window
%matplotlib inline
dataset = pd.read_csv('../datasets/studentscores.csv')
X = dataset.iloc[:, :1].values
Y = dataset.iloc[:, 1].values
print("Original data X: {}, Y:{}".format(X.shape, Y.shape))
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25, random_state = 0)
print(X_train.shape)
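# --- Hedged sketch (not in the original notebook): the markdown above lists
# "replace missing data" and "feature scaling" as preprocessing steps. The
# studentscores data needs neither, but with a recent scikit-learn they could
# be applied roughly as follows; the *_scaled names are illustrative only and
# are not used by the rest of the notebook.
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
imputer = SimpleImputer(strategy='mean')   # fill missing values with the column mean
scaler = StandardScaler()                  # standardise features to zero mean, unit variance
X_train_scaled = scaler.fit_transform(imputer.fit_transform(X_train))
X_test_scaled = scaler.transform(imputer.transform(X_test))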
###Output
Original data X: (25, 1), Y:(25,)
(18, 1)
###Markdown
Step 2: Fitting the score(Y)-hour(X) relation with Linear Regression
- Y: student score
- X: learning hours for each student
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model = model.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Step 3: Predict the result
###Code
Y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
Step 4: Visualization
- Visualising the training results
###Code
plt.scatter(X_train, Y_train, color='red')
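# The fitted regression line from Step 2 can be overlaid for comparison
# (added for symmetry with the test-set plot below; not in the original cell):
plt.plot(X_train, model.predict(X_train), color='blue')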
###Output
_____no_output_____
###Markdown
- Visualizing the test results
###Code
plt.scatter(X_test, Y_test, color='red')
plt.plot(X_test, model.predict(X_test), color='blue')
###Output
_____no_output_____ |