# Intermediate Lesson on Geospatial Data
## Data, Information, Knowledge and Wisdom
<strong>Lesson Developers:</strong> Jayakrishnan Ajayakumar, Shana Crosson, Mohsen Ahmadkhani
#### Part 1 of 5
```
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
# sys.path.append('supplementary')
import hourofci
try:
import os
os.chdir('supplementary')
except:
pass
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
```
## Reminder
<a href="#/slide-2-0" class="navigate-right" style="background-color:blue;color:white;padding:8px;margin:2px;font-weight:bold;">Continue with the lesson</a>
<br>
</br>
<font size="+1">
By continuing with this lesson you are granting your permission to take part in this research study for the Hour of Cyberinfrastructure: Developing Cyber Literacy for GIScience project. In this study, you will be learning about cyberinfrastructure and related concepts using a web-based platform that will take approximately one hour per lesson. Participation in this study is voluntary.
Participants in this research must be 18 years or older. If you are under the age of 18 then please exit this webpage or navigate to another website such as the Hour of Code at https://hourofcode.com, which is designed for K-12 students.
If you are not interested in participating please exit the browser or navigate to this website: http://www.umn.edu. Your participation is voluntary and you are free to stop the lesson at any time.
For the full description please navigate to this website: <a href="../../gateway-lesson/gateway/gateway-1.ipynb">Gateway Lesson Research Study Permission</a>.
</font>
## Do you believe in Global Warming???
What if I ask you this question and throw some numbers at you?!
```
from ipywidgets import Button, HBox, VBox,widgets,Layout
from IPython.display import display
import pandas as pd
table1 = pd.read_csv('databases/antartica_mass.csv').sample(frac = 1)
table1['0'] = pd.to_datetime(table1['0'])
table2 = pd.read_csv('databases/global_temperature.csv').sample(frac = 1)
table2['0'] = pd.to_datetime(table2['0'],format='%Y')
table3 = pd.read_csv('databases/carbon_dioxide.csv').sample(frac = 1)
table3['2'] = pd.to_datetime(table3['2'])
table1_disp = widgets.Output()
table2_disp = widgets.Output()
table3_disp = widgets.Output()
with table1_disp:
display(table1)
with table2_disp:
display(table2)
with table3_disp:
display(table3)
out=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])
out
```
These are just symbols and numbers (of course, we can identify the dates because we have seen that pattern before), and on their own they don't convey anything.
This is what we essentially call **Data (or raw Data)**.
## What is Data?
>**Data is a collection of facts** in a **raw or unorganized form** such as **numbers or characters**.
Without **context**, data has no value!
Now what if we are provided with **information** about **what** these symbols represent, **who** collected the data, **where** this data was collected, and **when** it was collected?
## What is Information (Data+Context)?
>**Information** is a **collection of data** that is **arranged and ordered in a consistent way**. Data in the form of information becomes **more useful because storage and retrieval are easy**.
For our sample datasets, what if we know the answers to the **"what, who, where, and when"** questions? For example, if we are told that these datasets represent the change in Antarctic ice mass in gigatonnes, the temperature anomaly across the globe in degrees Celsius, and the carbon dioxide content of the atmosphere in parts per million, we can try to deduce patterns from the data.
```
table1.columns = ["Time", "Antartic_Mass(Gt)"]
table2.columns = ["Time", "Temperature_Anomaly(C)"]
table3.columns = ["Time", "Carbon_Dioxide(PPM)"]
table1_disp = widgets.Output()
table2_disp = widgets.Output()
table3_disp = widgets.Output()
with table1_disp:
display(table1)
with table2_disp:
display(table2)
with table3_disp:
display(table3)
out=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])
out
```
We can do more **processing** on the data and convert it into structured forms. For example, we can **sort** these datasets to look for temporal changes.
```
table1_disp = widgets.Output()
table2_disp = widgets.Output()
table3_disp = widgets.Output()
with table1_disp:
display(table1.sort_values(by='Time'))
with table2_disp:
display(table2.sort_values(by='Time'))
with table3_disp:
display(table3.sort_values(by='Time'))
out=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])
out
```
After sorting, we can see that, over time, there is a depletion in Antarctic ice mass and an increase in both the temperature anomaly and the carbon dioxide content of the atmosphere.
We can also **visualize** these datasets (a picture is worth a thousand words!) to bolster our arguments.
```
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1,3)
fig.set_figheight(8)
fig.set_figwidth(20)
table1.sort_values(by='Time').plot(x='Time',y='Antartic_Mass(Gt)',ax=axes[0]);
table2.sort_values(by='Time').plot(x='Time',y='Temperature_Anomaly(C)',ax=axes[1]);
table3.sort_values(by='Time').plot(x='Time',y='Carbon_Dioxide(PPM)',ax=axes[2]);
plt.show()
```
By asking relevant questions about ‘who’, ‘what’, ‘when’, ‘where’, etc., we can derive valuable information from the data and make it more useful for us.
## What is Knowledge? (Patterns from Information)
>**Knowledge** is the **appropriate collection of information**, such that its **intent is to be useful**.
Knowledge deals with the question of **"How"**.
**"How"** is the **information, derived from the collected data, relevant to our goals?**
**"How"** are the **pieces of this information connected to other pieces** to add more meaning and value?
So how do we find the connections between our pieces of information? For example, we now have the information that, over time, there is a decrease in Antarctic ice mass and a corresponding increase in the temperature anomaly and the carbon dioxide content of the atmosphere. Can we prove that there is a relationship? This is where simulation and model-building skills come into play. Machine learning (which has been a buzzword for a long time) is used to answer such questions from large sets of data.
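As a small illustration (a sketch that is not part of the original lesson code, and which assumes the `table1`, `table2`, and `table3` DataFrames with the renamed columns from the cells above), we can start quantifying these relationships by resampling each series to yearly means and computing pairwise correlations:
```
# Minimal sketch: pairwise correlation between the three climate series.
# Assumes table1, table2 and table3 from the earlier cells, with the renamed columns.
import pandas as pd

def annual_mean(df, value_col):
    # Resample a table to yearly means so the three series share a common time index.
    return df.set_index('Time')[value_col].resample('Y').mean()

combined = pd.concat({
    'Antarctic_Mass': annual_mean(table1, 'Antartic_Mass(Gt)'),
    'Temperature_Anomaly': annual_mean(table2, 'Temperature_Anomaly(C)'),
    'Carbon_Dioxide': annual_mean(table3, 'Carbon_Dioxide(PPM)'),
}, axis=1).dropna()

# Pearson correlation for each pair; values close to +1 or -1 suggest a strong linear relationship.
print(combined.corr())
```
Correlation alone does not establish causation, but it is a first step toward the kind of simulation and model building described above.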
## What is Wisdom? (Acting upon Knowledge)
>**Wisdom** is the **ability to select the best way to reach the desired outcome based on knowledge**.
So it is a very subjective concept. In our example, we now have the knowledge (from developing climate models) that increases in atmospheric carbon dioxide are responsible for about two-thirds of the total energy imbalance that is causing Earth's temperature to rise, and we also know that rising temperatures lead to the melting of ice mass, which is a big threat to Earth's biodiversity. So what are we going to do about that? What is the **best way** to do it? **Wisdom** here is **acting upon this knowledge** regarding carbon emissions and finding ways to reduce them.
## The DIKW pyramid
We can essentially represent these concepts in a Pyramid with Data at the bottom and Wisdom at the top.

So where does a **database** fit into this pyramid (or model)?
We are going to look at that in the upcoming chapters.
#### Resources
https://www.ontotext.com/knowledgehub/fundamentals/dikw-pyramid/
https://www.systems-thinking.org/dikw/dikw.htm
https://www.csestack.org/dikw-pyramid-model-difference-between-data-information/
https://developer.ibm.com/articles/ba-data-becomes-knowledge-1/
https://www.certguidance.com/explaining-dikw-hierarchy/
https://www.spreadingscience.com/our-approach/diffusion-of-innovations-in-a-community/1-the-dikw-model-of-innovation/
https://climate.nasa.gov/vital-signs/ice-sheets/
https://climate.nasa.gov/vital-signs/global-temperature/
https://climate.nasa.gov/vital-signs/carbon-dioxide/
### Click on the link below to move on!
<br>
<font size="+1"><a style="background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;" href="gd-2.ipynb">Click here to go to the next notebook.</a></font>
| github_jupyter |
<a href="https://colab.research.google.com/github/tfrizza/DALL-E-tf/blob/main/tfFlowers_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%pip install -q tensorflow_addons
!git clone https://github.com/tfrizza/DALL-E-tf.git
%cd DALL-E-tf
import tensorflow as tf
from tensorflow.keras import Model, mixed_precision
from tensorflow.keras.losses import Loss, MeanSquaredError, MeanAbsoluteError, MSE, MAE
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import PolynomialDecay
from tensorflow.keras.callbacks import Callback
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
from tensorflow_probability import layers as tfpl
import tensorflow_datasets as tfds
from tensorflow_addons.optimizers import LAMB, AdamW
from dall_e_tf.encoder import dvae_encoder
from dall_e_tf.decoder import dvae_decoder
from dall_e_tf.vae import dVAE
from dall_e_tf.losses import LatentLoss
from dall_e_tf.utils import plot_reconstructions
import numpy as np
import attr
from functools import partial
mixed_precision.set_global_policy('float32')
AUTOTUNE = tf.data.AUTOTUNE
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
NUM_DEVICES = strategy.num_replicas_in_sync
print("REPLICAS: ", NUM_DEVICES)
def crop(image):
y_nonzero, x_nonzero, _ = tf.experimental.numpy.nonzero(image)
return image[tf.reduce_min(y_nonzero):tf.reduce_max(y_nonzero), tf.reduce_min(x_nonzero):tf.reduce_max(x_nonzero)]
def preprocess(data, h=128, w=128):
img = crop(data['image'])
img = tf.image.resize(img, size=(h,w), antialias=False)
img /= 255
return img
GLOBAL_BATCH_SIZE = 16 * NUM_DEVICES
train_dataset = tfds.load('tf_flowers',
split='train',
shuffle_files=True,
try_gcs=True
)
train_dataset = train_dataset.map(preprocess, num_parallel_calls=AUTOTUNE)\
.batch(GLOBAL_BATCH_SIZE)\
.prefetch(buffer_size=AUTOTUNE)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
train_dataset
class LatentLoss(Loss):
def call(self, dummy_ground_truth, outputs):
del dummy_ground_truth
z_e, z_q = tf.split(outputs, 2, axis=-1)
vq_loss = tf.reduce_mean(tf.square(tf.stop_gradient(z_e) - z_q))
commit_loss = tf.reduce_mean(tf.square(z_e - tf.stop_gradient(z_q)))
return vq_loss + 1.0 * commit_loss
from tensorflow.python.keras import backend as K
from tensorflow.python.framework import ops
# class TemperatureScheduler(Callback):
# def __init__(self, schedule, layer_name='gumbel-softmax', verbose=0):
# super(TemperatureScheduler, self).__init__()
# self.schedule = schedule
# self.layer_name = layer_name
# self.verbose = verbose
# def on_epoch_begin(self, epoch, logs=None):
# layer = self.model.get_layer(self.layer_name)
# if not hasattr(layer, '_most_recently_built_distribution'):
# raise ValueError('Layer must have a "_most_recently_built_distribution" attribute.')
# distrib = layer._most_recently_built_distribution
# if not hasattr(distrib, 'temperature'):
# raise ValueError('Distribution must have a "temperature" attribute.')
# # T = float(K.get_value(distrib.temperature))
# # T = distrib.temperature
# T = self.schedule(epoch)
# if not isinstance(T, (ops.Tensor, float, np.float32, np.float64)):
# raise ValueError('The output of the "schedule" function '
# 'should be float.')
# if isinstance(T, ops.Tensor) and not T.dtype.is_floating:
# raise ValueError('The dtype of Tensor should be float')
# K.set_value(distrib.temperature, K.get_value(T))
# if self.verbose > 0:
# print('\nEpoch %05d: TemperatureScheduler reducing temperature '
# 'rate to %s.' % (epoch + 1, T))
# def on_epoch_end(self, epoch, logs=None):
# logs = logs or {}
# T = self.model.get_layer(self.layer_name)._most_recently_built_distribution.temperature
# logs['temperature'] = K.get_value(T)
class TemperatureScheduler(Callback):
def __init__(self, schedule, verbose=False):
super(TemperatureScheduler, self).__init__()
self.schedule = schedule
self.verbose = verbose
def on_epoch_begin(self, epoch, logs=None):
temperature = self.schedule(epoch)
self.model.temperature = temperature # update the dVAE temperature so the schedule takes effect
if self.verbose:
print(f'Setting temperature to {self.model.temperature}')
vocab_size = 8192//2
n_hid = 256//2
class dVAE(Model):
def __init__(self, enc, dec, initial_temp=1.0, temp_decay=0.9):
super(dVAE, self).__init__()
self.enc = enc
self.dec = dec
self.temperature = initial_temp
self.temp_decay = temp_decay
self.gumbel_softmax = tfpl.DistributionLambda(
lambda x: tfd.RelaxedOneHotCategorical(temperature=x[0], logits=x[1]) # Gumbel-softmax
, name='gumbel-softmax'
)
def call(self, inputs, training=False):
x, temperature = inputs
z_e = self.enc(x)
z_q = self.gumbel_softmax([temperature, z_e])
z_hard = tf.math.argmax(z_e, axis=-1) # non-differentiable
z_hard = tf.one_hot(z_hard, enc.output.shape[-1], dtype=z_q.dtype)
z = z_q + tf.stop_gradient(z_hard - z_q) # straight-through Gumbel-softmax
x_rec = self.dec(z)
latents = tf.stack([z_hard, z_q], -1, name='latent')
return x_rec, latents, temperature
def train_step(self, x):
with tf.GradientTape() as tape:
x_pred, latents, T = self([x, self.temperature], training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(x, x_pred, regularization_losses=self.losses)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(x, x_pred)
results = {m.name: m.result() for m in self.metrics}
results['temperature'] = self.temperature
return results
with strategy.scope():
enc = dvae_encoder(group_count=2, n_hid=n_hid, n_blk_per_group=2, input_channels=3, vocab_size=vocab_size, activation='swish')
dec = dvae_decoder(group_count=2, n_init=n_hid//2, n_hid=n_hid, n_blk_per_group=2, output_channels=3, vocab_size=vocab_size, activation='swish')
vae = dVAE(enc, dec)
epochs = 200
temp_schedule = PolynomialDecay(0.9, epochs, 1/16, 8) # quadratic decay
lr_schedule = PolynomialDecay(1e-3, epochs, 1e-4, 0.1) # sqrt decay
optimizer = AdamW(weight_decay=1e-4, learning_rate=lr_schedule)
def psnr(x1, x2):
return tf.image.psnr(x1, x2, max_val=1.0)
vae.compile(loss=['mse', None], optimizer=optimizer, metrics=[psnr])
# vae.build(input_shape=(128,128,128,3))
# vae.summary(line_length=200)
vae.fit(train_dataset,
# validation_data=x_test,
epochs=1000,
callbacks=[TemperatureScheduler(temp_schedule, True)]
)
train_batch = next(iter(train_dataset))
plot_reconstructions(vae(train_batch[:10])[0], train_batch[:10])
```
| github_jupyter |
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.append('.')
import utils
def f(x):
return x * np.cos(np.pi*x)
utils.set_fig_size(mpl, (4.5, 2.5))
x = np.arange(-1.0, 2.0, 0.1)
fig = plt.figure()
subplot = fig.add_subplot(111)
subplot.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0), arrowprops=dict(facecolor='black', shrink=0.05))
subplot.annotate('global minimum', xy=(1.1, -0.9), xytext=(0.6, 0.8), arrowprops=dict(facecolor='black', shrink=0.05))
plt.plot(x, f(x))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
x = np.arange(-2.0, 2.0, 0.1)
fig = plt.figure()
subplt = fig.add_subplot(111)
subplt.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),
arrowprops=dict(facecolor='black', shrink=0.05))
plt.plot(x, x**3)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x, y = np.mgrid[-1:1:31j, -1:1:31j]
z = x**2 - y**2
ax.plot_surface(x, y, z, **{'rstride':1, 'cstride':1, 'cmap':'Greens_r'})
ax.plot([0], [0], [0], 'ro')
ax.view_init(azim=50, elev=20)
plt.xticks([-1, -0.5, 0, 0.5, 1])
plt.yticks([-1, -0.5, 0, 0.5, 1])
ax.set_zticks([-1, -0.5, 0, 0.5, 1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# mini-batch SGD
def sgd(params, lr, batch_size):
for param in params:
param[:] = param - lr * param.grad/batch_size
%config InlineBackend.figure_format='retina'
%matplotlib inline
import mxnet as mx
from mxnet import autograd
from mxnet import gluon
from mxnet import nd
import numpy as np
import random
import sys
sys.path.append('..')
import utils
# Generate the dataset.
num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
X = nd.random_normal(scale=1, shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(scale=1, shape=y.shape)
def init_params():
w = nd.random_normal(scale=1, shape=(num_inputs, 1))
b = nd.zeros(shape=(1,))
params = [w, b]
for param in params:
param.attach_grad()
return params
def linreg(X, w, b):
return nd.dot(X, w) + b
def squared_loss(yhat, y):
return (yhat - y.reshape(yhat.shape))**2/2
def data_iter(batch_size, num_examples, X, y):
idx = list(range(num_examples))
random.shuffle(idx)
for i in range(0, num_examples, batch_size):
j = nd.array(idx[i:min(i+batch_size, num_examples)])
yield X.take(j), y.take(j)
net = linreg
squared_loss = squared_loss
# With mini-batch SGD, when the epoch number exceeds decay_epoch, lr is multiplied by 0.1 at the start of each epoch (decay)
def optimize(batch_size, lr, num_epochs, log_interval, decay_epoch):
w, b = init_params()
y_vals = [squared_loss(net(X, w, b), y).mean().asnumpy()]
print('batch_size', batch_size)
for epoch in range(1, num_epochs+1):
# learning rate decay
if decay_epoch and epoch > decay_epoch:
lr *= 0.1
for batch_i, (features, label) in enumerate(data_iter(batch_size, num_examples, X, y)):
with autograd.record():
output = net(features, w, b)
loss = squared_loss(output, label)
loss.backward()
sgd([w, b], lr, batch_size)
if batch_i*batch_size % log_interval == 0:
y_vals.append(squared_loss(net(X, w, b), y).mean().asnumpy())
print('epoch %d, learning rate %f, loss %.4e' %(epoch, lr, y_vals[-1]))
# For easier printing, reshape the output and convert it to a NumPy array.
print('w', w.reshape((1, -1)).asnumpy(), 'b', b.asscalar(), '\n')
x_vals = np.linspace(0, num_epochs, len(y_vals), endpoint=True)
utils.semilogy(x_vals, y_vals, 'epoch', 'loss')
optimize(batch_size=2, lr=0.2, num_epochs=3, decay_epoch=2, log_interval=10)
```
| github_jupyter |
```
!pip install d2l==0.17.2
# implement several utility functions to facilitate data downloading
import hashlib
import os
import tarfile
import zipfile
import requests
DATA_HUB = dict()
DATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'
# download function to download a dataset
def download(name, cache_dir=os.path.join('..', 'data')):
"""Download a file inserted into DATA_HUB, return the local filename."""
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split('/')[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, 'rb') as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f'Downloading {fname} from {url}...')
r = requests.get(url, stream=True, verify=True)
with open(fname, 'wb') as f:
f.write(r.content)
return fname
# implement two additional utility functions: one is to download and extract a zip or tar file and the other to download all the datasets used in this book from DATA_HUB into the cache directory
def download_extract(name, folder=None):
"""Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == '.zip':
fp = zipfile.ZipFile(fname, 'r')
elif ext in ('.tar', '.gz'):
fp = tarfile.open(fname, 'r')
else:
assert False, 'Only zip/tar files can be extracted.'
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
def download_all():
"""Download all files in the DATA_HUB."""
for name in DATA_HUB:
download(name)
```
Accessing and Reading the Dataset
```
# If pandas is not installed, please uncomment the following line:
!pip install pandas
%matplotlib inline
import numpy as np
import pandas as pd
import tensorflow as tf
from d2l import tensorflow as d2l
# download and cache the Kaggle housing dataset
DATA_HUB['kaggle_house_train'] = (
DATA_URL + 'kaggle_house_pred_train.csv',
'585e9cc93e70b39160e7921475f9bcd7d31219ce')
DATA_HUB['kaggle_house_test'] = (
DATA_URL + 'kaggle_house_pred_test.csv',
'fa19780a7b011d9b009e8bff8e99922a8ee2eb90')
# use pandas to load the two csv files containing training and test data respectively
train_data = pd.read_csv(download('kaggle_house_train'))
test_data = pd.read_csv(download('kaggle_house_test'))
# training dataset includes 1460 examples, 80 features, and 1 label, while the test data contains 1459 examples and 80 features
print(train_data.shape)
print(test_data.shape)
# take a look at the first four and last two features as well as the label (SalePrice)
print(train_data.iloc[0:4, [0, 1, 2, 3, -3, -2, -1]])
all_features = pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))
```
Data Preprocessing
```
# If test data were inaccessible, mean and standard deviation could be
# calculated from training data
numeric_features = all_features.dtypes[all_features.dtypes != 'object'].index
all_features[numeric_features] = all_features[numeric_features].apply(
lambda x: (x - x.mean()) / (x.std()))
# After standardizing the data all means vanish, hence we can set missing
# values to 0
all_features[numeric_features] = all_features[numeric_features].fillna(0)
# `Dummy_na=True` considers "na" (missing value) as a valid feature value, and
# creates an indicator feature for it
all_features = pd.get_dummies(all_features, dummy_na=True)
all_features.shape
# extract the NumPy format from the pandas format and convert it into the tensor
n_train = train_data.shape[0]
train_features = tf.constant(all_features[:n_train].values, dtype=tf.float32)
test_features = tf.constant(all_features[n_train:].values, dtype=tf.float32)
train_labels = tf.constant(
train_data.SalePrice.values.reshape(-1, 1), dtype=tf.float32)
```
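For reference, the standardization applied in the cell above rescales each numeric feature x using its mean μ and standard deviation σ (computed here over the combined train and test features, as noted in the code comments):

$$x' = \frac{x - \mu}{\sigma}$$

After this transformation each feature has zero mean and unit variance, which is why the remaining missing values can simply be filled with 0 (the new mean).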
Training
```
loss = tf.keras.losses.MeanSquaredError()
def get_net():
net = tf.keras.models.Sequential()
net.add(tf.keras.layers.Dense(
1, kernel_regularizer=tf.keras.regularizers.l2(weight_decay)))
return net
def log_rmse(y_true, y_pred):
# To further stabilize the value when the logarithm is taken, set the
# value less than 1 as 1
clipped_preds = tf.clip_by_value(y_pred, 1, float('inf'))
return tf.sqrt(tf.reduce_mean(loss(
tf.math.log(y_true), tf.math.log(clipped_preds))))
def train(net, train_features, train_labels, test_features, test_labels,
num_epochs, learning_rate, weight_decay, batch_size):
train_ls, test_ls = [], []
train_iter = d2l.load_array((train_features, train_labels), batch_size)
# The Adam optimization algorithm is used here
optimizer = tf.keras.optimizers.Adam(learning_rate)
net.compile(loss=loss, optimizer=optimizer)
for epoch in range(num_epochs):
for X, y in train_iter:
with tf.GradientTape() as tape:
y_hat = net(X)
l = loss(y, y_hat)
params = net.trainable_variables
grads = tape.gradient(l, params)
optimizer.apply_gradients(zip(grads, params))
train_ls.append(log_rmse(train_labels, net(train_features)))
if test_labels is not None:
test_ls.append(log_rmse(test_labels, net(test_features)))
return train_ls, test_ls
```
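For reference, the `log_rmse` function above computes the root-mean-squared error between the logarithms of the true and predicted prices (with predictions clipped to at least 1 before taking the logarithm):

$$\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log y_i - \log \hat{y}_i\right)^2}$$

This measures relative rather than absolute error, so cheap and expensive houses contribute comparably to the metric.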
K-Fold Cross-Validation
```
# function that returns the ith fold of the data in a K -fold cross-validation procedure
def get_k_fold_data(k, i, X, y):
assert k > 1
fold_size = X.shape[0] // k
X_train, y_train = None, None
for j in range(k):
idx = slice(j * fold_size, (j + 1) * fold_size)
X_part, y_part = X[idx, :], y[idx]
if j == i:
X_valid, y_valid = X_part, y_part
elif X_train is None:
X_train, y_train = X_part, y_part
else:
X_train = tf.concat([X_train, X_part], 0)
y_train = tf.concat([y_train, y_part], 0)
return X_train, y_train, X_valid, y_valid
# The training and verification error averages are returned when we train K times in the K -fold cross-validation
def k_fold(k, X_train, y_train, num_epochs, learning_rate, weight_decay,
batch_size):
train_l_sum, valid_l_sum = 0, 0
for i in range(k):
data = get_k_fold_data(k, i, X_train, y_train)
net = get_net()
train_ls, valid_ls = train(net, *data, num_epochs, learning_rate,
weight_decay, batch_size)
train_l_sum += train_ls[-1]
valid_l_sum += valid_ls[-1]
if i == 0:
d2l.plot(list(range(1, num_epochs + 1)), [train_ls, valid_ls],
xlabel='epoch', ylabel='rmse', xlim=[1, num_epochs],
legend=['train', 'valid'], yscale='log')
print(f'fold {i + 1}, train log rmse {float(train_ls[-1]):f}, '
f'valid log rmse {float(valid_ls[-1]):f}')
return train_l_sum / k, valid_l_sum / k
```
Model Selection
```
!pip uninstall -y matplotlib
!pip install --upgrade matplotlib
k, num_epochs, lr, weight_decay, batch_size = 5, 100, 5, 0, 64
train_l, valid_l = k_fold(k, train_features, train_labels, num_epochs, lr,
weight_decay, batch_size)
print(f'{k}-fold validation: avg train log rmse: {float(train_l):f}, '
f'avg valid log rmse: {float(valid_l):f}')
```
| github_jupyter |
```
#Goal: Have customers narrow their travel searches based on temp and precipitation
import pandas as pd
import requests
import gmaps
from config import g_key
weather_data_df=pd.read_csv("data/WeatherPy_Database.csv")
weather_data_df
weather_data_df.dtypes
#configure gmaps to use the appropriate key
gmaps.configure(api_key=g_key)
#Configuring inputs
min_temp=float(input("What is the minimum temperature you would like for your vacation?"))
max_temp=float(input("What is the maximum temperature you would like for your vacation?"))
#Ask if they would like it to rain
rain_check=input("Would you like it to be raining? (yes/no)")
#Do the same for snow
snow_check=input("Would you like it to be snowing? (yes/no)")
#performing the conditionals for rain check and snow check with min and max temps
#if/elif chain covering the remaining rain/snow combinations
if rain_check == "no" and snow_check == "no":
resulting_cities=weather_data_df.loc[(weather_data_df["Max Temp"] <= max_temp) &
(weather_data_df["Max Temp"] >= min_temp) &
(weather_data_df["Rainfall"] == 0) &
(weather_data_df["Snowfall"] == 0)]
elif rain_check == "no" and snow_check == "yes":
resulting_cities=weather_data_df.loc[(weather_data_df["Max Temp"] <= max_temp) &
(weather_data_df["Max Temp"] >= min_temp) &
(weather_data_df["Rainfall"] == 0) &
(weather_data_df["Snowfall"] > 0.0)]
elif rain_check == "yes" and snow_check == "no":
resulting_cities=weather_data_df.loc[(weather_data_df["Max Temp"] <= max_temp) &
(weather_data_df["Max Temp"] >= min_temp) &
(weather_data_df["Rainfall"] > 0.0) &
(weather_data_df["Snowfall"] == 0)]
else:
resulting_cities=weather_data_df.loc[(weather_data_df["Max Temp"] <= max_temp) &
(weather_data_df["Max Temp"] >= min_temp) &
(weather_data_df["Rainfall"] > 0.0) &
(weather_data_df["Snowfall"] > 0.0)]
#checking results
resulting_cities.count()
#Display top ten cities
resulting_cities.head(10)
#drop null values
resulting_cities=resulting_cities.dropna()
resulting_cities
#making new dataframe to set up for markers by copying resulting dataframe and adding hotel name column
hotel_markers_df=resulting_cities[["City", "Country", "Max Temp", "Lat", "Lng"]].copy()
hotel_markers_df["Hotel Name"]=""
hotel_markers_df.head(10)
# Set parameters to search for a hotel.
params = {
"radius": 5000,
"type": "lodging",
"key": g_key
}
# Iterate through the DataFrame.
for index, row in hotel_markers_df.iterrows():
# Get the latitude and longitude.
lat = row["Lat"]
lng = row["Lng"]
params["location"]=f"{lat},{lng}"
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
hotels = requests.get(base_url, params=params).json()
try:
hotel_markers_df.loc[index, "Hotel Name"] = hotels["results"][0]["name"]
except (IndexError):
print("Hotel not found... skipping.")
#checking results
hotel_markers_df.head(10)
#creating the csv file to store in
hotel_data_file="data/WeatherPy_Vacation.csv"
hotel_markers_df.to_csv(hotel_data_file, index_label="City_ID")
vacation_hotel_df=pd.read_csv("data/WeatherPy_Vacation.csv")
vacation_hotel_df.head()
#adding hotel marks to map
info_box_template = """
<dl>
<dt>Hotel Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
<dt>Hotel Name</dt><dd>{Hotel Name} at {Max Temp}</dd>
</dl>
"""
# Store the DataFrame Row.
hotel_info = [info_box_template.format(**row) for index, row in vacation_hotel_df.iterrows()]
locations=vacation_hotel_df[["Lat", "Lng"]]
# Add a heatmap of temperature for the vacation spots.
max_temp = vacation_hotel_df["Max Temp"]
fig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)
marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(marker_layer)
# Call the figure to plot the data.
fig
```
| github_jupyter |
```
!pip install lightgbm
!pip install xgboost
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
import zipfile
archive = zipfile.ZipFile('test.csv.zip', 'r')
test = pd.read_csv(archive.open('test.csv'), sep=";", decimal=",",parse_dates=True)
archive = zipfile.ZipFile('train.csv.zip', 'r')
train = pd.read_csv(archive.open('train.csv'), sep=";", decimal=",",parse_dates=True)
import datetime
test.date = test.date.str.split('-').apply(lambda x: datetime.datetime(int(x[0]),int(x[1]),int(x[2])))
train.date = train.date.str.split('-').apply(lambda x: datetime.datetime(int(x[0]),int(x[1]),int(x[2])))
train['dayofweek'] = train.date.dt.dayofweek
test['dayofweek'] = test.date.dt.dayofweek
train['quarter'] = train.date.dt.quarter
test['quarter'] = test.date.dt.quarter
train['week'] = train.date.dt.week
test['week'] = test.date.dt.week
train['month'] = train.date.dt.month
test['month'] = test.date.dt.month
## some more feature engineering
train["qteG"] = train.article_nom.str.extract('(\d+)G',expand=True).fillna(0).astype(int)
test["qteG"] = test.article_nom.str.extract('(\d+)G',expand=True).fillna(0).astype(int)
train['qteX'] = train.article_nom.str.extract('X ?(\d)',expand=True).fillna(0).astype(int)
test['qteX'] = test.article_nom.str.extract('X ?(\d)',expand=True).fillna(0).astype(int)
train['qteMl'] = train.article_nom.str.extract('(\d+) ?Ml',expand=True).fillna(0).astype(int)
test['qteMl'] = test.article_nom.str.extract('(\d+) ?Ml',expand=True).fillna(0).astype(int)
ytrain = train.set_index('id').qte_article_vendue
cat_features = ['implant', 'article_nom']
from sklearn import preprocessing
label_encoders = {}
for cat in cat_features:
label_encoders.update({cat:preprocessing.LabelEncoder()})
for cat, le in label_encoders.items():
cat_str = cat+'_label'
train[cat_str] = le.fit_transform(train[cat])
test[cat_str] = le.transform(test[cat])
##aggregates
#data = pd.concat([train.set_index('id'),test.set_index('id')],axis=0)
#data.groupby(['article_nom','date','implant']).qte_article_vendue.rolling(2).mean().reset_index()
trainingset = train.set_index('id').select_dtypes(include=['float64','int64']).drop('qte_article_vendue', axis=1)
testset = test.set_index('id').select_dtypes(include=['float64','int64'])
# Feature Selection
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.feature_selection import SelectFromModel
regressor = ExtraTreesRegressor().fit(trainingset, ytrain)
#lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(trainingset, ytrain)
model = SelectFromModel(regressor, prefit=True)
X = model.transform(trainingset)
Xpredict = model.transform(testset)
```
# Modeling
```
trainingset.columns[model.get_support()]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(trainingset, ytrain, test_size=0.05, random_state=42)
print('Start training...')
# train
gbm = lgb.LGBMRegressor(objective='regression',
num_leaves=60,
learning_rate=0.1,
n_estimators=150, random_state=42)
gbm.fit(X_train, y_train,
eval_set=[(X_test, y_test)],
eval_metric='rmse',
early_stopping_rounds=5)
print('Start predicting...')
# predict
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)
# eval
print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5)
# feature importances
print('Feature importances:', list(gbm.feature_importances_))
import numpy as np
for i in np.argsort(gbm.feature_importances_)[::-1][:10]:
print(trainingset.columns[i])
xgbReg = xgb.XGBRegressor(nthread=-1, min_child_weight=4, subsample=0.9, max_depth=5)
help(xgbReg.fit)
xgbReg.fit(X_train, y_train,
eval_metric='rmse',
eval_set=[(X_test, y_test)],
early_stopping_rounds=5)
print('Start predicting...')
# predict
y_pred2 = xgbReg.predict(X_test)
# eval
print('The rmse of prediction is:', mean_squared_error(y_test, y_pred2) ** 0.5)
# feature importances
print('Feature importances:', list(xgbReg.feature_importances_))
import numpy as np
for i in np.argsort(xgbReg.feature_importances_)[::-1][:10]:
print(trainingset.columns[i])
print('The rmse of prediction is:', mean_squared_error(y_test, 0.5*(y_pred+y_pred2)) ** 0.5)
y_sub = gbm.predict(testset, num_iteration=gbm.best_iteration_)
y_sub2 = xgbReg.predict(testset)
pd.DataFrame(0.5*(y_sub+y_sub2),index=testset.index,columns=['quantite_vendue']).to_csv('sub.csv',sep=';',decimal=',')
```
| github_jupyter |
## Final Output
```
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
from statistics import mean, median, variance
plt.rcParams['figure.figsize'] = [10, 5]
import pprint
import math
import tabulate
def get_overheads(file_name):
data = []
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
trip_duration = float(line[4])
overhead = float(line[6])
agent = line[7]
preference = line[8].replace('\r', '').replace('\n', '')
data.append(overhead)
return data
def get_utilizations(file_name):
utilizations = []
line_no = 0
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
line[len(line)-1] = line[len(line)-1].replace('\r', '').replace('\n', '')
line_no = line_no + 1
if line_no == 1:
edges = line
else:
utilizations.append([float(u) for u in line[1:]])
streets_data = {}
for i in range(len(edges)):
streets_data[edges[i]] = [utilization[i] for utilization in utilizations]
streets_utilizations = {}
for key, value in streets_data.items():
streets_utilizations[key] = mean(value)
return streets_utilizations
def get_wait_times(file_name):
wait_times = []
line_no = 0
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
line[len(line)-1] = line[len(line)-1].replace('\r', '').replace('\n', '')
line_no = line_no + 1
if line_no == 1:
lanes = line
else:
wait_times.append([float(u) for u in line[1:]])
wait_times_data = {}
for i in range(len(lanes)):
wait_times_data[lanes[i]] = [wait_time[i] for wait_time in wait_times]
lane_wait_times = {}
for key, value in wait_times_data.items():
lane_wait_times[key] = mean(value)
return lane_wait_times
lb_b0p0_overhead_csv = "data/lb_b0p0overheads.csv"
lb_b0p0_streets_csv = "data/lb_b0p0streets.csv"
lb_b0p0_waits_csv = "data/lb_b0p0waits.csv"
lb_b0p9_overhead_csv = "data/lb_b0p9overheads.csv"
lb_b0p9_streets_csv = "data/lb_b0p9streets.csv"
lb_b0p9_waits_csv = "data/lb_b0p9waits.csv"
lb_b1p0_overhead_csv = "data/lb_b1p0overheads.csv"
lb_b1p0_streets_csv = "data/lb_b1p0streets.csv"
lb_b1p0_waits_csv = "data/lb_b1p0waits.csv"
lb_b0p0_overhead = get_overheads(lb_b0p0_overhead_csv)
lb_b0p0_streets = get_utilizations(lb_b0p0_streets_csv)
lb_b0p0_waits = get_wait_times(lb_b0p0_waits_csv)
lb_b0p9_overhead = get_overheads(lb_b0p9_overhead_csv)
lb_b0p9_streets = get_utilizations(lb_b0p9_streets_csv)
lb_b0p9_waits = get_wait_times(lb_b0p9_waits_csv)
lb_b1p0_overhead = get_overheads(lb_b1p0_overhead_csv)
lb_b1p0_streets = get_utilizations(lb_b1p0_streets_csv)
lb_b1p0_waits = get_wait_times(lb_b1p0_waits_csv)
lb_overheads = []
lb_overheads.append(lb_b1p0_overhead)
lb_overheads.append(lb_b0p0_overhead)
lb_overheads.append(lb_b0p9_overhead)
lb_utilizations = []
lb_utilizations.append(lb_b1p0_streets)
lb_utilizations.append(lb_b0p0_streets)
lb_utilizations.append(lb_b0p9_streets)
lb_waits = []
lb_waits.append(lb_b1p0_waits)
lb_waits.append(lb_b0p0_waits)
lb_waits.append(lb_b0p9_waits)
labels = []
labels.append("Beta")
labels.append("Median of Trip Overhead")
labels.append("Δ")
labels.append("Variance of Street Utilization")
labels.append("Δ")
labels.append("Mean Wait time of lanes")
labels.append("Δ")
betas = [1.0, 0.0, 0.9]
output = []
line = 0
baseline_o = float()
baseline_u = float()
baseline_w = float()
for i in range(len(betas)):
if line == 0:
baseline_o = median(lb_overheads[i])
baseline_u = variance(lb_utilizations[i].values())
baseline_w = mean(lb_waits[i].values())
temp = []
temp.append(betas[i])
temp.append(median(lb_overheads[i]))
if(line != 0):
temp.append(float(((median(lb_overheads[i]) - baseline_o) / baseline_o) * 100))
else:
temp.append("(baseline)")
temp.append(variance(lb_utilizations[i].values()))
if(line != 0):
temp.append(float(((variance(lb_utilizations[i].values()) - baseline_u) / baseline_u) * 100))
else:
temp.append("(baseline)")
temp.append(mean(lb_waits[i].values()))
if(line != 0):
temp.append(float(((mean(lb_waits[i].values()) - baseline_w) / baseline_w) * 100))
else:
temp.append("(baseline)")
output.append(temp)
line = line + 1
# output
table = tabulate.tabulate(output, labels, 'unsafehtml')
table
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
from statistics import mean, median, variance
plt.rcParams['figure.figsize'] = [10, 5]
import pprint
import math
import tabulate
def get_overheads(file_name):
data = []
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
trip_duration = float(line[4])
overhead = float(line[6])
agent = line[7]
preference = line[8].replace('\r', '').replace('\n', '')
data.append(overhead)
return data
def get_utilizations(file_name):
utilizations = []
line_no = 0
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
line[len(line)-1] = line[len(line)-1].replace('\r', '').replace('\n', '')
line_no = line_no + 1
if line_no == 1:
edges = line
else:
utilizations.append([float(u) for u in line[1:]])
streets_data = {}
for i in range(len(edges)):
streets_data[edges[i]] = [utilization[i] for utilization in utilizations]
streets_utilizations = {}
for key, value in streets_data.items():
streets_utilizations[key] = mean(value)
return streets_utilizations
def get_wait_times(file_name):
wait_times = []
line_no = 0
with open(file_name, 'r') as results:
for line in results:
line = line.split(",")
line[len(line)-1] = line[len(line)-1].replace('\r', '').replace('\n', '')
line_no = line_no + 1
if line_no == 1:
lanes = line
else:
wait_times.append([float(u) for u in line[1:]])
wait_times_data = {}
for i in range(len(lanes)):
wait_times_data[lanes[i]] = [wait_time[i] for wait_time in wait_times]
lane_wait_times = {}
for key, value in wait_times_data.items():
lane_wait_times[key] = mean(value)
return lane_wait_times
tl_b0p0_overhead_csv = "data/tl_b0p0overheads.csv"
tl_b0p0_streets_csv = "data/tl_b0p0streets.csv"
tl_b0p0_waits_csv = "data/tl_b0p0waits.csv"
tl_b0p9_overhead_csv = "data/tl_b0p9overheads.csv"
tl_b0p9_streets_csv = "data/tl_b0p9streets.csv"
tl_b0p9_waits_csv = "data/tl_b0p9waits.csv"
tl_b1p0_overhead_csv = "data/tl_b1p0overheads.csv"
tl_b1p0_streets_csv = "data/tl_b1p0streets.csv"
tl_b1p0_waits_csv = "data/tl_b1p0waits.csv"
tl_b0p0_overhead = get_overheads(tl_b0p0_overhead_csv)
tl_b0p0_streets = get_utilizations(tl_b0p0_streets_csv)
tl_b0p0_waits = get_wait_times(tl_b0p0_waits_csv)
tl_b0p9_overhead = get_overheads(tl_b0p9_overhead_csv)
tl_b0p9_streets = get_utilizations(tl_b0p9_streets_csv)
tl_b0p9_waits = get_wait_times(tl_b0p9_waits_csv)
tl_b1p0_overhead = get_overheads(tl_b1p0_overhead_csv)
tl_b1p0_streets = get_utilizations(tl_b1p0_streets_csv)
tl_b1p0_waits = get_wait_times(tl_b1p0_waits_csv)
tl_overheads = []
tl_overheads.append(tl_b1p0_overhead)
tl_overheads.append(tl_b0p0_overhead)
tl_overheads.append(tl_b0p9_overhead)
tl_utilizations = []
tl_utilizations.append(tl_b1p0_streets)
tl_utilizations.append(tl_b0p0_streets)
tl_utilizations.append(tl_b0p9_streets)
tl_waits = []
tl_waits.append(tl_b1p0_waits)
tl_waits.append(tl_b0p0_waits)
tl_waits.append(tl_b0p9_waits)
labels = []
labels.append("Beta")
labels.append("Median of Trip Overhead")
labels.append("Δ")
labels.append("Variance of Street Utilization")
labels.append("Δ")
labels.append("Mean Wait time of lanes")
labels.append("Δ")
output = []
line = 0
baseline_o = float()
baseline_u = float()
baseline_w = float()
for i in range(len(betas)):
if line == 0:
baseline_o = median(tl_overheads[i])
baseline_u = variance(tl_utilizations[i].values())
baseline_w = mean(tl_waits[i].values())
temp = []
temp.append(betas[i])
temp.append(median(tl_overheads[i]))
if(line != 0):
temp.append(float(((median(tl_overheads[i]) - baseline_o) / baseline_o) * 100))
else:
temp.append("(baseline)")
temp.append(variance(tl_utilizations[i].values()))
if(line != 0):
temp.append(float(((variance(tl_utilizations[i].values()) - baseline_u) / baseline_u) * 100))
else:
temp.append("(baseline)")
temp.append(mean(tl_waits[i].values()))
if(line != 0):
temp.append(float(((mean(tl_waits[i].values()) - baseline_w) / baseline_w) * 100))
else:
temp.append("(baseline)")
output.append(temp)
line = line + 1
# output
table = tabulate.tabulate(output, labels, 'unsafehtml')
table
```
| github_jupyter |
## Machine Learning on the Titanic
```
import pandas as pd
import numpy as np
```
We import the data.
```
titanic = pd.read_csv("./data/titanic_train.csv")
titanic.head()
```
We select the columns for x.
```
x = titanic.drop(["PassengerId","Survived","Name","Ticket"],axis=1)
y = titanic["Survived"]
```
We simplify the `Cabin` column.
```
x["Cabin"]=x["Cabin"].str[0].fillna("No").replace({"T":"No","G":"No"})#.replace("G","No")
# convert all qualitative columns into dummy (binary) columns
x = pd.get_dummies(x,columns=["Sex","Cabin","Embarked"])
def transfo(x):
""" Cette fonction permet de transformer en binaires toutes les colonnes
objet d'un DataFrame en utilisant get_dummies()
"""
list_col_quali =[]
for col in x.columns:
if x[col].dtype == object:
list_col_quali.append(col)
print(list_col_quali)
return pd.get_dummies(x,columns=list_col_quali)
x = transfo(x)
# replace missing values with the median
x["Age"]=x["Age"].fillna(x["Age"].median())
```
# Train / Test Split
```
from sklearn.model_selection import train_test_split
```
We want to split our data into train and test sets.
```
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.3)
print(x_train.shape, x_test.shape)
```
We will build and fit several ML models.
```
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, accuracy_score
dico_modeles = dict(logit=LogisticRegression(),
rf=RandomForestClassifier(n_estimators=1000),
gbm=GradientBoostingClassifier(),
knn = KNeighborsClassifier(),
rn = MLPClassifier()
)
for modele in dico_modeles.keys():
dico_modeles[modele].fit(x_train,y_train)
y_predict = dico_modeles[modele].predict(x_test)
y_predict_proba = dico_modeles[modele].predict_proba(x_test)
print("Matrice de confusion pour modèle {} ".format(modele), confusion_matrix(y_test,y_predict),sep="\n")
print("Auc pour modèle {} ".format(modele) ,roc_auc_score(y_test,y_predict_proba[:,1] ))
print("Accuracy pour modèle {} ".format(modele), accuracy_score(y_test,y_predict))
pd.DataFrame(dico_modeles['rf'].feature_importances_,index=x.columns,
columns=["importance"]).sort_values("importance",ascending = False)
```
We will search for the model's hyperparameters using a grid.
```
from sklearn.model_selection import GridSearchCV
# build the parameter grid
param = dict(n_estimators=[10,100,1000], max_depth=[3,5,7,9])
# create a GridSearchCV object
modele_grid= GridSearchCV(RandomForestClassifier(),param,scoring="roc_auc",cv=4)
modele_grid.fit(x_train,y_train)
modele_grid.best_score_
modele_grid.best_params_
pd.DataFrame(modele_grid.cv_results_)
```
If we want to export a model, we can use:
```
import joblib  # in recent scikit-learn versions joblib is a standalone package (sklearn.externals.joblib was removed)
joblib.dump(modele_grid,"modele_grid.pkl")
```
## Building a Pipeline
```
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
# create a Pipeline object
mon_pipe = Pipeline(steps=[("acp",PCA(n_components=4)),("svm",SVC())])
mon_pipe.fit(x_train,y_train)
confusion_matrix(y_test,mon_pipe.predict(x_test))
```
| github_jupyter |
# Importing Libraries and Dataset
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
data_train= pd.read_csv(r"C:\Users\shruti\Desktop\Decodr Session Recording\Project\Decodr Project\Power Plant Data Analysis\train.csv", delimiter=",")
data_train.head()
data_train.shape
y_train= data_train[" EP"]
del data_train[" EP"]
data_train.head()
y_train.head()
```
# Structure of Dataset
```
data_train.describe()
y_train.shape
```
# Checking for Null values
```
data_train.isnull().sum()
data_train.isna().sum()
y_train.isnull().sum()
y_train.isna().sum()
```
# Exploratory Data Analysis
```
# Statistics
min_EP= y_train.min()
max_EP= y_train.max()
mean_EP= y_train.mean()
median_EP= y_train.median()
std_EP= y_train.std()
# Quartile calculator
first_quar= np.percentile(y_train, 25)
third_quar= np.percentile(y_train, 75)
inter_quar= third_quar - first_quar
# Print Statistics
print("Statistics for combined cycle Power Plant:\n")
print("Minimum EP:", min_EP)
print("Maximum EP:", max_EP)
print("Mean EP:", mean_EP)
print("Median EP:", median_EP)
print("Standard Deviation of EP:", std_EP)
print("First Quartile of EP:", first_quar)
print("Third Quartile of EP:", third_quar)
print("InterQuartile of EP:",inter_quar)
```
# Plotting
```
sns.set(rc={"figure.figsize":(5,5)})
sns.distplot(y_train, bins=30, color= "orange") # distribution of the target (EP)
plt.show()
```
# Correlation
```
corr_df=data_train.copy()
corr_df["EP"]=y_train
corr_df.head()
sns.set(style="ticks", color_codes=True)
plt.figure(figsize=(12,12))
sns.heatmap(corr_df.astype("float32").corr(), linewidths=0.1, square=True, annot=True)
plt.show()
```
# Features Plot
```
# Print all Features
data_train.columns
plt.plot(corr_df["# T"], corr_df["EP"], "+", color= "green")
plt.plot(np.unique(corr_df["# T"]), np.poly1d(np.polyfit(corr_df["# T"], corr_df["EP"], 1))
(np.unique(corr_df["# T"])), color="yellow")
plt.xlabel("Temperature", fontsize=12)
plt.ylabel("EP", fontsize=12)
plt.show()
plt.plot(corr_df[" V"], corr_df["EP"], "o", color= "pink")
plt.plot(np.unique(corr_df[" V"]), np.poly1d(np.polyfit(corr_df[" V"], corr_df["EP"], 1))
(np.unique(corr_df[" V"])), color="blue")
plt.xlabel("Exhaust Vaccum", fontsize=12)
plt.ylabel("EP", fontsize=12)
plt.show()
plt.plot(corr_df[" AP"], corr_df["EP"], "o", color= "orange")
plt.plot(np.unique(corr_df[" AP"]), np.poly1d(np.polyfit(corr_df[" AP"], corr_df["EP"], 1))
(np.unique(corr_df[" AP"])), color="green")
plt.xlabel("Ambient Pressure", fontsize=12)
plt.ylabel("EP", fontsize=12)
plt.show()
plt.plot(corr_df[" RH"], corr_df["EP"], "o", color= "seagreen")
plt.plot(np.unique(corr_df[" RH"]), np.poly1d(np.polyfit(corr_df[" RH"], corr_df["EP"], 1))
(np.unique(corr_df[" RH"])), color="blue")
plt.xlabel("Relative Humidity", fontsize=12)
plt.ylabel("EP", fontsize=12)
plt.show()
fig, ax=plt.subplots(ncols=4, nrows=1, figsize=(20,10))
index=0
ax=ax.flatten()
for i,v in data_train.items():
sns.boxplot(y=i, data=data_train, ax=ax[index], color= "orangered")
index+=1
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=0.5)
sns.set(style="whitegrid")
features_plot=data_train.columns
sns.pairplot(data_train[features_plot]);
plt.tight_layout
plt.show()
```
# Feature Scaling
```
from sklearn.preprocessing import StandardScaler
scaler= StandardScaler()
# Note: the scaled values are not assigned back here, so the model below is trained on the unscaled features
scaler.fit_transform(data_train)
```
# Gradient Boosting Model
```
x_train= data_train
x_train.shape, y_train.shape
from sklearn.ensemble import GradientBoostingRegressor
gbr=GradientBoostingRegressor(learning_rate=1.9, n_estimators=2000)
gbr
gbr.fit(x_train, y_train)
x_test= np.genfromtxt(r"C:\Users\shruti\Desktop\Decodr Session Recording\Project\Decodr Project\Power Plant Data Analysis\test.csv", delimiter=",")
y_train.ravel(order="A")
y_pred=gbr.predict(x_test)
y_pred
```
# Model Evaluation
```
gbr.score(x_train, y_train)
```
# Saving the Prediction
```
np.savetxt("Predict_csv", y_pred, fmt="%.5f")
```
| github_jupyter |
# NumPy - Scientific Computing
## 1. Introduction
NumPy is an extension library for the Python language. It supports large, multi-dimensional arrays and matrix operations, and provides a large collection of mathematical functions for array computation. Internally, NumPy releases [CPython's GIL](https://www.cnblogs.com/wj-1314/p/9056555.html) (Global Interpreter Lock), runs very efficiently, and is the foundation of many machine learning frameworks!
NumPy's full name is Numeric Python. It is an open-source scientific computing library for Python that includes:
- a powerful N-dimensional array object, ndarray;
- a fairly mature (broadcasting) function library;
- tools for integrating C/C++ and Fortran code;
- practical linear algebra, Fourier transform, and random number generation functions.
Advantages of NumPy:
- For the same numerical computing task, using NumPy is much more convenient than writing plain Python code;
- The storage efficiency and I/O performance of NumPy arrays far exceed those of the equivalent built-in Python data structures, and the performance gain scales with the number of elements in the array;
- Most of NumPy's code is written in C, and its underlying algorithms were designed for excellent performance, which makes NumPy far more efficient than pure Python code.
Disadvantages of NumPy:
- Because NumPy uses memory-mapped files to achieve optimal read/write performance, the amount of available memory limits its ability to handle terabyte-scale files;
- In addition, NumPy arrays are less general-purpose than Python's built-in list container.
Therefore, outside of scientific computing, NumPy's advantages are less pronounced.
The following content is based on: https://www.bilibili.com/video/av8727995?from=search&seid=12457925641538891537
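As a tiny illustration of the performance point above (a rough sketch rather than a rigorous benchmark; exact numbers depend on your machine), you can compare summing one million numbers with a plain Python list versus a NumPy array:
```
# Rough timing sketch: summing one million numbers with a list vs. a NumPy array.
import timeit
import numpy as np

py_list = list(range(1_000_000))
np_array = np.arange(1_000_000)

t_list = timeit.timeit(lambda: sum(py_list), number=10)
t_numpy = timeit.timeit(lambda: np_array.sum(), number=10)

print('Python list sum :', t_list)
print('NumPy array sum :', t_numpy)
```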
## 2. Basic Usage
### 2.1 ndarray Attributes
```
# import
import numpy as np
# convert a list into an array
a = [[1,2,3],[2,3,4]]
print('list:\n',a)
array = np.array(a)
print('array:\n',array)
# get basic array attributes
print('num of dim:',array.ndim) # rank, number of dimensions
print('shape:',array.shape) # array shape
print('size:',array.size) # number of elements
```
### 2.2 Creating Arrays
(1) Specifying the element dtype.
NumPy supports more numeric types than Python. To distinguish them from Python's native data types, type names such as bool, int, float, complex, and str have an underscore (_) appended. A detailed list of data types is available at: https://www.cnblogs.com/gl1573/p/10549547.html.
```
# convert a list into an array with the specified dtype
a = np.array([2,3,4],dtype=np.float32)
print(a.dtype)
```
(2) All-zeros array
```
zero = np.zeros((3,4))
print(zero)
```
(3) All-ones array
```
one = np.ones((3,4),dtype=np.int32)
print(one)
```
(4) Empty (uninitialized) array
```
empty = np.empty((3,4))
print(empty)
```
(5) Ordered sequence
```
range = np.arange(10,20,2) # [10,20)
print(range)
```
(6) Array with a specified shape
```
shape = np.arange(12).reshape((3,4))
print(shape)
```
(7) Evenly spaced values (linspace)
```
line = np.linspace(10,20,5)
print(line)
```
### 2.3 Element-wise Operations
(1) Addition, subtraction, multiplication, and division
Direct addition, subtraction, multiplication, and division are applied element-wise to corresponding positions in the arrays:
```
a1 = np.array([5,8,11,15,20])
a2 = np.arange(1,6)
b1 = a1 + a2
print(b1)
b2 = a1 - a2
print(b2)
b3 = a1 * a2
print(b3)
b4 = a1 / a2
print(b4)
```
(2) Power (double asterisk)
```
b5 = a2**3
print(b5)
```
(3) sin/cos
```
b6 = 10*np.sin(a2)
print(b6)
```
(4) Comparison
```
print(a2<5)
```
### 2.4 Matrix Operations
```
c1 = np.array([[1,2],[1,2]])
c2 = np.arange(4).reshape((2,2))
print(c1)
print(c2)
```
(1) Matrix multiplication
```
c_dot = np.dot(c1,c2)
c_dot2 = c1.dot(c2)
print(c_dot)
print(c_dot2)
```
(2) Sum / maximum / minimum / mean
```
c3 = np.random.random((2,4))
print(c3)
print(np.sum(c3))
print(np.sum(c3,axis=1)) # within each row
print(np.min(c3))
print(np.sum(c3,axis=0)) # within each column
print(np.max(c3))
print(np.mean(c3))
print(c3.mean())
```
(3) Index of the minimum/maximum element
```
print(np.argmin(c3))
print(np.argmax(c3))
```
(4) Cumulative sum
```
print(np.cumsum(c3))
```
(5) Transpose
```
print(c3.T)
```
(6) Clip (limit the maximum and minimum values of the array)
```
print(c3.clip(0.25,0.75))
```
## 3. Advanced Techniques
### 3.1 Indexing
Indices start at 0.
```
d1 = np.arange(3,15).reshape((3,4))
print(d1)
print(d1[1]) # index an entire row
print(d1[1,:]) # index an entire row
print(d1[1][3]) #
print(d1[1,3]) # prefer this form
print(d1[:,2]) # index an entire column
print(d1[0:2,2]) #
# print row by row
for row in d1:
print(row)
# print column by column
for colum in d1.T:
print(colum)
# print element by element
for items in d1.flat: # flat iterates over the array as a flattened 1-D sequence
print(items)
```
### 3.2 Merging
```
e1 = np.array([1,1,1])
e2 = np.array([2,2,2])
# vertical stacking (top-bottom merge)
f1 = np.vstack((e1,e2))
print(f1)
# horizontal stacking (left-right merge)
f2 = np.hstack((e1,e2))
print(f2)
# convert a row vector into a column vector
print(e1[:,np.newaxis])
print(e1.T) # .T works for matrices, not for 1-D vectors
# concatenate joins arrays along the given axis (axis=0 or axis=1)
f3 = np.concatenate((e1,e2),axis=0)
print(f3)
```
### 3.3 Splitting
(1) Equal splitting
```
g1 = np.arange(12).reshape((3,4))
print(g1)
print(np.split(g1,2,axis=1))
print(np.split(g1,3,axis=0))
print(np.hsplit(g1,2))
print(np.vsplit(g1,3))
```
(2) Unequal splitting
```
print(np.array_split(g1,2,axis=0))
```
### 3.4 Copying
(1) Shallow copy
```
a = np.arange(4)
b = a
c = a
d = b
a[0] = 15
d[3] = 20
print(a)
print(b)
print(c)
print(d)
print(b is a)
```
(2) Deep copy
```
a = np.arange(6)
b = np.copy(a)
a[0]=666
print(a)
print(b)
```
| github_jupyter |
# Filling in Missing Values in Tabular Records
You can select Run->Run All Cells from the menu to run all cells in Studio (or Cell->Run All in a SageMaker Notebook Instance).
## Introduction
Missing data values are common due to omissions during manual entry or optional input. Simple data imputation such as using the median/mode/average may not be satisfactory. When there are many features, we can sometimes train a model to use the existing features to predict the desired feature.
This solution provides an end-to-end example that takes a tabular data set with a target column, trains a model and deploys it to an endpoint, and calls that endpoint to make predictions.
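To make the contrast concrete, here is a small, self-contained sketch (not part of this solution's code; the synthetic columns and the choice of a random-forest regressor are illustrative assumptions) comparing a simple median fill with a model-based fill that predicts the missing column from the other features:
```
# Minimal sketch: median imputation vs. model-based imputation on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({'f1': rng.normal(size=500)})
df['f2'] = 2 * df['f1'] + rng.normal(scale=0.1, size=500)
df['target'] = df['f1'] + df['f2'] + rng.normal(scale=0.1, size=500)

# Hide 20% of the target values to simulate missing data.
mask = rng.random(500) < 0.2
true_values = df.loc[mask, 'target'].copy()
df.loc[mask, 'target'] = np.nan

# Simple imputation: fill every missing entry with the median of the observed values.
median_fill = np.full(mask.sum(), df['target'].median())

# Model-based imputation: train on the rows where the target is known.
known = df[df['target'].notna()]
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(known[['f1', 'f2']], known['target'])
model_fill = model.predict(df.loc[mask, ['f1', 'f2']])

print('Median fill MAE:', np.mean(np.abs(median_fill - true_values)))
print('Model fill MAE: ', np.mean(np.abs(model_fill - true_values)))
```
On data where the target is well explained by the other features, the model-based fill typically recovers the hidden values far more accurately than a constant median fill.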
## Architecture
As part of the solution, the following services are used:
* Amazon S3: Used to store datasets.
* Amazon SageMaker Notebook: Used to preprocess and process the data, and to train the deep learning model.
* Amazon SageMaker Endpoint: Used to deploy the trained model.

## Data Set
We will use public data from the City of Cincinnati Public Services describing Fleet Inventory. We will train a model to predict missing values of a 'target' column based on the other columns.
Please see:
https://www.cincinnati-oh.gov/public-services/about-public-services/fleet-services/
https://data.cincinnati-oh.gov/Thriving-Neighborhoods/Fleet-Inventory/m8ba-xmjz
## Acknowledgements
AutoPilot code based on
https://github.com/aws/amazon-sagemaker-examples/blob/master/autopilot/sagemaker_autopilot_direct_marketing.ipynb
```
# Replace these with your train/test CSV data and target columns.
# If left empty, the sample data set will be used.
data_location = '' # Ex. s3://your_bucket/your_file.csv
target = '' # Specify target column name
if data_location == '':
# Use sample dataset.
dataset_file = 'data/dataset.csv'
target = 'ASSET_TYPE'
else:
# Download custom dataset.
!aws s3 cp $data_location data/custom_dataset.csv
print('Downloaded custom dataset')
dataset_file = 'data/custom_dataset.csv'
```
## Inspect the Data
```
import pandas as pd
data = pd.read_csv(dataset_file)
data
```
## Preprocess Data
Some of the entries in the target column are null. We will remove those entries for training/testing.
```
import numpy as np
def remove_null_rows(data, target):
idx = data[target].notna()
return data.loc[idx]
def split_train_test(data, p=.8):
idx = np.random.choice([True, False], replace = True, size = len(data), p=[p, 1 - p])
train_df = data.iloc[idx]
test_df = data.iloc[[not i for i in idx]]
return train_df, test_df
non_null_data = remove_null_rows(data, target)
train, test = split_train_test(non_null_data)
train_file = 'data/train.csv'
test_file = 'data/test.csv'
train.to_csv(train_file, index=False, header=True)
test.to_csv(test_file, index=False, header=True)
```
## Store Processed Data on S3
Now that we have our data in files, we store this data to S3 so we can use SageMaker AutoPilot.
```
import sagemaker
from sagemaker.s3 import S3Uploader
import json
with open('stack_outputs.json') as f:
sagemaker_configs = json.load(f)
s3_bucket = sagemaker_configs['S3Bucket']
train_data_s3_path = S3Uploader.upload(train_file, 's3://{}/data'.format(s3_bucket))
print('Train data uploaded to: ' + train_data_s3_path)
test_data_s3_path = S3Uploader.upload(test_file, 's3://{}/data'.format(s3_bucket))
print('Test data uploaded to: ' + test_data_s3_path)
```
### Configure AutoPilot
For the purposes of a demo, we will use only 2 candidates. Remove this parameter to run AutoPilot with its defaults (note: for this data set a full run will take several hours).
```
input_data_config = [{
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': 's3://{}/data/train'.format(s3_bucket)
}
},
'TargetAttributeName': target
}]
output_data_config = {
'S3OutputPath': 's3://{}/data/output'.format(s3_bucket)
}
automl_job_config ={
'CompletionCriteria': {
'MaxCandidates': 2 # Remove this option for the default run.
}
}
import boto3
from time import gmtime, strftime, sleep
role = sagemaker_configs['SageMakerIamRole']
solution_prefix = sagemaker_configs['SolutionPrefix']
auto_ml_job_name = solution_prefix + strftime('%d-%H-%M-%S', gmtime())
print('AutoMLJobName: ' + auto_ml_job_name)
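# NOTE: the SageMaker client below is hard-coded to us-west-2; change the region
# if your S3 bucket and the solution's resources were created somewhere else.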
sm = boto3.Session().client(service_name='sagemaker',region_name='us-west-2')
sm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,
InputDataConfig=input_data_config,
OutputDataConfig=output_data_config,
AutoMLJobConfig=automl_job_config,
RoleArn=role)
# This will take approximately 20 minutes to run.
secondary_status = ''
while True:
describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
job_run_status = describe_response['AutoMLJobStatus']
if job_run_status in ('Failed', 'Completed', 'Stopped'):
print('\n{}: {}'.format(describe_response['AutoMLJobSecondaryStatus'], job_run_status))
break
if secondary_status == describe_response['AutoMLJobSecondaryStatus']:
print('.', end='')
else:
secondary_status = describe_response['AutoMLJobSecondaryStatus']
print('\n{}: {}'.format(secondary_status, job_run_status), end='')
sleep(60)
best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']
best_candidate_name = best_candidate['CandidateName']
print(best_candidate)
print('\n')
print("CandidateName: " + best_candidate_name)
print("FinalAutoMLJobObjectiveMetricName: " + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])
print("FinalAutoMLJobObjectiveMetricValue: " + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))
model_name = sagemaker_configs['SageMakerModelName']
model = sm.create_model(Containers=best_candidate['InferenceContainers'],
ModelName=model_name,
ExecutionRoleArn=role)
```
## Deploy an Endpoint
```
print("Building endpoint with model {}".format(model_name))
endpoint_config_name = sagemaker_configs['SageMakerEndpointName'] + '-config'
create_endpoint_config_response = sm.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
endpoint_name = sagemaker_configs['SageMakerEndpointName']
create_endpoint_response = sm.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name,
)
print(create_endpoint_response['EndpointArn'])
resp = sm.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
import time
print('Creating Endpoint... this may take several minutes')
while status=='Creating':
resp = sm.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print('.', end='')
time.sleep(15)
print("\nStatus: " + status)
```
## Test the Endpoint
```
runtime_client = boto3.client('runtime.sagemaker')
test_input = test.drop(columns=[target])[0:10]
test_input_csv = test_input.to_csv(index=False, header=False).split('\n')
test_labels = test[target][0:10]
for i, (single_test, single_label) in enumerate(zip(test_input_csv, test_labels)):
print('=== Test {} ===\nInput: {}\n'.format(i, single_test))
response = runtime_client.invoke_endpoint(EndpointName = endpoint_name,
ContentType = 'text/csv',
Body = single_test)
result = response['Body'].read().decode('ascii')
print('Predicted label is {}\nCorrect label is {}\n'.format(result.rstrip(), single_label.rstrip()))
```
## Clean up
Stack deletion will clean up all created resources including S3 buckets, Endpoint configurations, Endpoints and Models.
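If you would rather release the endpoint resources directly from this notebook (instead of, or before, deleting the stack), something along these lines should work with the `boto3` SageMaker client created above (assuming the earlier cells have run, so `sm`, `endpoint_name`, `endpoint_config_name` and `model_name` are still defined):
```
# Optional manual clean-up of the resources created in this notebook.
sm.delete_endpoint(EndpointName=endpoint_name)
sm.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm.delete_model(ModelName=model_name)
```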
| github_jupyter |
Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack
# Exercise Sheet 06
## Introduction
This week's sheet should be solved and handed in before the end of **Sunday, May 20, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your group's designated tutor or whomever of us you run into first. Please upload your results to your group's Stud.IP folder.
## Assignment 0: Math recap (Hyperplanes) [2 Bonus Points]
This exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check whether you are comfortable with them. Usually you should have no problem answering these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know.
**a)** What is a *hyperplane*? What are the hyperplanes in $\mathbb{R}^2$ and $\mathbb{R}^3$? How are they usually described?
A hyperplane is a subspace whose dimension is one less than that of its ambient space. A hyperplane in $\mathbb{R}^2$ is a line (dimension 1) and a hyperplane in $\mathbb{R}^3$ is a plane (dimension 2).
**Description:**
$$\vec{x}\cdot\vec{n} = d$$
where $\vec{x}$ is the position vector of a point, $\vec{n}$ is a unit normal vector of the hyperplane, $\cdot$ is the dot product and $d$ is the distance of the hyperplane from the origin. All points that fulfill this equation lie on the hyperplane.
**b)** What is the Hesse normal form? What is the intuition behind? What are its advantages?
**Definition**
The Hesse normal form is a special type of equation which describes a line in $\mathbb{R}^2$ or a plane in $\mathbb{R}^3$ (or even higher-dimensional hyperplanes) through a unit normal vector and the distance to the origin. The Hesse normal form is useful when wanting to calculate the distance of a point to a plane or a line.
**Intuition**
The dot product $\vec{x}\cdot\vec{n}$ is the length of the projection of $\vec{x}$ onto the unit normal; for points on the hyperplane this projection length is exactly $d$, the distance of the hyperplane from the origin.
**Advantages**
Because the normal vector is normalised to unit length, plugging an arbitrary point $\vec{p}$ into the left-hand side directly yields its signed distance to the hyperplane, $\vec{p}\cdot\vec{n} - d$, without any further scaling.
**c)** Can you transform the standard form of a hyperplane into the Hesse normal form and vice versa?
Yes, the standard form of a hyperplane can be transformed into the Hesse normal form and vice versa. Starting from the standard form $\vec{a}\cdot\vec{x} = b$ (i.e. $\sum_{i=1}^n a_ix_i = b$), divide both sides by $\|\vec{a}\|$:
\begin{align*}
\vec{a}\cdot\vec{x} &= b \\
\Rightarrow\quad \vec{x}\cdot\vec{n} &= d, \qquad \text{with } \vec{n} = \frac{\vec{a}}{\|\vec{a}\|},\; d = \frac{b}{\|\vec{a}\|}.
\end{align*}
Multiplying the Hesse normal form by an arbitrary non-zero scalar recovers a standard form again (if $d < 0$, multiply by $-1$ first so that the distance is non-negative).
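A small worked example (added for illustration, not part of the original sheet): for the plane $2x_1 + 2x_2 + x_3 = 6$ we have $\|\vec{a}\| = 3$, so the Hesse normal form is
$$\tfrac{2}{3}x_1 + \tfrac{2}{3}x_2 + \tfrac{1}{3}x_3 = 2 ,$$
and the point $(1,1,1)$ has signed distance $\tfrac{2}{3}+\tfrac{2}{3}+\tfrac{1}{3} - 2 = -\tfrac{1}{3}$, i.e. it lies at distance $\tfrac{1}{3}$ from the plane, on the same side as the origin.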
## Assignment 1: Local PCA (8 Points)
In the lecture we learned that regular PCA is ill suited for special cases of data. In this assignment we will take a look at local PCA which is used for clustered data (ML-06, Slide 25). This is mostly a repetition of algorithms we already used. Feel free to use the built-in functions for k-means clustering and PCA from the libraries (we already included the right imports to set you on track).
```
%matplotlib notebook
import numpy as np
import numpy.random as rnd
import matplotlib.colors as mplc
import matplotlib.pyplot as plt
from numpy.random import multivariate_normal as multNorm
from scipy.cluster.vq import kmeans, vq
from sklearn.decomposition import PCA
def pdist2(x, y):
"""
Pairwise distance between all points of two datasets.
Args:
x (ndarray): Containing j data points of dimension n. Shape (j, n).
y (ndarray): Containing k data points of dimension n. Shape (k, n).
Returns:
ndarray: Pairwise distances between all data points. Shape (j, k).
"""
distance_mat = np.empty((x.shape[0], y.shape[0]))
for i in range(y.shape[0]):
distance_mat[:, i] = np.linalg.norm(x - y[i], axis=-1)
return distance_mat
# Generate clustered data - you may plot the data to take a look at it
data = np.vstack((multNorm([2,2],[[0.1, 0], [0, 1]],100), multNorm([-2,-4],[[1, 0], [0, 0.3]],100)))
# colors = ['indianred','steelblue','yellowgreen','lightseegreen','wheat','purple','sandybrown']
colors = ['red','blue','green','cyan','yellow','magenta','orange']
# Apply k-means to the data (for k = 3, 5, 7)
for k in [3, 5, 7]:
centroids, distortion = kmeans(data, k)
# Generate distance matrix for all observations with all centroids
distances = pdist2(data, centroids)
# Assign data to best matching centroid
    labels = np.argmin(distances, axis=1)
# Plot the results of k-means
fig = plt.figure('k-means for k ={}'.format(k))
plt.scatter(data[:,0], data[:,1], c=labels)
plt.scatter(centroids[:,0], centroids[:,1],
c=list(set(labels)), alpha=.1, marker='o',
s=np.array([np.count_nonzero(labels==label) for label in set(labels)])*100)
plt.title('k = {}'.format(k))
# Plot the results of local PCA
pca_fig = plt.figure('projected data and components for k ={}'.format(k))
comps = np.array([[0, 0]])
# Apply PCA for each cluster and store each two largest components.
for i, cluster in enumerate(centroids):
pca = PCA(n_components=2)
cluster_data = np.array([data[idx] for idx,label in enumerate(labels) if label == i])
pca.fit(cluster_data)
comps = np.concatenate((comps, [pca.components_[0,:]]), axis=0)
comps = np.concatenate((comps, [pca.components_[1,:]]), axis=0)
# row_sums = cluster_data.sum(axis=1)
# proj = cluster_data / row_sums[:, np.newaxis]
        proj = pca.transform(cluster_data)  # project the cluster onto its principal components
        plt.scatter(proj[:,0], proj[:,1], c=colors[i])
comps = np.delete(comps, 0, 0)
filler = np.zeros(len(comps))
plt.quiver(filler, filler, comps[:,0], comps[:,1], scale=0.2) #, color=colors, scale=1)
```
## Assignment 2: Data Visualization and Chernoff Faces (6 Points)
The following exercise contains no programming (unless you want to go through the implementation). Answer the questions that are posted below the code segment (and run the code before - it's really worth it!). In case you are even more interested - here is a link to the [original paper](http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0738473).
```
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse, Arc
from numpy.random import rand
import numpy as np
def cface(ax, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15, x16, x17, x18):
"""
This implementation of chernov faces is taken from Abraham Flaxman. You can
find the original source files here: https://gist.github.com/aflaxman/4043086
Only minor adjustments have been made.
x1 = height of upper face
x2 = overlap of lower face
x3 = half of vertical size of face
x4 = width of upper face
x5 = width of lower face
x6 = length of nose
x7 = vertical position of mouth
x8 = curvature of mouth
x9 = width of mouth
x10 = vertical position of eyes
x11 = separation of eyes
x12 = slant of eyes
x13 = eccentricity of eyes
x14 = size of eyes
x15 = position of pupils
x16 = vertical position of eyebrows
x17 = slant of eyebrows
x18 = size of eyebrows
"""
# transform some values so that input between 0,1 yields variety of output
x3 = 1.9 * (x3 - .5)
x4 = (x4 + .25)
x5 = (x5 + .2)
x6 = .3 * (x6 + .01)
x8 = 5 * (x8 + .001)
x11 /= 5
x12 = 2 * (x12 - .5)
x13 += .05
x14 += .1
x15 = .5 * (x15 - .5)
x16 = .25 * x16
x17 = .5 * (x17 - .5)
x18 = .5 * (x18 + .1)
# top of face, in box with l=-x4, r=x4, t=x1, b=x3
e = Ellipse((0, (x1 + x3) / 2), 2 * x4, (x1 - x3), ec='black', linewidth=2)
ax.add_artist(e)
# bottom of face, in box with l=-x5, r=x5, b=-x1, t=x2+x3
e = Ellipse((0, (-x1 + x2 + x3) / 2), 2 * x5, (x1 + x2 + x3), fc='white', ec='black', linewidth=2)
ax.add_artist(e)
# cover overlaps
e = Ellipse((0, (x1 + x3) / 2), 2 * x4, (x1 - x3), fc='white', ec='none')
ax.add_artist(e)
e = Ellipse((0, (-x1 + x2 + x3) / 2), 2 * x5, (x1 + x2 + x3), fc='white', ec='none')
ax.add_artist(e)
# draw nose
plt.plot([0, 0], [-x6 / 2, x6 / 2], 'k')
# draw mouth
p = Arc((0, -x7 + .5 / x8), 1 / x8, 1 / x8, theta1=270 - 180 / np.pi * np.arctan(x8 * x9),
theta2=270 + 180 / np.pi * np.arctan(x8 * x9))
ax.add_artist(p)
# draw eyes
p = Ellipse((-x11 - x14 / 2, x10), x14, x13 * x14, angle=-180 / np.pi * x12, fc='white', ec='black')
ax.add_artist(p)
p = Ellipse((x11 + x14 / 2, x10), x14, x13 * x14, angle=180 / np.pi * x12, fc='white', ec='black')
ax.add_artist(p)
# draw pupils
p = Ellipse((-x11 - x14 / 2 - x15 * x14 / 2, x10), .05, .05, facecolor='black')
ax.add_artist(p)
p = Ellipse((x11 + x14 / 2 - x15 * x14 / 2, x10), .05, .05, facecolor='black')
ax.add_artist(p)
# draw eyebrows
plt.plot([-x11 - x14 / 2 - x14 * x18 / 2, -x11 - x14 / 2 + x14 * x18 / 2],
[x10 + x13 * x14 * (x16 + x17), x10 + x13 * x14 * (x16 - x17)], 'k')
plt.plot([x11 + x14 / 2 + x14 * x18 / 2, x11 + x14 / 2 - x14 * x18 / 2],
[x10 + x13 * x14 * (x16 + x17), x10 + x13 * x14 * (x16 - x17)], 'k')
fig = plt.figure('Chernoff Faces', figsize=(11, 11))
for i in range(25):
ax = fig.add_subplot(5, 5, i + 1, aspect='equal')
cface(ax, .9, *rand(17))
ax.axis([-1.2, 1.2, -1.2, 1.2])
ax.set_xticks([])
ax.set_yticks([])
fig.subplots_adjust(hspace=0, wspace=0)
fig.canvas.draw()
```
### a) Data Visualization Techniques
Why do we need data visualization techniques and what are techniques to visualize high dimensional data?
Automated analysis of high-dimensional data is rarely possible, but humans have remarkable pattern recognition abilities, so visualizing data for humans to analyze is an important field of study.
**Available Techniques:**
- PCA
- reduce dimensions and project data onto those to display
- Scatterplot Matrix
- project onto 2 dimensions and display all combinations as scatterplots
- Glyphs
- use some kind of geometry, where each dimension controls one parameter of the geometry
  - Chernoff Faces: map dimensions onto features of a face (as was done in the above plot)
- Parallel Coordinate Plots
- make *columns* for each dimension and connect them via lines
### b) Chernoff faces
Why did Chernoff use faces for his representation? Why not something else, like dogs or houses?
Humans have a highly developed ability for (human) facial recognition, so it makes sense to show data features using human faces. Our ability to distinguish faces of other animals is rather poor. Also, for items such as houses, there may be fewer features available that can be varied in an easily recognisable way.
### c) Alternatives
Explain at least one other data visualization technique from the lecture.
The **Parallel Coordinate Plot** maps the different features/dimensions of the data onto columns on the x-axis, and the values of those features are mapped onto the y-axis. Then one line per datum is drawn from the first to the last column, at the height that represents the value of the respective feature.
What is somewhat troublesome when using these kinds of plots is that scaling makes a huge difference for interpretability (the value ranges of different features may vary a lot), and navigation becomes harder the more data (i.e. lines) are present.
**Example Image:**

## Assignment 3: Hebbian Learning (6 Points)
In the lecture (ML-07, Slides 10ff.) there is a simplified version of Ivan Pavlov's famous experiment on classical conditioning. In this exercise you will take a look into this simplified model and create your own conditionable dog with a simple Hebbian learning rule.
### a) Programming a Dog
To model the dog's saliva behavior we need to model an unconditioned and a conditioned stimulus: food and bell. They are represented as lists: `weight_food` and `weight_bell`. Note that one could just use a single number; the lists are only here to keep track of the history for a nice output. It is possible to access the current weight by selecting the last item of each list, respectively: `weight_food[-1]`.
A list of trials is already given as well as a condition database. Each entry represents an index to select from the `condition_db`. To figure out the value of the stimulus `food` in the second trial (which maps to condition `1`) one could do: `condition_db[1]["food"]`.
Your task is to implement a `for` loop over all trials. In each iteration select the correct values for $x_1$ and $x_2$ from the condition database and retrieve the current weights $w_1$ and $w_2$. Then calculate the response of the dog with the threshold $\theta$:
$$
r_t = \Theta(x_{1,t} w_{1,t-1} + x_{2,t} w_{2,t-1})\\
\Theta(x)= \begin{cases}1 & \text{if } x \geq \theta\\0 & \text{else}\end{cases}
$$
With this response calculate both $w_{n,t}$ according to the Hebbian rule:
$$w_{n,t} = w_{n, t-1} + \epsilon \cdot r_t \cdot x_{n,t}$$
*Note: While you program, the output might look a little messy; don't worry about it. Once you fill up all three lists properly, it will look much like ML-07, Slide 14.*
```
# Initialization
condition_db = [{"food": 1, "bell": 0},
{"food": 0, "bell": 1},
{"food": 1, "bell": 1}]
trials = [0, 1, 2, 2, 1, 2, 1]
epsilon = 0.2
theta = 1/2
responses = []
weight_food = [1]
weight_bell = [0]
def calc_response(sample):
    if weight_food[-1]*sample["food"] + weight_bell[-1]*sample["bell"] >= theta:
return 1
else:
return 0
def update_weights(response, sample):
weight_food.append(weight_food[-1] + epsilon * response * sample["food"])
weight_bell.append(weight_bell[-1] + epsilon * response * sample["bell"])
# For each trial, update the current weights of the US and CS and store
# the results in the respective lists. Also store the response.
for t in trials:
responses.append(calc_response(condition_db[t]))
update_weights(responses[-1], condition_db[t])
# Output
print("| Food | |" + "| |".join(["{:3d}".format(condition_db[trial]["food"]) for trial in trials]) + "| |")
print("| Bell | |" + "| |".join(["{:3d}".format(condition_db[trial]["bell"]) for trial in trials]) + "| |")
print("| Saliva | |" + "| |".join(["{:3d}".format(response) for response in responses]) + "| |")
print("| w_Food |" + "| |".join(["{:3.1f}".format(w) for w in weight_food]) + "|")
print("| w_Bell |" + "| |".join(["{:3.1f}".format(w) for w in weight_bell]) + "|")
```
### b) Parameter adjustment
In the above default setting of trials (`[0, 1, 2, 2, 1, 2, 1]`, in case you changed it), how many learning steps did you need until the dog started to produce saliva on the conditioned stimulus? What happens if you change the parameters $\epsilon$ and $\theta$? Try smaller and bigger values for each or present different conditions to the dog.
**How many learning steps were needed with the default settings?** → 5 steps were needed
**Smaller Values:**
For smaller values ($\epsilon = 0.01$ and $\theta = 0.1$), the default number of trials does not suffice for the dog to learn to react to the conditioned stimulus.
**Larger Values:**
For larger values ($\epsilon = 0.5$ and $\theta = 0.9$), the dog already responds to the conditioned stimulus after only **one** trial.
**Explanation:**
Since every reinforced trial increases the weight of a stimulus by $\epsilon$, choosing a large value for $\epsilon$ makes the subject respond to the conditioned stimulus much sooner, while a small value for $\epsilon$ means that many more trials are needed before the response threshold is reached.
In contrast, $\theta$ is the reaction threshold: with a large $\theta$ the subject only responds once the weighted stimulus input exceeds that high value, whereas with a small $\theta$ the subject starts responding to stimuli much earlier.
| github_jupyter |
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, StratifiedKFold, KFold, StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
import xgboost as xgb
from sklearn.metrics import precision_score, recall_score, jaccard_score, roc_auc_score, accuracy_score, classification_report, balanced_accuracy_score
from sklearn.metrics import confusion_matrix
import joblib  # imported directly; sklearn.externals.joblib is deprecated
import os
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
path_parent = os.path.dirname(os.getcwd())
data_dir = os.path.join(path_parent,'data')
model_dir = os.path.join(path_parent,'models')
processed_dir = os.path.join(data_dir,'processed')
# df_anomaly = pd.read_csv(os.path.join(processed_dir,"anomaly_anotated.csv"))
df_audsome = pd.read_csv(os.path.join(processed_dir,"anomaly_anotated_audsome.csv"))
print("Dataset chosen ...")
data = df_audsome
drop_col = ['t1','t2','t3','t4']
print("Remove unwanted columns ...")
print("Shape before drop: {}".format(data.shape))
data.drop(drop_col, axis=1, inplace=True)
print("Shape after drop: {}".format(data.shape))
# Nice print
nice_y = data['target']
# Uncomment for removing dummy
print("Removed Dummy class")
data.loc[data.target == "dummy", 'target'] = "0"
# Uncomment for removing cpu
data.loc[data.target == "cpu", 'target'] = "0"
# Uncomment for removing copy
data.loc[data.target == "copy", 'target'] = "0"
# Print new unique columns
print(data['target'].unique())
#Creating the dependent variable class
factor = pd.factorize(data['target'])
data.target = factor[0]
definitions = factor[1]
# print(data.target.head())
# print(definitions)
# Plot class distribution
print("Ploting class distribution ..")
class_dist = sns.countplot(nice_y)
# plt.title('Confusion Matrix Fold {}'.format(fold), fontsize = 15) # title with fontsize 20
# plt.xlabel('Ground Truth', fontsize = 10) # x-axis label with fontsize 15
# plt.ylabel('Predictions', fontsize = 10) # y-axis label with fontsize 15
dist_fig = "Class_distribution.png"
class_dist.figure.savefig(os.path.join(model_dir, dist_fig))
print("Splitting dataset into training and ground truth ...")
X = data.drop(['target', 'time'], axis=1)
y = data['target']
scaler = StandardScaler()
# XGB best performing
# paramgrid = {"n_estimators": 50,
# "max_depth": 4,
# "learning_rate": 0.1,
# "subsample": 0.2,
# "min_child_weight": 6,
# "gamma": 1,
# "seed": 42,
# "objective": "multi:softmax"}
paramgrid = {"n_estimators": 1000,
"max_depth": 4,
"learning_rate": 0.01,
"subsample": 0.2,
"min_child_weight": 6,
"gamma": 0,
"seed": 42,
"objective": "binary:logistic",
"n_jobs": -1}
# paramgird = {"n_estimators": 50,
# "max_depth": 4,
# "learning_rate": 0.1,
# "subsample": 0.2,
# "min_child_weight": 6,
# "gamma": 1,
# "seed": 42,
# "objective": "multi:softmax"}
model = xgb.XGBClassifier(**paramgrid)
model.get_params().keys()
# skFold = StratifiedKFold(n_splits=5)
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.25, random_state=21)
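# Evaluation protocol (note): 5 random stratified 75/25 train/test splits.
# The scaler is re-fit on each training split only and then applied to the
# corresponding held-out split, so no scaling statistics leak from test to train.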
ml_method = 'xgb_mem'
print("="*100)
clf_models = []
report = {
    "Accuracy": [],
    "BalancedAccuracy": [],
"Jaccard": []
}
fold = 1
for train_index, test_index in sss.split(X, y):
# print("Train:", train_index, "Test:", test_index)
print("Starting fold {}".format(fold))
Xtrain, Xtest = X.iloc[train_index], X.iloc[test_index]
ytrain, ytest = y.iloc[train_index], y.iloc[test_index]
print("Scaling data ....")
Xtrain = scaler.fit_transform(Xtrain)
Xtest = scaler.transform(Xtest)
print("Start training ....")
eval_set = [(Xtest, ytest)]
model.fit(Xtrain, ytrain, early_stopping_rounds=10, eval_set=eval_set, verbose=0)
# model.fit(Xtrain, ytrain, verbose=True)
# sys.exit()
# Append model
clf_models.append(model)
print("Predicting ....")
ypred = model.predict(Xtest, ntree_limit=model.best_ntree_limit)
print("-"*100)
acc = accuracy_score(ytest, ypred)
report['Accuracy'].append(acc)
print("Accuracy score fold {} is: {}".format(fold, acc))
    bacc = balanced_accuracy_score(ytest, ypred)
    report['BalancedAccuracy'].append(bacc)
    print("Balanced accuracy fold {} score is: {}".format(fold, bacc))
jaccard = jaccard_score(ytest, ypred)
print("Jaccard score fold {}: {}".format(fold, jaccard))
report['Jaccard'].append(jaccard)
print("Full classification report for fold {}".format(fold))
print(classification_report(ytest, ypred, digits=4,target_names=definitions))
cf_report = classification_report(ytest, ypred, output_dict=True, digits=4, target_names=definitions)
df_classification_report = pd.DataFrame(cf_report).transpose()
print("Saving classification report")
classification_rep_name = "classification_{}_fold_{}.csv".format(ml_method, fold)
df_classification_report.to_csv(os.path.join(model_dir,classification_rep_name), index=False)
print("Generating confusion matrix fold {}".format(fold))
cf_matrix = confusion_matrix(ytest, ypred)
ht_cf=sns.heatmap(cf_matrix, annot=True, yticklabels=list(definitions), xticklabels=list(definitions))
    plt.title('Confusion Matrix Fold {}'.format(fold), fontsize = 15) # title
    plt.xlabel('Predictions', fontsize = 10) # x-axis: predicted labels
    plt.ylabel('Ground Truth', fontsize = 10) # y-axis: true labels
cf_fig = "CM_{}_{}.png".format(ml_method, fold)
ht_cf.figure.savefig(os.path.join(model_dir, cf_fig))
plt.show()
    print("Extracting Feature importance ...")
# xgb.plot_importance(model)
# plt.title("xgboost.plot_importance(model)")
# plt.show()
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
sorted_feature = feat_importances.sort_values(ascending=True)
# print(sorted_feature)
# Number of columns
sorted_feature = sorted_feature.tail(20)
n_col = len(sorted_feature)
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances Fold {}".format(fold), fontsize = 15)
plt.barh(range(n_col), sorted_feature,
color="r", align="center")
# If you want to define your own labels,
# change indices to a list of labels on the following line.
plt.yticks(range(n_col), sorted_feature.index)
plt.ylim([-1, n_col])
fi_fig = "FI_{}_{}.png".format(ml_method, fold)
plt.savefig(os.path.join(model_dir, fi_fig))
plt.show()
#increment fold count
fold+=1
print("#"*100)
print("Saving final report ...")
# Validation Report
df_report = pd.DataFrame(report)
final_report = "Model_{}_report.csv".format(ml_method)
df_report.to_csv(os.path.join(model_dir,final_report), index=False)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/mrklees/pgmpy/blob/feature%2Fcausalmodel/examples/Causal_Games.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Causal Games
Causal Inference is a new feature for pgmpy, so I wanted to develop a few examples which show off the features that we're developing!
This particular notebook walks through the 5 games that are used as examples for building intuition about backdoor paths in *The Book of Why* by Judea Pearl. I have consistently been using them to test different implementations of backdoor adjustment from different libraries and include them as unit tests in pgmpy, so I wanted to walk through them and a few other related games as a potential resource both to understand the implementation of CausalInference in pgmpy and to develop some useful intuitions about backdoor paths.
## Objective of the Games
For each game we get a causal graph, and our goal is to identify the set of deconfounders (often denoted $Z$) which will close all backdoor paths from nodes $X$ to $Y$. For the time being, I'll assume that you're familiar with the concept of backdoor paths, though I may expand this portion to explain it.
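For reference, once a valid adjustment set $Z$ has been identified, the interventional distribution can be estimated from observational data with Pearl's backdoor adjustment formula:
$$P(Y \mid do(X=x)) = \sum_{z} P(Y \mid X=x, Z=z)\,P(Z=z).$$
The games below are only about finding a valid $Z$; the actual estimation is not part of this notebook.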
```
import sys
!pip3 install -q daft
import matplotlib.pyplot as plt
%matplotlib inline
import daft
from daft import PGM
# We can now import the development version of pgmpy
from pgmpy.models.BayesianModel import BayesianModel
from pgmpy.inference.CausalInference import CausalInference
def convert_pgm_to_pgmpy(pgm):
"""Takes a Daft PGM object and converts it to a pgmpy BayesianModel"""
edges = [(edge.node1.name, edge.node2.name) for edge in pgm._edges]
model = BayesianModel(edges)
return model
#@title # Game 1
#@markdown While this is a "trivial" example, many statisticians would consider including either or both A and B in their models "just for good measure". Notice though how controlling for A would close off the path of causal information from X to Y, actually *impeding* your effort to measure that effect.
pgm = PGM(shape=[4, 3])
pgm.add_node(daft.Node('X', r"X", 1, 2))
pgm.add_node(daft.Node('Y', r"Y", 3, 2))
pgm.add_node(daft.Node('A', r"A", 2, 2))
pgm.add_node(daft.Node('B', r"B", 2, 1))
pgm.add_edge('X', 'A')
pgm.add_edge('A', 'Y')
pgm.add_edge('A', 'B')
pgm.render()
plt.show()
#@markdown Notice how there are no nodes with arrows pointing into X. Said another way, X has no parents. Therefore, there can't be any backdoor paths confounding X and Y. pgmpy will confirm this in the following way:
game1 = convert_pgm_to_pgmpy(pgm)
inference1 = CausalInference(game1)
print(f"Are there any active backdoor paths? {not inference1.is_valid_backdoor_adjustment_set('X', 'Y')}")
adj_sets = inference1.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {adj_sets}")
#@title # Game 2
#@markdown This graph looks harder, but actually is also trivial to solve. The key is noticing that the one backdoor path, which goes from X <- A -> B <- D -> E -> Y, has a collider at B (or a 'V structure'), and therefore the backdoor path is closed.
pgm = PGM(shape=[4, 4])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 1, 3))
pgm.add_node(daft.Node('B', r"B", 2, 3))
pgm.add_node(daft.Node('C', r"C", 3, 3))
pgm.add_node(daft.Node('D', r"D", 2, 2))
pgm.add_node(daft.Node('E', r"E", 2, 1))
pgm.add_edge('X', 'E')
pgm.add_edge('A', 'X')
pgm.add_edge('A', 'B')
pgm.add_edge('B', 'C')
pgm.add_edge('D', 'B')
pgm.add_edge('D', 'E')
pgm.add_edge('E', 'Y')
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {adj_sets}")
#@title # Game 3
#@markdown This game actually requires some action. Notice the backdoor path X <- B -> Y. This confounding pattern is one of the clearest signs that we'll need to control for something, in this case B.
pgm = PGM(shape=[4, 4])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 2, 1.75))
pgm.add_node(daft.Node('B', r"B", 2, 3))
pgm.add_edge('X', 'Y')
pgm.add_edge('X', 'A')
pgm.add_edge('B', 'A')
pgm.add_edge('B', 'X')
pgm.add_edge('B', 'Y')
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {adj_sets}")
#@title # Game 4
#@markdown Pearl named this particular configuration "M Bias", not only because of its shape, but also because of the common practice of statisticians to want to control for B in many situations. However, notice how in this configuration X and Y start out as *not confounded* and how by controlling for B we would actually introduce confounding by opening the path at the collider, B.
pgm = PGM(shape=[4, 4])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 1, 3))
pgm.add_node(daft.Node('B', r"B", 2, 2))
pgm.add_node(daft.Node('C', r"C", 3, 3))
pgm.add_edge('A', 'X')
pgm.add_edge('A', 'B')
pgm.add_edge('C', 'B')
pgm.add_edge('C', 'Y')
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {adj_sets}")
#@title # Game 5
#@markdown This last game in The Book of Why is the most complex. In this case we have two backdoor paths, one going through A and the other through B, and it's important to notice that if we only control for B then the path X <- A -> B <- C -> Y (which starts out as closed because B is a collider) is actually opened. Therefore we have to either close both A and B or, as astute observers will notice, we can also just close C and completely close both backdoor paths. pgmpy will nicely confirm these results for us.
pgm = PGM(shape=[4, 4])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 1, 3))
pgm.add_node(daft.Node('B', r"B", 2, 2))
pgm.add_node(daft.Node('C', r"C", 3, 3))
pgm.add_edge('A', 'X')
pgm.add_edge('A', 'B')
pgm.add_edge('C', 'B')
pgm.add_edge('C', 'Y')
pgm.add_edge("X", "Y")
pgm.add_edge("B", "X")
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {adj_sets}")
#@title # Game 6
#@markdown So these are no longer drawn from The Book of Why, but were either drawn from another source (which I will reference) or developed to try to induce a specific bug.
#@markdown This example is drawn from Causality by Pearl on p. 80. This example is kind of interesting because there are many possible combinations of nodes which will close the two backdoor paths which exist in this graph. It turns out that D plus any other node in {A, B, C, E} will deconfound X and Y.
pgm = PGM(shape=[4, 4])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 1, 3))
pgm.add_node(daft.Node('B', r"B", 3, 3))
pgm.add_node(daft.Node('C', r"C", 1, 2))
pgm.add_node(daft.Node('D', r"D", 2, 2))
pgm.add_node(daft.Node('E', r"E", 3, 2))
pgm.add_node(daft.Node('F', r"F", 2, 1))
pgm.add_edge('X', 'F')
pgm.add_edge('F', 'Y')
pgm.add_edge('C', 'X')
pgm.add_edge('A', 'C')
pgm.add_edge('A', 'D')
pgm.add_edge('D', 'X')
pgm.add_edge('D', 'Y')
pgm.add_edge('B', 'D')
pgm.add_edge('B', 'E')
pgm.add_edge('E', 'Y')
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
bd_adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {bd_adj_sets}")
fd_adj_sets = inference.get_all_frontdoor_adjustment_sets("X", "Y")
print(f"What are the possible frontdoor adjustment sets? {fd_adj_sets}")
#@title # Game 7
#@markdown This game tests the front door adjustment. B is taken to be unobserved, and therefore we cannot close the backdoor path X <- B -> Y.
pgm = PGM(shape=[4, 3])
pgm.add_node(daft.Node('X', r"X", 1, 1))
pgm.add_node(daft.Node('Y', r"Y", 3, 1))
pgm.add_node(daft.Node('A', r"A", 2, 1))
pgm.add_node(daft.Node('B', r"B", 2, 2))
pgm.add_edge('X', 'A')
pgm.add_edge('A', 'Y')
pgm.add_edge('B', 'X')
pgm.add_edge('B', 'Y')
pgm.render()
plt.show()
graph = convert_pgm_to_pgmpy(pgm)
inference = CausalInference(graph)
print(f"Are there any active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}")
bd_adj_sets = inference.get_all_backdoor_adjustment_sets("X", "Y")
print(f"If so, what are the possible backdoor adjustment sets? {bd_adj_sets}")
fd_adj_sets = inference.get_all_frontdoor_adjustment_sets("X", "Y")
print(f"What are the possible frontdoor adjustment sets? {fd_adj_sets}")
```
| github_jupyter |
```
import sys
import os
import math
import subprocess
import pandas as pd
import numpy as np
from tqdm import tqdm
import random
import torch
import torch.nn as nn
#Initialise the random seeds
def random_init(**kwargs):
random.seed(kwargs['seed'])
torch.manual_seed(kwargs['seed'])
torch.cuda.manual_seed(kwargs['seed'])
torch.backends.cudnn.deterministic = True
def normalise(text):
chars = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
text = text.upper()
words=[]
for w in text.strip().split():
if w.startswith('HTTP'):
continue
while len(w)>0 and w[0] not in chars:
w = w[1:]
while len(w)>0 and w[-1] not in chars:
w = w[:-1]
if len(w) == 0:
continue
words.append(w)
text=' '.join(words)
return text
def read_vocabulary(train_text, **kwargs):
vocab = dict()
counts = dict()
num_words = 0
for line in train_text:
line = (list(line.strip()) if kwargs['characters'] else line.strip().split())
for char in line:
if char not in vocab:
vocab[char] = num_words
counts[char] = 0
num_words+=1
counts[char] += 1
num_words = 0
vocab2 = dict()
if not kwargs['characters']:
for w in vocab:
if counts[w] >= args['min_count']:
vocab2[w] = num_words
num_words += 1
vocab = vocab2
for word in [kwargs['start_token'],kwargs['end_token'],kwargs['unk_token']]:
if word not in vocab:
vocab[word] = num_words
num_words += 1
return vocab
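# load_data builds a LongTensor of shape (2, max_len, num_sequences) of token
# indices (index len(vocab) acts as padding): slot 0 holds the premises and
# slot 1 the hypotheses, each wrapped in start/end tokens. With cv=True the
# data is additionally shuffled and split into training and validation parts.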
def load_data(premise, hypothesis, targets=None, cv=False, **kwargs):
assert len(premise) == len(hypothesis)
num_seq = len(premise)
max_words = max([len(t) for t in premise+hypothesis])+2
dataset = len(kwargs['vocab'])*torch.ones((2,max_words,num_seq),dtype=torch.long)
labels = torch.zeros((num_seq),dtype=torch.uint8)
idx = 0
utoken_value = kwargs['vocab'][kwargs['unk_token']]
for i,line in tqdm(enumerate(premise),desc='Allocating data memory',disable=(kwargs['verbose']<2)):
words = (list(line.strip()) if kwargs['characters'] else line.strip().split())
if len(words)==0 or words[0] != kwargs['start_token']:
words.insert(0,kwargs['start_token'])
if words[-1] != kwargs['end_token']:
words.append(kwargs['end_token'])
for jdx,word in enumerate(words):
dataset[0,jdx,idx] = kwargs['vocab'].get(word,utoken_value)
line=hypothesis[i]
words = (list(line.strip()) if kwargs['characters'] else line.strip().split())
if len(words)==0 or words[0] != kwargs['start_token']:
words.insert(0,kwargs['start_token'])
if words[-1] != kwargs['end_token']:
words.append(kwargs['end_token'])
for jdx,word in enumerate(words):
dataset[1,jdx,idx] = kwargs['vocab'].get(word,utoken_value)
if targets is not None:
labels[idx] = targets[i]
idx += 1
if cv == False:
return dataset, labels
idx = [i for i in range(num_seq)]
random.shuffle(idx)
trainset = dataset[:,:,idx[0:int(num_seq*(1-kwargs['cv_percentage']))]]
trainlabels = labels[idx[0:int(num_seq*(1-kwargs['cv_percentage']))]]
validset = dataset[:,:,idx[int(num_seq*(1-kwargs['cv_percentage'])):]]
validlabels = labels[idx[int(num_seq*(1-kwargs['cv_percentage'])):]]
return trainset, validset, trainlabels, validlabels
class LSTMEncoder(nn.Module):
def __init__(self, **kwargs):
super(LSTMEncoder, self).__init__()
#Base variables
self.vocab = kwargs['vocab']
self.in_dim = len(self.vocab)
self.start_token = kwargs['start_token']
self.end_token = kwargs['end_token']
self.unk_token = kwargs['unk_token']
self.characters = kwargs['characters']
self.embed_dim = kwargs['embedding_size']
self.hid_dim = kwargs['hidden_size']
self.n_layers = kwargs['num_layers']
#Define the embedding layer
self.embed = nn.Embedding(self.in_dim+1,self.embed_dim,padding_idx=self.in_dim)
#Define the lstm layer
self.lstm = nn.LSTM(input_size=self.embed_dim,hidden_size=self.hid_dim,num_layers=self.n_layers)
def forward(self, inputs, lengths):
#Inputs are size (LxBx1)
#Forward embedding layer
emb = self.embed(inputs)
#Embeddings are size (LxBxself.embed_dim)
        #Pack the sequences for the LSTM
packed = torch.nn.utils.rnn.pack_padded_sequence(emb, lengths)
        #Forward the LSTM
packed_rec, self.hidden = self.lstm(packed,self.hidden)
#Unpack the sequences
rec, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_rec)
#Hidden outputs are size (LxBxself.hidden_size)
#Get last embeddings
out = rec[lengths-1,list(range(rec.shape[1])),:]
#Outputs are size (Bxself.hid_dim)
return out
def init_hidden(self, bsz):
#Initialise the hidden state
weight = next(self.parameters())
self.hidden = (weight.new_zeros(self.n_layers, bsz, self.hid_dim),weight.new_zeros(self.n_layers, bsz, self.hid_dim))
def detach_hidden(self):
#Detach the hidden state
self.hidden=(self.hidden[0].detach(),self.hidden[1].detach())
def cpu_hidden(self):
#Set the hidden state to CPU
self.hidden=(self.hidden[0].detach().cpu(),self.hidden[1].detach().cpu())
class Predictor(nn.Module):
def __init__(self, **kwargs):
super(Predictor, self).__init__()
self.hid_dim = kwargs['hidden_size']*2
self.out_dim = 3
#Define the output layer and softmax
self.linear = nn.Linear(self.hid_dim,self.out_dim)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self,input1,input2):
#Outputs are size (Bxself.hid_dim)
inputs = torch.cat((input1,input2),dim=1)
out = self.softmax(self.linear(inputs))
return out
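# Training loop: each batch is sorted by sequence length (as required by
# pack_padded_sequence), premise and hypothesis are encoded separately by the
# shared LSTM encoder, the original ordering is restored, and both sentence
# embeddings are passed to the predictor. After the first 100 batches the
# reported loss is an exponentially smoothed running estimate.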
def train_model(trainset,trainlabels,encoder,predictor,optimizer,criterion,**kwargs):
trainlen = trainset.shape[2]
nbatches = math.ceil(trainlen/kwargs['batch_size'])
total_loss = 0
total_backs = 0
with tqdm(total=nbatches,disable=(kwargs['verbose']<2)) as pbar:
encoder = encoder.train()
for b in range(nbatches):
#Data batch
X1 = trainset[0,:,b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])
mask1 = torch.clamp(len(kwargs['vocab'])-X1,max=1)
seq_length1 = torch.sum(mask1,dim=0)
ordered_seq_length1, dec_index1 = seq_length1.sort(descending=True)
max_seq_length1 = torch.max(seq_length1)
X1 = X1[:,dec_index1]
X1 = X1[0:max_seq_length1]
rev_dec_index1 = list(range(seq_length1.shape[0]))
for i,j in enumerate(dec_index1):
rev_dec_index1[j] = i
X2 = trainset[1,:,b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])
mask2 = torch.clamp(len(kwargs['vocab'])-X2,max=1)
seq_length2 = torch.sum(mask2,dim=0)
ordered_seq_length2, dec_index2 = seq_length2.sort(descending=True)
max_seq_length2 = torch.max(seq_length2)
X2 = X2[:,dec_index2]
X2 = X2[0:max_seq_length2]
rev_dec_index2 = list(range(seq_length2.shape[0]))
for i,j in enumerate(dec_index2):
rev_dec_index2[j] = i
Y = trainlabels[b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])
#Forward pass
encoder.init_hidden(X1.size(1))
embeddings1 = encoder(X1,ordered_seq_length1)
encoder.detach_hidden()
encoder.init_hidden(X2.size(1))
embeddings2 = encoder(X2,ordered_seq_length2)
embeddings1 = embeddings1[rev_dec_index1]
embeddings2 = embeddings2[rev_dec_index2]
posteriors = predictor(embeddings1,embeddings2)
loss = criterion(posteriors,Y)
#Backpropagate
optimizer.zero_grad()
loss.backward()
optimizer.step()
#Estimate the latest loss
if total_backs == 100:
total_loss = total_loss*0.99+loss.detach().cpu().numpy()
else:
total_loss += loss.detach().cpu().numpy()
total_backs += 1
encoder.detach_hidden()
pbar.set_description(f'Training epoch. Loss {total_loss/(total_backs+1):.2f}')
pbar.update()
return total_loss/(total_backs+1)
def evaluate_model(testset,encoder,predictor,**kwargs):
testlen = testset.shape[2]
nbatches = math.ceil(testlen/kwargs['batch_size'])
predictions = np.zeros((testlen,))
with torch.no_grad():
encoder = encoder.eval()
for b in range(nbatches):
#Data batch
X1 = testset[0,:,b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])
mask1 = torch.clamp(len(kwargs['vocab'])-X1,max=1)
seq_length1 = torch.sum(mask1,dim=0)
ordered_seq_length1, dec_index1 = seq_length1.sort(descending=True)
max_seq_length1 = torch.max(seq_length1)
X1 = X1[:,dec_index1]
X1 = X1[0:max_seq_length1]
rev_dec_index1 = list(range(seq_length1.shape[0]))
for i,j in enumerate(dec_index1):
rev_dec_index1[j] = i
X2 = testset[1,:,b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])
mask2 = torch.clamp(len(kwargs['vocab'])-X2,max=1)
seq_length2 = torch.sum(mask2,dim=0)
ordered_seq_length2, dec_index2 = seq_length2.sort(descending=True)
max_seq_length2 = torch.max(seq_length2)
X2 = X2[:,dec_index2]
X2 = X2[0:max_seq_length2]
rev_dec_index2 = list(range(seq_length2.shape[0]))
for i,j in enumerate(dec_index2):
rev_dec_index2[j] = i
#Forward pass
encoder.init_hidden(X1.size(1))
embeddings1 = encoder(X1,ordered_seq_length1)
encoder.init_hidden(X2.size(1))
embeddings2 = encoder(X2,ordered_seq_length2)
embeddings1 = embeddings1[rev_dec_index1]
embeddings2 = embeddings2[rev_dec_index2]
posteriors = predictor(embeddings1,embeddings2)
#posteriors = model(X,ordered_seq_length)
estimated = torch.argmax(posteriors,dim=1)
predictions[b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])] = estimated.detach().cpu().numpy()
return predictions
#Arguments
args = {
'cv_percentage': 0.1,
'epochs': 20,
'batch_size': 128,
'embedding_size': 16,
'hidden_size': 64,
'num_layers': 1,
'learning_rate': 0.01,
'seed': 0,
'start_token': '<s>',
    'end_token': '</s>',
'unk_token': '<UNK>',
'verbose': 1,
'characters': False,
'min_count': 15,
'device': torch.device(('cuda:0' if torch.cuda.is_available() else 'cpu'))
}
#Read data
train_data = pd.read_csv('/kaggle/input/contradictory-my-dear-watson/train.csv')
test_data = pd.read_csv('/kaggle/input/contradictory-my-dear-watson/test.csv')
#Extract only English language cases
train_data = train_data.loc[train_data['language']=='English']
test_data = test_data.loc[test_data['language']=='English']
#Extract premises and hypothesis
train_premise = [normalise(v) for v in train_data.premise.values]
train_hypothesis = [normalise(v) for v in train_data.hypothesis.values]
test_premise = [normalise(v) for v in test_data.premise.values]
test_hypothesis = [normalise(v) for v in test_data.hypothesis.values]
train_targets = train_data.label.values
print('Training: {0:d} pairs in English. Evaluation: {1:d} pairs in English'.format(len(train_premise),len(test_premise)))
print('Label distribution in training set: {0:s}'.format(str({i:'{0:.2f}%'.format(100*len(np.where(train_targets==i)[0])/len(train_targets)) for i in [0,1,2]})))
batch_sizes = [64,128,256]
min_counts = [5,15,25]
it_idx = 0
valid_predictions = dict()
test_predictions = dict()
valid_accuracies = dict()
for batch_size in batch_sizes:
for min_count in min_counts:
args['batch_size'] = batch_size
args['min_count'] = min_count
random_init(**args)
#Make vocabulary and load data
args['vocab'] = read_vocabulary(train_premise+train_hypothesis, **args)
#print('Vocabulary size: {0:d} tokens'.format(len(args['vocab'])))
trainset, validset, trainlabels, validlabels = load_data(train_premise, train_hypothesis, train_targets, cv=True, **args)
testset, _ = load_data(test_premise, test_hypothesis, None, cv=False, **args)
#Create model, optimiser and criterion
encoder = LSTMEncoder(**args).to(args['device'])
predictor = Predictor(**args).to(args['device'])
optimizer = torch.optim.Adam(list(encoder.parameters())+list(predictor.parameters()),lr=args['learning_rate'])
criterion = nn.NLLLoss(reduction='mean').to(args['device'])
#Train epochs
best_acc = 0.0
for ep in range(1,args['epochs']+1):
loss = train_model(trainset,trainlabels,encoder,predictor,optimizer,criterion,**args)
val_pred = evaluate_model(validset,encoder,predictor,**args)
test_pred = evaluate_model(testset,encoder,predictor,**args)
acc = 100*len(np.where((val_pred-validlabels.numpy())==0)[0])/validset.shape[2]
if acc >= best_acc:
best_acc = acc
best_epoch = ep
best_loss = loss
valid_predictions[it_idx] = val_pred
valid_accuracies[it_idx] = acc
test_predictions[it_idx] = test_pred
print('Run {0:d}. Best epoch: {1:d} of {2:d}. Training loss: {3:.2f}, validation accuracy: {4:.2f}%, test label distribution: {5:s}'.format(it_idx+1,best_epoch,args['epochs'],best_loss,best_acc,str({i:'{0:.2f}%'.format(100*len(np.where(test_pred==i)[0])/len(test_pred)) for i in [0,1,2]})))
it_idx += 1
#Do the score combination
best_epochs = np.argsort([valid_accuracies[ep] for ep in range(it_idx)])[::-1]
val_pred = np.array([valid_predictions[ep] for ep in best_epochs[0:5]])
val_pred = np.argmax(np.array([np.sum((val_pred==i).astype(int),axis=0) for i in [0,1,2]]),axis=0)
test_pred = np.array([test_predictions[ep] for ep in best_epochs[0:5]])
test_pred = np.argmax(np.array([np.sum((test_pred==i).astype(int),axis=0) for i in [0,1,2]]),axis=0)
acc = 100*len(np.where((val_pred-validlabels.numpy())==0)[0])/validset.shape[2]
print('Ensemble. Cross-validation accuracy: {0:.2f}%, test label distribution: {1:s}'.format(acc,str({i:'{0:.2f}%'.format(100*len(np.where(test_pred==i)[0])/len(test_pred)) for i in [0,1,2]})))
#Set all predictions to the majority category
df_out = pd.DataFrame({'id': pd.read_csv('/kaggle/input/contradictory-my-dear-watson/test.csv')['id'], 'prediction': np.argmax([len(np.where(train_targets==i)[0]) for i in [0,1,2]])})
#Set only English language cases to the predicted labels
df_out.loc[df_out['id'].isin(test_data['id']),'prediction']=test_pred
df_out.to_csv('/kaggle/working/submission.csv', index=False)
```
| github_jupyter |
```
import sys
import pickle
from scipy import signal
from scipy import stats
import numpy as np
from sklearn.model_selection import ShuffleSplit
import socket
import time
import math
from collections import OrderedDict
import matplotlib.pyplot as plt
sys.path.append('D:\Diamond\code')
from csp_james_2 import *
sys.path.append('D:\Diamond\code')
from thesis_funcs_19_03 import *
import csv
import datetime
from random import randint
import random
import matplotlib.image as mpimg
%matplotlib auto
```
# Define classes and how many trials per class
```
C_OVR = [0,1] #MI classes, [0,1,2,3] for left hand, right hand, feet, tongue
_classes = C_OVR*10 #*trials per MI class
random.shuffle(_classes) #randomize sequence of MI classes
fileroot = 'E:\\Diamond\\own_expo\\'
filewrite = open(fileroot + 'record.txt','w')
filewrite.write('')
filewrite.close()
file_cross = open(fileroot + 'cross_sign.txt','w')
file_cross.write('0')
file_cross.close()
endgame = open(fileroot + 'endgame.txt','w')
endgame.write('0')
endgame.close()
filewrite = open(fileroot + 'record.txt','a')
plt.ioff()
end = 0
%matplotlib auto
plt.ioff()
n = 0
reso = 0.001
reso1 = 0.2
pltpause = 0.05
######################### Connect to Openvibe aquisition server (AS) ########################################################
# host and port of tcp tagging server
HOST = '127.0.0.1' #local machine address
PORT = 15361 #port to connect to AS
# transform a value into an array of byte values in little-endian order.
def to_byte(value, length):
for x in range(length):
yield value%256
value//=256
# connect
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
#padding of zeros to keep length of tag consistant
padding=[0]*8
#############################################################################################################################
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
fig.canvas.draw()
img = mpimg.imread('E:\\Diamond\\cues\\black.png')
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
t0 = datetime.datetime.now()
filewrite.write('rest,' + str(t0) + '\n')
cross = t0 + datetime.timedelta(0,randint(0,5)/10 + 6)
cue = cross + datetime.timedelta(0,1)
rest = cue + datetime.timedelta(0,4)
cross_exed = 0
cue_exed = 0
rest_exed = 0
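# Main presentation loop: for every trial show the fixation cross, then the MI
# cue (also sending the matching stimulation code to the OpenViBE acquisition
# server over TCP), then a blank rest screen; every event is timestamped in
# record.txt.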
while n < len(_classes):
d_cross = (datetime.datetime.now() - cross).total_seconds()
if cross_exed == 0 and (abs(d_cross) <= reso or d_cross>=reso):
#if cross_exed == 0 and np.abs(cross - datetime.datetime.now()).total_seconds() < reso:
img = mpimg.imread('E:\\Diamond\\cues\\fixation.png')
ax.clear()
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
filewrite.write('cross,' + str(datetime.datetime.now()) + '\n')
file_cross = open(fileroot + 'cross_sign.txt','w')
file_cross.write('1')
file_cross.close()
print ('cross', datetime.datetime.now())
print ('_classes', _classes[n])
cross_exed = 1
"""
elif cross_exed == 0 and (datetime.datetime.now()-cross).total_seconds() < 0.2 and (datetime.datetime.now()-cross).total_seconds() > reso:
ax.clear()
img = mpimg.imread('E:\\Diamond\\cues\\fixation.png')
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
filewrite.write('cross,' + str(datetime.datetime.now()) + '\n')
print ('cross1', datetime.datetime.now())
print ('_classes', _classes[n])
cross_exed = 1
file_cross = open(fileroot + 'cross_sign.txt','w')
file_cross.write('1')
file_cross.close()
"""
d_cue = (datetime.datetime.now() - cue).total_seconds()
#if cue_exed == 0 and np.abs(cue - datetime.datetime.now()).total_seconds() < reso:
if cue_exed == 0 and (abs(d_cue) <= reso or d_cue>=reso):
if _classes[n] == 0:
img = mpimg.imread('E:\\Diamond\\cues\\left_hand.png')
EVENT_ID = 0x441 #LEFT HAND (1089) EVENT_IDs are used to tag eeg streams in openvibe, IDs are pre-defined at http://openvibe.inria.fr/stimulation-codes/
elif _classes[n] == 1:
img = mpimg.imread('E:\\Diamond\\cues\\right_hand.png')
EVENT_ID = 0x442 #RIGHT HAND (1090)
elif _classes[n] == 2:
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
EVENT_ID = 0x303 #FOOT (FEET) (771)
elif _classes[n] == 3:
img = mpimg.imread('E:\\Diamond\\cues\\tongue.jpg')
EVENT_ID = 0x304 #TONGUE (772)
ax.clear()
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
#same time tag for record.txt and openvibe eeg streams
time_datetime = datetime.datetime.now()
time_unix = time.time()
filewrite.write('cue,' + str(_classes[n]) + ',' + str(time_datetime) + '\n')
#event_id to byte format
event_id=list(to_byte(EVENT_ID, 8))
# timestamp can be either the posix time in ms, or 0 to let the acquisition server timestamp the tag itself.
timestamp=list(to_byte(int(time_unix*1000), 8))
#send tag to openvibe, tag is padding + event_id + timestamp in unix time, in uint64 format
s.sendall(bytearray(padding+event_id+timestamp))
print ('cue', datetime.datetime.now())
print ('_classes', _classes[n])
cue_exed = 1
"""
elif cue_exed == 0 and (datetime.datetime.now()-cue).total_seconds() < 0.2 and (datetime.datetime.now()-cue).total_seconds() > reso:
if _classes[n] == 0:
img = mpimg.imread('E:\\Diamond\\cues\\left_hand.png')
elif _classes[n] == 1:
img = mpimg.imread('E:\\Diamond\\cues\\right_hand.png')
elif _classes[n] == 2:
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
elif _classes[n] == 3:
img = mpimg.imread('E:\\Diamond\\cues\\tongue.jpg')
ax.clear()
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
filewrite.write('cue,' + str(_classes[n]) + ',' + str(datetime.datetime.now()) + '\n')
print ('cue1', datetime.datetime.now())
print ('_classes', _classes[n])
cue_exed = 1
"""
d_rest = (datetime.datetime.now() - rest).total_seconds()
#if rest_exed == 0 and np.abs(rest - datetime.datetime.now()).total_seconds() < reso:
if rest_exed == 0 and (abs(d_rest) <= reso or d_rest>=reso):
img = mpimg.imread('E:\\Diamond\\cues\\black.png')
ax.clear()
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
filewrite.write('rest,' + str(datetime.datetime.now()) + '\n')
print ('rest', datetime.datetime.now())
print ('_classes', _classes[n])
rest_exed = 1
"""
elif rest_exed == 0 and (datetime.datetime.now()-rest).total_seconds() < 0.2 and (datetime.datetime.now()-rest).total_seconds() > reso:
img = mpimg.imread('E:\\Diamond\\cues\\black.png')
ax.clear()
ax.imshow(img)
fig.canvas.draw()
plt.pause(pltpause)
filewrite.write('rest,' + str(datetime.datetime.now()) + '\n')
print ('rest1', datetime.datetime.now())
print ('_classes', _classes[n])
rest_exed = 1
"""
if cross_exed == 1 and cue_exed==1 and rest_exed == 1:
cross = rest + datetime.timedelta(0,randint(0,5)/10 + 6)
cue = cross + datetime.timedelta(0,1)
rest = cue + datetime.timedelta(0,4)
cross_exed = 0
cue_exed = 0
rest_exed = 0
n = n +1
print (n)
filewrite.close()
s.close()
end = 1
if end == 1:
endgame = open(fileroot + 'endgame.txt','w')
endgame.write('1')
endgame.close()
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
fig.canvas.draw()
t0 = datetime.datetime.now()
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
ax.imshow(img)
fig.show()
plotted = 0
while (datetime.datetime.now()-t0).total_seconds() < 5:
if plotted == 0 and (datetime.datetime.now()-t0).total_seconds() > 2:
ax.clear()
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
        print('redrawing cue')
ax.imshow(img)
plotted = 1
fig.canvas.draw()
#plt.ion()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
fig.canvas.draw()
img1 = mpimg.imread('E:\\Diamond\\cues\\feet.png')
img2 = mpimg.imread('E:\\Diamond\\cues\\black.png')
ax.imshow(img1)
fig.canvas.draw()
plt.pause(0.1)
ax.clear()
ax.imshow(img2)
fig.canvas.draw()
%matplotlib auto
%matplotlib
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
ax.imshow(img)
fig.show()
img
_classes
import matplotlib.image as mpimg
img = mpimg.imread('E:\\Diamond\\cues\\fixation.png')
plt.figure(1)
plt.imshow(img)
plt.clf()
img = mpimg.imread('E:\\Diamond\\cues\\feet.png')
plt.imshow(img)
rest
timer = threading.Timer(10000, print, args=('cue',))  # pass the callable and its args separately; print('cue') would run immediately
timer.start()
cue
t0
np.abs((datetime.datetime.now() - (datetime.datetime.now()+datetime.timedelta(0,4))).total_seconds())
x=datetime.datetime.today()
y=x.replace(day=x.day+1, hour=1, minute=0, second=0, microsecond=0)
x
y
t0 + datetime.timedelta(0,60)
t0
t0 = datetime.datetime.now()
dt = (datetime.datetime.now() - t0)
while dt.total_seconds() <= 5:
dt = (datetime.datetime.now() - t0)
print (dt.total_seconds(), datetime.datetime.now())
t0
```
| github_jupyter |
# Bounding Box Visualizer
```
try:
import cv2
except ImportError:
cv2 = None
COLORS = [
"#6793be", "#990000", "#00ff00", "#ffbcc9", "#ffb9c7", "#fdc6d1",
"#fdc9d3", "#6793be", "#73a4d4", "#9abde0", "#9abde0", "#8fff8f", "#ffcfd8", "#808080", "#808080",
"#ffba00", "#6699ff", "#009933", "#1c1c1c", "#08375f", "#116ebf", "#e61d35", "#106bff", "#8f8fff",
"#8fff8f", "#dbdbff", "#dbffdb", "#dbffff", "#ffdbdb", "#ffc2c2", "#ffa8a8", "#ff8f8f", "#e85e68",
"#123456", "#5cd38c", "#1d1f5f", "#4e4b04", "#495a5b", "#489d73", "#9d4872", "#d49ea6", "#ff0080",
"#6793be", "#990000", "#fececf", "#ffbcc9", "#ffb9c7", "#fdc6d1",
"#fdc9d3", "#6793be", "#73a4d4", "#9abde0", "#9abde0", "#8fff8f", "#ffcfd8", "#808080", "#808080",
"#ffba00", "#6699ff", "#009933", "#1c1c1c", "#08375f", "#116ebf", "#e61d35", "#106bff", "#8f8fff",
"#8fff8f", "#dbdbff", "#dbffdb", "#dbffff", "#ffdbdb", "#ffc2c2", "#ffa8a8", "#ff8f8f", "#e85e68",
"#123456", "#5cd38c", "#1d1f5f", "#4e4b04", "#495a5b", "#489d73", "#9d4872", "#d49ea6", "#ff0080"
]
def hex_to_rgb(color_hex):
color_hex = color_hex.lstrip('#')
color_rgb = tuple(int(color_hex[i:i+2], 16) for i in (0, 2, 4))
return color_rgb
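# Quick sanity check (illustrative): hex_to_rgb("#6793be") -> (103, 147, 190)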
def annotate_image(image, detection):
""" Annotate images with object detection results
# Arguments:
image: numpy array representing the image used for detection
detection: `DetectionResult` result from SKIL on the same image
# Return value:
annotated image as numpy array
"""
if cv2 is None:
raise Exception("OpenCV is not installed.")
objects = detection.get('objects')
if objects:
for detect in objects:
confs = detect.get('confidences')
max_conf = max(confs)
max_index = confs.index(max_conf)
classes = detect.get('predictedClasses')
max_class = classes[max_index]
class_number = detect.get('predictedClassNumbers')[max_index]
h = detect.get('height')
w = detect.get('width')
center_x = detect.get('centerX')
center_y = detect.get('centerY')
color_hex = COLORS[class_number]
r, g, b = hex_to_rgb(color_hex)
color_rgb = (b, g, r)  # reordered to BGR, the channel order OpenCV expects
# bounding box
xmin, ymin = int(center_x - w/2), int(center_y - h/2)
xmax, ymax = int(center_x + w/2), int(center_y + h/2)
upper = (xmin, ymin)
lower = (xmax, ymax)
cv2.rectangle(image, lower, upper, color_rgb, thickness=3)
# bounding box label: class_name: confidence
text = max_class + ": " + str(int(100*max(confs)))+"%"
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 0.7
# get text size
size = cv2.getTextSize(text, font, fontScale+0.1, thickness=2)
text_width = size[0][0]
text_height = size[0][1]
# text-box background
cv2.rectangle(image,
(xmin-2, ymin),
(xmin+text_width, ymin-35), color_rgb, thickness=-1)
cv2.putText(image, text, (xmin, ymin-10), font, fontScale, color=0, thickness=2)
return image
import json
import matplotlib.pyplot as plt
%matplotlib inline
with open('detections/img-5.json') as FILE:
detections = json.load(FILE)
print(json.dumps(detections['objects'][0], indent=4))
image = annotate_image(cv2.imread("images/img-5.jpg"), detections)
cv2.imwrite('images/annotated.jpg', image)
plt.figure(figsize=(8,8))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
image.shape
for k, detection in enumerate(detections['objects']):
predicted = detection['predictedClasses'][0]
confidence = detection['confidences'][0]
print('{}: [{}, {:.5}]'.format(k+1, predicted, confidence))
len(COLORS)
```
| github_jupyter |
# Chapter 8 Lists
### 8.1 A list is a sequence
```
# list of integers
[10, 20, 30, 40]
# list of strings
['frog','toad','salamander','newt']
# mixed list
[10,'twenty',30.0,[40, 45]]
cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']
numbers = [27, 42]
empty = []
print(cheeses, numbers, empty)
```
### 8.2 Lists are mutable
```
numbers = [27, 62]
numbers[1] = 42
print(numbers)
cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']
'Swiss' in cheeses
'Brie' in cheeses
```
### 8.3 Traversing a list
```
cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']
for cheese in cheeses:
print(cheese)
numbers = [10, 20, 30, 40]
for i in range(len(numbers)):
numbers[i] = numbers[i] + 2
print(numbers)
empty = []
for x in empty:
print('This never happens.')
len(['spam', 1, ['sun','moon','stars'], [1,2,3]])
```
### 8.4 List operations
```
a = [1,2,3]
b = [4,5,6]
c = a + b
print(c)
[0]*4
[1,2,3]*3
```
### 8.5 List slices
```
t = ['a','b','c','d','e','f']
t[1:3]
t[:4]
t[3:]
t[:]
t[1:3] = ['x','y']
print(t)
```
### 8.6 List methods
```
t = ['a','b','c']
t.append('d')
print(t)
t1 = ['a','b','c']
t2 = ['d','e']
t1.extend(t2)
print(t1)
t = ['d','c','e','b','a']
t.sort()
print(t)
```
### 8.7 Deleting elements
```
t = ['a','b','c']
x = t.pop(1)
print(t)
print(x)
t = ['a','b','c']
del t[1]
print(t)
t = ['a','b','c']
x = t.remove('b')
print(t)
t = ['a','b','c','b']
x = t.remove('b') #removes first instance
print(t)
t = ['a', 'b', 'c', 'd', 'e','f']
del t[1:5]
print(t)
```
### 8.8 Lists and functions
```
nums = [3, 41, 12, 9, 74, 15]
print(len(nums))
print(max(nums))
print(min(nums))
print(sum(nums))
print(sum(nums)/len(nums))
numlist = list()
while(True):
inp = input('Enter a number: ')
if inp == 'done': break
try:
value = float(inp)
numlist.append(value)
except:
print('That was not a number. Continue...')
continue
average = sum(numlist)/len(numlist)
print('Average:', average)
```
### 8.9 Lists and strings
```
s = 'spam'
t = list(s)
print(t)
s = 'That there’s some good in this world, Mr. Frodo… and it’s worth fighting for.'
t = s.split()
print(t)
print(t[3])
s = 'spam-spam-spam'
delimiter = '-'
s.split(delimiter)
t = ['That', 'there’s', 'some', 'good', 'in', 'this', 'world,', 'Mr.', 'Frodo…', 'and', 'it’s', 'worth', 'fighting', 'for.']
delimiter = ' '
delimiter.join(t)
```
### 8.10 Parsing lines
```
fhand = open('mbox-short.txt')
for line in fhand:
line = line.rstrip()
if not line.startswith('From '): continue
words = line.split()
print(words[2])
```
### 8.11 Objects and values
```
# same object, two variables pointing to the same object
a = 'banana'
b = 'banana'
a is b
# same value because a and b point to the same object
a == b
# two separate objects
a = [1,2,3]
b = [1,2,3]
a is b
# same value
a == b
```
### 8.12 Aliasing
```
# b points to the same object as a
a = [1,2,3]
b = a
b is a
b[0] = 17
print(a)
```
### 8.13 List arguments
```
def delete_head(t):
del t[0]
letters = ['a','b','c']
delete_head(letters)
print(letters)
t1 = [1,2]
t2 = t1.append(3)
print(t1)
print(t2)
t1 = [1,2]
t3 = t1 + [3]
print(t3)
# changes t within the scope of the function but not
# the value of the variable passed to it
def bad_delete_head(t):
t = t[1:]
def tail(t):
return t[1:]
letters = ['a','b','c']
rest = tail(letters)
print(rest)
```
| github_jupyter |
## Creating a column chart for your dashboard
In this chapter, you will start to put together your own dashboard.
Your first step is to create a basic column chart showing fatalities, injured, and uninjured statistics for the states of Australia over the last 100 years.
Instructions
1. In `A1` of `Sheet1`, use a formula that refers to the heading in your `Shark Attacks` dataset. This can take the form of `='Sheet Name'!A1`.
2. In your `Shark Attacks` dataset, select the `State` column and the `Fatal`, `Injured`, and `Uninjured` statistics, and then create a chart.
3. Copy and paste this chart to `Sheet 1` and change the chart to a column chart.
## Format chart, axis titles and series
Your next task is to apply some basic formatting to the same chart to jazz it up and make it a bit more pleasing to the reader's eye.
Instructions
1. Alter the title of your chart so that it now reads "Fatal, Injured, and Uninjured Statistics".
2. The color of the title text is now a light grey. Let's change the color to black and make the font **bold**.
1. Double-clicking the chart title will allow you to adapt the text and the formatting options in the Chart Editor to the right.
3. While you're at it, change the series colors accordingly: `Fatal`: red, `Injured`: blue, and `Uninjured`: green.
1. You can double-click each series to again open up the Chart Editor, and then select a new color for the series.
## Removing a series
Taking things a little further, in the next task you will manipulate the look of your chart a bit more and remove the Uninjured statistical data.
Instructions
1. Remove the `Uninjured` series from your chart.
## Changing the plotted range
It's just as easy to change a range as it is to delete it. For this task, have a go at changing the range of your chart so it now only showcases the top 3 states' fatal statistics.
Instructions
1. Remove the `Injured` series from your chart so you are **only** plotting the `Fatal` series.
2. Change the data range so that you are only plotting the first **three states** with the highest number of fatalities.
1. To do this, you will need to use the chart editor to adjust the existing data ranges `'Shark Attacks'!A1:A9` and `'Shark Attacks'!C1:C9` so that the chart only displays fatalities from `NSW`, `QLD`, and `WA`.
3. Finally, change the chart title to 'Top 3 States Number of Fatalities'.
## Using named ranges
In this task you are going to find an existing range and insert a blank row within the range.
Instructions
1. Select Data then Named ranges and click on the `SharkStats` Named range to see the highlighted range.
2. Insert a blank row after row `2`.
## Summing using a named range
In addition to being a handy way of keeping track of a range of cells, named ranges can also be used in formulas.
For example, using the formula `=AVERAGE(Total)` would return the average of the totals contained within the `Total` named range.
In this task you will remove a blank row and use the named range `Total` in a formula.
Instructions
1. Remove the blank row you inserted in the last exercise.
2. In `B10`, use the `SUM()` function to aggregate the `Total` named range.
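For instance (assuming the named range is called `Total`, as described above), a formula of the form `=SUM(Total)` entered in `B10` would aggregate the whole range.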
## Averaging using a named range
In this task you will use a named range within a formula to find an average.
Instructions
1. In `C11` use the `Fatalities` named range instead of cell references to average the number of fatalities.
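Analogously to the earlier `=AVERAGE(Total)` example, a formula of the form `=AVERAGE(Fatalities)` in `C11` would do this, assuming the `Fatalities` named range is already defined.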
| github_jupyter |
## Fibonacci Rabbits
Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that:
1. A single newly born pair of rabbits (one male, one female) are put in a field;
2. Rabbits are able to mate at the age of one month so that at the end of its second month a female can produce another pair of rabbits;
3. Rabbits never die and a mating pair always produces one new pair (one male, one female) every month from the second month on.
The puzzle that Fibonacci posed was: how many pairs will there be in a given time period? If F(n) denotes the number of pairs after n months, the rules above give the recurrence F(n) = F(n-1) + F(n-2): all of last month's pairs survive, and every pair that is at least two months old produces one new pair. The implementations below track the infant and mature counts directly.
```
def populate_iterative(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):
"""
This function calculates the population of infant and mature rabbit population of
Fibonacci rabbits after a given amount of time(remaining_months) by utilizing a
while loop. The function assumes there is an infant pair at month=0.
"""
while remaining_months > 0:
month_count += 1
remaining_months -= 1
infant_num, mature_num = mature_num, infant_num + mature_num
print(f"Month {month_count}: {infant_num:2} infant, {mature_num:2} mature")
print(f"Total population: {(infant_num + mature_num)}")
return infant_num + mature_num
def populate_iterative_timeit(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):
"""
This function calculates the population of infant and mature rabbit population of
Fibonacci rabbits after a given amount of time(remaining_months) by utilizing a
while loop. The function assumes there is an infant pair at month=0.
"""
while remaining_months > 0:
month_count += 1
remaining_months -= 1
infant_num, mature_num = mature_num, infant_num + mature_num
return infant_num + mature_num
def populate_recursive(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):
"""
This function calculates the population of infant and mature rabbit population of
Fibonacci rabbits after a given amount of time(remaining_months) recursively.
The function assumes there is an infant pair at month=0.
"""
if remaining_months > 0:
month_count += 1
infant_num, mature_num = mature_num, infant_num + mature_num
print(f"Month {month_count}: {infant_num:2} infant, {mature_num:2} mature")
remaining_months -= 1
return populate_recursive(remaining_months, infant_num, mature_num, month_count)
else:
print(f"Total population: {(infant_num + mature_num)}")
return infant_num + mature_num
def populate_recursive_timeit(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):
"""
This function calculates the population of infant and mature rabbit population of
Fibonacci rabbits after a given amount of time(remaining_months) recursively.
The function assumes there is an infant pair at month=0.
"""
if remaining_months > 0:
month_count += 1
infant_num, mature_num = mature_num, infant_num + mature_num
remaining_months -= 1
return populate_recursive_timeit(remaining_months, infant_num, mature_num, month_count)
else:
return infant_num + mature_num
populate_iterative(24)
populate_recursive(24)
%timeit populate_iterative_timeit(24)
%timeit populate_recursive_timeit(24)
```
The iterative algorithm is faster than the recursive one. Both perform the same arithmetic (one update per month), so the gap comes from the overhead of the extra Python function calls in the recursive version, which would also hit the recursion limit for very large month counts.
| github_jupyter |
# Analyze Data Quality with SageMaker Processing Jobs and Spark
Typically, a machine learning (ML) process consists of a few steps: first, gathering data with various ETL jobs; then pre-processing the data; featurizing the dataset by incorporating standard techniques or prior knowledge; and finally training an ML model using an algorithm.
Often, distributed data processing frameworks such as Spark are used to process and analyze data sets in order to detect data quality issues and prepare them for model training.
In this notebook we'll use Amazon SageMaker Processing with a library called [**Deequ**](https://github.com/awslabs/deequ), and leverage the power of Spark with a managed SageMaker Processing Job to run our data processing workloads.
Here are some great resources on Deequ:
* Blog Post: https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/
* Research Paper: https://assets.amazon.science/4a/75/57047bd343fabc46ec14b34cdb3b/towards-automated-data-quality-management-for-machine-learning.pdf


# Amazon Customer Reviews Dataset
https://s3.amazonaws.com/amazon-reviews-pds/readme.html
### Dataset Columns:
- `marketplace`: 2-letter country code (in this case all "US").
- `customer_id`: Random identifier that can be used to aggregate reviews written by a single author.
- `review_id`: A unique ID for the review.
- `product_id`: The Amazon Standard Identification Number (ASIN). `http://www.amazon.com/dp/<ASIN>` links to the product's detail page.
- `product_parent`: The parent of that ASIN. Multiple ASINs (color or format variations of the same product) can roll up into a single parent.
- `product_title`: Title description of the product.
- `product_category`: Broad product category that can be used to group reviews (in this case digital videos).
- `star_rating`: The review's rating (1 to 5 stars).
- `helpful_votes`: Number of helpful votes for the review.
- `total_votes`: Number of total votes the review received.
- `vine`: Was the review written as part of the [Vine](https://www.amazon.com/gp/vine/help) program?
- `verified_purchase`: Was the review from a verified purchase?
- `review_headline`: The title of the review itself.
- `review_body`: The text of the review.
- `review_date`: The date the review was written.
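For orientation, here is a minimal pandas sketch of peeking at one of these TSV files. The object key below is illustrative only (check the dataset index at the README link above for real file names), and reading `s3://` paths with pandas assumes the `s3fs` package is installed.
```
import pandas as pd

# Illustrative object key -- substitute a real file name from the dataset index.
sample_key = "s3://amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Video_Download_v1_00.tsv.gz"

# Read only the first 1000 rows for a quick look at the columns described above.
df = pd.read_csv(sample_key, sep="\t", compression="gzip", nrows=1000)
print(df[["review_id", "product_title", "star_rating", "review_date"]].head())
```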
```
ingest_create_athena_table_tsv = False
%store -r ingest_create_athena_table_tsv
if not ingest_create_athena_table_tsv:
print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print('[ERROR] YOU HAVE TO RUN THE NOTEBOOKS IN THE INGEST FOLDER FIRST.')
print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
else:
print('[OK]')
import sagemaker
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
```
# Pull the Spark-Deequ Docker Image
```
public_image_uri='docker.io/datascienceonaws/spark-deequ:1.0.0'
!docker pull $public_image_uri
```
# Push the Image to a Private Docker Repo
```
private_docker_repo = 'spark-deequ'
private_docker_tag = '1.0.0'
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
region = boto3.session.Session().region_name
private_image_uri = '{}.dkr.ecr.{}.amazonaws.com/{}:{}'.format(account_id, region, private_docker_repo, private_docker_tag)
print(private_image_uri)
!docker tag $public_image_uri $private_image_uri
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
```
# Ignore `spark-deequ does not exist` error below
```
!aws ecr describe-repositories --repository-names $private_docker_repo || aws ecr create-repository --repository-name $private_docker_repo
```
# Ignore ^^ `spark-deequ does not exist` ^^ error above
```
!docker push $private_image_uri
```
# Run the Analysis Job using a SageMaker Processing Job
Next, use the Amazon SageMaker Python SDK to submit a processing job, using the Spark container we just pushed to our private ECR repository together with our Spark script.
# Review the Spark preprocessing script.
```
!pygmentize preprocess-deequ.py
!pygmentize preprocess-deequ.scala
from sagemaker.processing import ScriptProcessor
processor = ScriptProcessor(base_job_name='spark-amazon-reviews-analyzer',
image_uri=private_image_uri,
command=['/opt/program/submit'],
role=role,
instance_count=2, # instance_count needs to be > 1 or you will see the following error: "INFO yarn.Client: Application report for application_ (state: ACCEPTED)"
instance_type='ml.r5.2xlarge',
env={
'mode': 'jar',
'main_class': 'Main'
})
s3_input_data = 's3://{}/amazon-reviews-pds/tsv/'.format(bucket)
print(s3_input_data)
!aws s3 ls $s3_input_data
```
## Setup Output Data
```
from time import gmtime, strftime
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
output_prefix = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix)
processing_job_name = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix)
print('Processing job name: {}'.format(processing_job_name))
s3_output_analyze_data = 's3://{}/{}/output'.format(bucket, output_prefix)
print(s3_output_analyze_data)
```
## Start the Spark Processing Job
_Notes on Invoking from Lambda:_
* If we use the boto3 SDK instead (i.e. from a Lambda), we need to copy the `preprocess.py` file to S3 and specify everything explicitly, including `--py-files`, etc. (see the sketch after these notes).
* We would need to do the following before invoking the Lambda:
!aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/code/preprocess.py
!aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/py_files/preprocess.py
* Then reference the s3://<location> above in the --py-files, etc.
* See Lambda example code in this same project for more details.
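For illustration, a minimal boto3 sketch of what such an invocation could look like. The job name, volume size and timeout below are placeholder assumptions, not values taken from this notebook; the other variables are assumed to be the ones defined in this notebook's cells.
```
import boto3

sm = boto3.client("sagemaker")

sm.create_processing_job(
    ProcessingJobName="spark-amazon-reviews-analyzer-manual",   # placeholder name
    RoleArn=role,
    AppSpecification={
        "ImageUri": private_image_uri,
        "ContainerEntrypoint": ["/opt/program/submit"],
        "ContainerArguments": ["s3_input_data", s3_input_data,
                               "s3_output_analyze_data", s3_output_analyze_data],
    },
    ProcessingResources={
        "ClusterConfig": {"InstanceCount": 2,
                          "InstanceType": "ml.r5.2xlarge",
                          "VolumeSizeInGB": 30}                  # assumed volume size
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},             # assumed timeout
    Environment={"mode": "jar", "main_class": "Main"},
)
```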
_Notes on not using ProcessingInput and Output:_
* Since Spark natively reads/writes from/to S3 using s3a://, we can avoid the copy required by ProcessingInput and ProcessingOutput (FullyReplicated or ShardedByS3Key) and just specify the S3 input and output buckets/prefixes.
* See https://github.com/awslabs/amazon-sagemaker-examples/issues/994 for issues related to using /opt/ml/processing/input/ and output/
* If we use ProcessingInput, the data will be copied to each node (which we don't want in this case since Spark already handles this)
```
from sagemaker.processing import ProcessingOutput
processor.run(code='preprocess-deequ.py',
arguments=['s3_input_data', s3_input_data,
's3_output_analyze_data', s3_output_analyze_data,
],
# See https://github.com/aws/sagemaker-python-sdk/issues/1341
# for why we need to specify a null-output
outputs=[
ProcessingOutput(s3_upload_mode='EndOfJob',
output_name='null-output',
source='/opt/ml/processing/output')
],
logs=True,
wait=False
)
from IPython.core.display import display, HTML
processing_job_name = processor.jobs[-1].describe()['ProcessingJobName']
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format(region, processing_job_name)))
from IPython.core.display import display, HTML
processing_job_name = processor.jobs[-1].describe()['ProcessingJobName']
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After a Few Minutes</b>'.format(region, processing_job_name)))
from IPython.core.display import display, HTML
s3_job_output_prefix = output_prefix
display(HTML('<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Spark Job Has Completed</b>'.format(bucket, s3_job_output_prefix, region)))
```
# Monitor the Processing Job
```
running_processor = sagemaker.processing.ProcessingJob.from_processing_name(processing_job_name=processing_job_name,
sagemaker_session=sagemaker_session)
processing_job_description = running_processor.describe()
print(processing_job_description)
running_processor.wait()
```
# _Please Wait Until the ^^ Processing Job ^^ Completes Above._
# Inspect the Processed Output
## These are the quality checks on our dataset.
## _The next cells will not work properly until the job completes above._
```
!aws s3 ls --recursive $s3_output_analyze_data/
```
## Copy the Output from S3 to Local
* dataset-metrics/
* constraint-checks/
* success-metrics/
* constraint-suggestions/
```
!aws s3 cp --recursive $s3_output_analyze_data ./amazon-reviews-spark-analyzer/ --exclude="*" --include="*.csv"
```
## Analyze Constraint Checks
```
import glob
import pandas as pd
import os
def load_dataset(path, sep, header):
data = pd.concat([pd.read_csv(f, sep=sep, header=header) for f in glob.glob('{}/*.csv'.format(path))], ignore_index = True)
return data
df_constraint_checks = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-checks/', sep='\t', header=0)
df_constraint_checks[['check', 'constraint', 'constraint_status', 'constraint_message']]
```
## Analyze Dataset Metrics
```
df_dataset_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/dataset-metrics/', sep='\t', header=0)
df_dataset_metrics
```
## Analyze Success Metrics
```
df_success_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/success-metrics/', sep='\t', header=0)
df_success_metrics
```
## Analyze Constraint Suggestions
```
df_constraint_suggestions = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-suggestions/', sep='\t', header=0)
df_constraint_suggestions.columns=['column_name', 'description', 'code']
df_constraint_suggestions
```
# Save for the Next Notebook(s)
```
%store df_dataset_metrics
%%javascript
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
```
| github_jupyter |
<p><font size="6"><b> CASE - Observation data - analysis</b></font></p>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:[email protected]>, <mailto:[email protected]>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-whitegrid')
```
## 1. Reading in the enriched observations data
<div class="alert alert-success">
**EXERCISE**
- Read in the `survey_data_completed.csv` file and save the resulting `DataFrame` as variable `survey_data_processed` (if you did not complete the previous notebook, a version of the csv file is available in the `data` folder).
- Interpret the 'eventDate' column directly as python `datetime` objects and make sure the 'occurrenceID' column is used as the index of the resulting DataFrame (both can be done at once when reading the csv file using parameters of the `read_csv` function)
- Inspect the first five rows of the DataFrame and the data types of each of the data columns. Verify that the 'eventDate' indeed has a datetime data type.
<details><summary>Hints</summary>
- All read functions in Pandas start with `pd.read_...`.
- To check the documentation of a function, use the keystroke combination of SHIFT + TAB when the cursor is on the function.
- Remember `.head()` and `.info()`?
</details>
</div>
```
survey_data_processed = pd.read_csv("data/survey_data_completed.csv",
parse_dates=['eventDate'], index_col="occurrenceID")
survey_data_processed.head()
survey_data_processed.info()
```
## 2. Tackle missing values (NaN) and duplicate values
See [pandas_08_missing_values.ipynb](pandas_08_missing_values.ipynb) for an overview of functionality to work with missing values.
<div class="alert alert-success">
**EXERCISE**
How many records in the data set have no information about the `species`? Use the `isna()` method to find out.
<details><summary>Hints</summary>
- Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
```
survey_data_processed['species'].isna().sum()
```
<div class="alert alert-success">
**EXERCISE**
How many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate.
<details><summary>Hints</summary>
- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.
</details>
```
survey_data_processed.duplicated().sum()
```
<div class="alert alert-success">
**EXERCISE**
- Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument to define which duplicates (if any) to mark.
- Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records.
<details><summary>Hints</summary>
- Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data.
- `sort_values()` can work with a single columns name as well as a list of names.
</details>
```
duplicate_observations = survey_data_processed[survey_data_processed.duplicated(keep=False)]
duplicate_observations.sort_values(["eventDate", "verbatimLocality"]).head(9)
```
<div class="alert alert-success">
**EXERCISE**
- Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `survey_data_unique`. Use the `drop_duplicates()` method from Pandas.
- How many observations are still left in the data set?
<details><summary>Hints</summary>
- `keep='first'` is the default option for `drop_duplicates`
- The number of rows in a DataFrame is equal to the `len`gth
</details>
```
survey_data_unique = survey_data_processed.drop_duplicates()
len(survey_data_unique)
```
<div class="alert alert-success">
**EXERCISE**
Use the `dropna()` method to find out:
- For how many observations (rows) we have all the information available (i.e. no NaN values in any of the columns)?
- For how many observations (rows) we do have the `species_ID` data available ?
<details><summary>Hints</summary>
- `dropna` by default removes all rows for which _any_ of the columns contains a `NaN` value.
- To specify which specific columns to check, use the `subset` argument
</details>
```
len(survey_data_unique.dropna()), len(survey_data_unique.dropna(subset=['species']))
```
<div class="alert alert-success">
**EXERCISE**
Filter the `survey_data_unique` data and select only those records that do not have a `species` while having information on the `sex`. Store the result as variable `not_identified`.
<details><summary>Hints</summary>
- To combine logical operators element-wise in Pandas, use the `&` operator.
- Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values.
</details>
```
mask = survey_data_unique['species'].isna() & survey_data_unique['sex'].notna()
not_identified = survey_data_unique[mask]
not_identified.head()
```
__NOTE!__
The `DataFrame` we will use in the further analyses contains species information:
```
survey_data = survey_data_unique.dropna(subset=['species']).copy()
survey_data['name'] = survey_data['genus'] + ' ' + survey_data['species']
```
<div class="alert alert-info">
**INFO**
For biodiversity studies, absence values (knowing that something is not present) are useful as well to normalize the observations, but this is out of scope for these exercises.
</div>
## 3. Select subsets of the data
```
survey_data['taxa'].value_counts()
#survey_data.groupby('taxa').size()
```
<div class="alert alert-success">
**EXERCISE**
- Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection.
<details><summary>Hints</summary>
- You do not have to combine three different conditions, but use the `isin` operator with a list of names.
</details>
```
non_rodent_species = survey_data[survey_data['taxa'].isin(['Rabbit', 'Bird', 'Reptile'])]
non_rodent_species.head()
len(non_rodent_species)
```
<div class="alert alert-success">
**EXERCISE**
Select the observations for which the `name` starts with the character 'r' (make the comparison case-insensitive, so it does not matter whether a capital letter is used in the 'name'). Call the resulting variable `r_species`.
<details><summary>Hints</summary>
- Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other.
- If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`)
</details>
```
r_species = survey_data[survey_data['name'].str.lower().str.startswith('r')]
r_species.head()
len(r_species)
r_species["name"].value_counts()
```
<div class="alert alert-success">
**EXERCISE**
Select the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>.
<details><summary>Hints</summary>
- Logical operators like `==`, `!=`, `>`,... can still be used.
</details>
```
non_bird_species = survey_data[survey_data['taxa'] != 'Bird']
non_bird_species.head()
len(non_bird_species)
```
<div class="alert alert-success">
**EXERCISE**
Select the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 using the `eventDate` column. Call the resulting variable `birds_85_89`.
<details><summary>Hints</summary>
- No hints, you can do this! (with the help of some `<=` and `&`, and don't forget to put brackets around each comparison that you combine)
</details>
```
birds_85_89 = survey_data[(survey_data["eventDate"] >= "1985-01-01")
& (survey_data["eventDate"] <= "1989-12-31 23:59")
& (survey_data['taxa'] == 'Bird')]
birds_85_89.head()
```
Alternative solution:
```
# alternative solution
birds_85_89 = survey_data[(survey_data["eventDate"].dt.year >= 1985)
& (survey_data["eventDate"].dt.year <= 1989)
& (survey_data['taxa'] == 'Bird')]
birds_85_89.head()
```
<div class="alert alert-success">
**EXERCISE**
- Drop the observations for which no 'weight' (`wgt` column) information is available.
- On the filtered data, compare the median weight for each of the species (use the `name` column)
- Sort the output from high to low median weight (i.e. descending)
__Note__ You can do this all in a single line statement, but don't have to do it as such!
<details><summary>Hints</summary>
- You will need `dropna`, `groupby`, `median` and `sort_values`.
</details>
```
# Multiple lines
obs_with_weight = survey_data.dropna(subset=["wgt"])
median_weight = obs_with_weight.groupby(['name'])["wgt"].median()
median_weight.sort_values(ascending=False)
# Single line statement
(survey_data
.dropna(subset=["wgt"])
.groupby(['name'])["wgt"]
.median()
.sort_values(ascending=False)
)
```
## 4. Species abundance
<div class="alert alert-success">
**EXERCISE**
Which 8 species (use the `name` column to identify the different species) have been observed most over the entire data set?
<details><summary>Hints</summary>
- Pandas provides a function that combines sorting and showing the first n records, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html)...
</details>
```
survey_data.groupby("name").size().nlargest(8)
survey_data['name'].value_counts()[:8]
```
<div class="alert alert-success">
**EXERCISE**
- What is the number of different species in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`.
- Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make a horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'.
<details><summary>Hints</summary>
- _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups.
- `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes.
</details>
```
n_species_per_plot = survey_data.groupby(["verbatimLocality"])["name"].nunique()
fig, ax = plt.subplots(figsize=(6, 6))
n_species_per_plot.plot(kind="barh", ax=ax, color="lightblue")
ax.set_ylabel("plot number")
# Alternative option:
# inspired on the pivot table we already had:
# species_per_plot = survey_data.reset_index().pivot_table(
# index="name", columns="verbatimLocality", values="occurrenceID", aggfunc='count')
# n_species_per_plot = species_per_plot.count()
```
<div class="alert alert-success">
**EXERCISE**
- What is the number of plots (`verbatimLocality`) each of the species has been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high.
- Make a horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found in (using the `n_plots_per_species` variable).
<details><summary>Hints</summary>
- Use the previous exercise to solve this one.
</details>
```
n_plots_per_species = survey_data.groupby(["name"])["verbatimLocality"].nunique().sort_values()
fig, ax = plt.subplots(figsize=(8, 8))
n_plots_per_species.plot(kind="barh", ax=ax, color='0.4')
ax.set_xlabel("Number of plots");
ax.set_ylabel("");
```
<div class="alert alert-success">
**EXERCISE**
- Starting from the `survey_data`, calculate the number of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named "count".
- Use `pivot` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`.
<details><summary>Hints</summary>
- _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time.
- If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method.
- `reset_index()` is a useful function to convert multiple indices into columns again.
</details>
```
n_plot_sex = survey_data.groupby(["sex", "verbatimLocality"]).size().rename("count").reset_index()
n_plot_sex.head()
pivoted = n_plot_sex.pivot(columns="sex", index="verbatimLocality", values="count")
pivoted.head()
```
To check, we can use the variable `pivoted` to plot the result:
```
pivoted.plot(kind='bar', figsize=(12, 6), rot=0)
```
<div class="alert alert-success">
**EXERCISE**
Recreate the previous plot with the `catplot` function from the Seaborn library starting from `n_plot_sex`.
<details><summary>Hints</summary>
- Check the `kind` argument of the `catplot` function to figure out how to specify that you want a barplot with given x and y values.
- To link a column to different colors, use the `hue` argument
</details>
```
sns.catplot(data=n_plot_sex, x="verbatimLocality", y="count",
hue="sex", kind="bar", height=3, aspect=3)
```
<div class="alert alert-success">
**EXERCISE**
Recreate the previous plot with the `catplot` function from the Seaborn library directly starting from `survey_data`.
<details><summary>Hints</summary>
- Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value.
- To link a column to different colors, use the `hue` argument
</details>
```
sns.catplot(data=survey_data, x="verbatimLocality",
hue="sex", kind="count", height=3, aspect=3)
```
<div class="alert alert-success">
**EXERCISE**
- Make a summary table with the number of records of each of the species in each of the plots (also called `verbatimLocality`). Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name.
- Use the Seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a> to make a heatmap.
<details><summary>Hints</summary>
- Make sure to pass the correct columns to respectively the `index`, `columns`, `values` and `aggfunc` parameters of the `pivot_table` function. You can use the `datasetName` to count the number of observations for each name/locality combination (when counting rows, the exact column doesn't matter).
</details>
```
species_per_plot = survey_data.pivot_table(index="name",
columns="verbatimLocality",
values="datasetName",
aggfunc='count')
# alternative ways to calculate this
#species_per_plot = survey_data.groupby(['name', 'verbatimLocality']).size().unstack(level=-1)
#pecies_per_plot = pd.crosstab(survey_data['name'], survey_data['verbatimLocality'])
fig, ax = plt.subplots(figsize=(8,8))
sns.heatmap(species_per_plot, ax=ax, cmap='Greens')
```
## 5. Observations over time
<div class="alert alert-success">
**EXERCISE**
Make a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method.
<details><summary>Hints</summary>
- You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use.
- `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year.
</details>
```
survey_data.resample('A', on='eventDate').size().plot()
```
To evaluate the intensity or number of occurrences during different time spans, a heatmap is an interesting representation.
<div class="alert alert-success">
**EXERCISE**
- Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations.
- Using the seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a>, make a heatmap starting from the `heatmap_prep` variable.
<details><summary>Hints</summary>
- The `.dt` accessor can be used to get the `year`, `month`,... from a `datetime` column
- Use `pivot_table` and provide the years to `index` and the months to `columns`. Do not forget to `count` the number for each combination (`aggfunc`).
- Seaborn has a `heatmap` function which requires a short-form DataFrame, comparable to giving each element in a table a color value.
</details>
```
heatmap_prep = survey_data.pivot_table(index=survey_data['eventDate'].dt.year,
columns=survey_data['eventDate'].dt.month,
values='species', aggfunc='count')
fig, ax = plt.subplots(figsize=(10, 8))
ax = sns.heatmap(heatmap_prep, cmap='Reds')
```
Remark that we started from a `tidy` data format (also called *long* format) and converted to a *wide* (short) format, with the years in the row index, the months in the columns and the counts for each of these year/month combinations as values.
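As a minimal, self-contained illustration of this long-to-wide reshaping (toy data, not taken from the survey):
```
import pandas as pd

# Toy long-format ("tidy") table: one row per year/month combination
long_df = pd.DataFrame({
    "year":  [2000, 2000, 2001, 2001],
    "month": [1, 2, 1, 2],
    "count": [5, 3, 7, 2],
})

# Wide format: years as the row index, months as columns, counts as values
wide_df = long_df.pivot(index="year", columns="month", values="count")
print(wide_df)
```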
## (OPTIONAL SECTION) 6. Evolution of species during monitoring period
*In this section, all plots can be made with the embedded Pandas plot function, unless specifically asked otherwise*
<div class="alert alert-success">
**EXERCISE**
Plot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years.
<details><summary>Hints</summary>
- _...for each month of..._ requires `groupby`.
- `resample` is not useful here, as we do not want to change the time interval, but rather look at the month of the year (aggregated over all years)
</details>
```
merriami = survey_data[survey_data["name"] == "Dipodomys merriami"]
fig, ax = plt.subplots()
merriami.groupby(merriami['eventDate'].dt.month).size().plot(kind="barh", ax=ax)
ax.set_xlabel("number of occurrences");
ax.set_ylabel("Month of the year");
```
<div class="alert alert-success">
**EXERCISE**
Plot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time during the monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale
<details><summary>Hints</summary>
- `isin` is useful to select from within a list of elements.
- `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters!
- `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html) as it might be helpful for this exercise.
</details>
```
subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii',
'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]
month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size()
species_evolution = month_evolution.unstack(level=0)
axs = species_evolution.plot(subplots=True, figsize=(14, 8), sharey=True)
```
<div class="alert alert-success">
**EXERCISE**
Recreate the same plot as in the previous exercise using the Seaborn `relplot` function with the `month_evolution` variable.
<details><summary>Hints</summary>
- We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively.
- To create subplots in Seaborn, use _faceting_ (splitting the data set into multiple facets) by linking a column name to the `row`/`col` parameter.
- Using `height` and `aspect`, the figure size can be adjusted.
</details>
Uncomment the next cell (it calculates `month_evolution`, the intermediate result of the previous exercise):
```
# Given as solution..
subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii',
'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]
month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size().rename("counts")
month_evolution = month_evolution.reset_index()
month_evolution.head()
```
Plotting with seaborn:
```
sns.relplot(data=month_evolution, x='eventDate', y="counts",
row="name", kind="line", hue="name", height=2, aspect=5)
```
<div class="alert alert-success">
**EXERCISE**
Plot the annual amount of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets.
<details><summary>Hints</summary>
- Combine `resample` and `groupby`!
- Check out the previous exercise for the plot function.
- Pass the `sharey=False` to the `facet_kws` argument as a dictionary.
</details>
```
year_evolution = survey_data.groupby("taxa").resample('A', on='eventDate').size()
year_evolution.name = "counts"
year_evolution = year_evolution.reset_index()
sns.relplot(data=year_evolution, x='eventDate', y="counts",
col="taxa", col_wrap=2, kind="line", height=2, aspect=5,
facet_kws={"sharey": False})
```
<div class="alert alert-success">
**EXERCISE**
The observations were taken by volunteers. You wonder on which day of the week the most observations were made. Calculate for each day of the week (`dayofweek`) the number of observations and make a bar plot.
<details><summary>Hints</summary>
- Did you know the Python standard library has a module `calendar` which contains the names of weekdays, month names,...?
</details>
```
fig, ax = plt.subplots()
survey_data.groupby(survey_data["eventDate"].dt.dayofweek).size().plot(kind='barh', color='#66b266', ax=ax)
import calendar
xticks = ax.set_yticklabels(calendar.day_name)
```
Nice work!
| github_jupyter |
# Classify Images using Residual Network with 50 layers (ResNet-50)
## Import Turi Create
Please follow the repository README instructions to install the Turi Create package.
**Note**: Turi Create is currently only compatible with Python 2.7
```
import turicreate as turi
```
## Reference the dataset path
```
url = "data/food_images"
```
## Label the dataset
In the following block of code we will label the images in the dataset of **Egg** and **Soup** images. Then we will export it as an `SFrame` data object to use it for training the image classification model.
1. The first line of code loads the content of the images folder using the `image_analysis` property.
2. The second line creates a _foodType_ key for each image in the dataset to specify whether it's an **Egg** or **Soup** based on which folder it's located in.
3. The third line exports the analyzed data as an `SFrame` object in order to use it while creating our image classifier.
4. The fourth line simply visualizes the newly labeled images in an explorable list.
**Note**: You do not have to run the following block of code every time you create a classifier, unless you have changed/edited the dataset.
```
data = turi.image_analysis.load_images(url)
data["foodType"] = data["path"].apply(lambda path: "Eggs" if "eggs" in path else "Soup")
data.save("egg_or_soup.sframe")
data.explore()
```
## Load the labeled SFrame
In the following line of code we are loading the `SFrame` object that contains the images in our dataset with their labels.
```
dataBuffer = turi.SFrame("egg_or_soup.sframe")
```
## Create training and test data using our existing dataset
Here, we're randomly splitting the data.
- 90% of the data in the `SFrame` object will be used for training the image classifier.
- 10% of the data in the `SFrame` object will be used for testing the image classifier.
```
trainingBuffers, testingBuffers = dataBuffer.random_split(0.9)
```
## Train the image classifier
In the following line of code, we will create an image classifier and we'll feed it with the training data we have.
In this example, the image classifier's architecture will be a state-of-the-art Residual Network with 50 layers, also known as **ResNet-50**.
Check out the official paper here: https://arxiv.org/abs/1512.03385.
```
model = turi.image_classifier.create(trainingBuffers, target="foodType", model="resnet-50")
```
## Evaluate the test data to determine the model accuracy
```
evaluations = model.evaluate(testingBuffers)
print evaluations["accuracy"]
```
## Save the Turi Create model to retrieve it later
```
model.save("egg_or_soup.model")
```
## Export the image classification model for Core ML
```
model.export_coreml("EggSoupClassifier.mlmodel")
```
| github_jupyter |
<img src="https://upload.wikimedia.org/wikipedia/commons/4/47/Logo_UTFSM.png" width="200" alt="utfsm-logo" align="left"/>
# MAT281
### Applications of Mathematics in Engineering
## Module 02
## Lab, Class 06: Algorithm Development
### Instructions
* Fill in your personal details (name and USM student ID) in the next cell.
* Grading is on a scale from 0 to 4, integer values only.
* You must _push_ your changes to your personal course repository.
* As a backup, you must send a .zip file named `mXX_cYY_lab_apellido_nombre.zip` to [email protected].
* The following will be evaluated:
    - Solutions
    - Code
    - That Binder is configured correctly.
    - When pressing `Kernel -> Restart Kernel and Run All Cells`, all cells must execute without errors.
* __The lab is due at the end of this class.__
__Name__: Cristóbal Montecino
__Student ID (Rol)__: 201710019-2
## Exercise 1 (2 pts.):
Using the Chilean net fiscal expenditure (Gasto Fiscal Neto) data, create a new `datetime` column called `dt_date` from `anio`, `mes` and the first day of each month.
```
import os
import numpy as np
import pandas as pd
```
As an example we will use a dataset of net fiscal expenditure in Chile, obtained from a [DataCampfire datathon](https://datacampfire.com/datathon/).
```
gasto_raw = pd.read_csv(os.path.join("data", "gasto_fiscal.csv"), sep=";")
gasto_raw.head()
```
Steps to follow:
1. Rename the `anio` column to `year`.
2. Create the `month` column using the `es_month_dict` dictionary defined below. Hint: use a mapping.
3. Create the `day` column with every record equal to `1`.
4. Create the `dt_date` column with the `pd.to_datetime` function. Read the documentation!
5. Finally, drop the `year`, `mes`, `month`, `day` columns.
```
es_month_dict = {
'enero': 1,
'febrero': 2,
'marzo': 3,
'abril': 4,
'mayo': 5,
'junio': 6,
'julio': 7,
'agosto': 8,
'septiembre': 9,
'octubre': 10,
'noviembre': 11,
'diciembre': 12
}
gasto = (
gasto_raw.rename(columns={'anio': 'year'})
.assign(
month=lambda x: x["mes"].str.lower().map(es_month_dict),
day=1,
dt_date=lambda x: pd.to_datetime(x.loc[:, ['year', 'month', 'day']]),
).drop(columns=['year', 'mes', 'month', 'day'])
)
gasto.head()
```
## Exercise 2 (1 pt.)
Pivot the `gasto_raw` dataframe so that:
- The index is the ministries (`partida`).
- The columns are the years.
- Each cell is the sum of the amounts (`monto`).
- Empty cells are filled with `""`.
Which Year-Ministry combinations have no recorded spending?
```
gasto_raw['anio'].sort_values().unique()
gasto_raw.pivot_table(
index='partida',
columns='anio',
values='monto',
aggfunc='sum',
fill_value='',
)
```
__Answer__:
|Ministry|Years|
|-|-|
|Ministerio De Energía|2009|
|Ministerio De La Mujer Y La Equidad De Género|2009-2015|
|Ministerio Del Deporte|2009-2013|
|Ministerio Del Medio Ambiente|2009|
|Servicio Electoral|2009 - 2016|
## Exercise 3 (1 pt.)
Run the benchmarks in the `benchmark_loop.py` file located in the `fast_pandas` directory.
Which approach would you say is the most efficient?
Use the `%load` magic command and edit the cell so that the `Benchmarker` module is imported correctly.
```
# %load fast_pandas/benchmark_loop.py
from fast_pandas.Benchmarker import Benchmarker
def iterrows_function(df):
for index, row in df.iterrows():
pass
def itertuples_function(df):
for row in df.itertuples():
pass
def df_values(df):
for row in df.values:
pass
params = {
"df_generator": 'pd.DataFrame(np.random.randint(1, df_size, (df_size, 4)), columns=list("ABCD"))',
"functions_to_evaluate": [df_values, itertuples_function, iterrows_function],
"title": "Benchmark for iterating over all rows",
"user_df_size_powers": [2, 3, 4, 5, 6],
"user_loop_size_powers": [2, 2, 1, 1, 1],
}
benchmark = Benchmarker(**params)
benchmark.benchmark_all()
benchmark.print_results()
benchmark.plot_results()
```
__Answer__: df_values
| github_jupyter |
# requests
## A first example
```
import requests
response = requests.get('https://www.baidu.com/')
print(type(response))
print(response.status_code)
print(type(response.text))
print(response.text)
print(response.cookies)
```
## The various request methods
```
import requests
requests.post('http://httpbin.org/post')
requests.put('http://httpbin.org/put')
requests.delete('http://httpbin.org/delete')
requests.head('http://httpbin.org/get')
requests.options('http://httpbin.org/get')
```
# Requests
## Basic GET requests
### Basic usage
```
import requests
response = requests.get('http://httpbin.org/get')
print(response.text)
```
### GET requests with parameters
```
import requests
response = requests.get("http://httpbin.org/get?name=germey&age=22")
print(response.text)
import requests
data = {
'name': 'germey',
'age': 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)
```
### Parsing JSON
```
import requests
import json
response = requests.get("http://httpbin.org/get")
print(type(response.text))
print(response.json())
print(json.loads(response.text))
print(type(response.json()))
```
### Fetching binary data
```
import requests
response = requests.get("https://github.com/favicon.ico")
print(type(response.text), type(response.content))
print(response.text)
print(response.content)
import requests
response = requests.get("https://github.com/favicon.ico")
with open('favicon.ico', 'wb') as f:
f.write(response.content)
f.close()
```
### Adding headers
```
import requests
response = requests.get("https://www.zhihu.com/explore")
print(response.text)
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.get("https://www.zhihu.com/explore", headers=headers)
print(response.text)
```
## Basic POST requests
```
import requests
data = {'name': 'germey', 'age': '22'}
response = requests.post("http://httpbin.org/post", data=data)
print(response.text)
import requests
data = {'name': 'germey', 'age': '22'}
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.post("http://httpbin.org/post", data=data, headers=headers)
print(response.json())
```
# Responses
## Response attributes
```
import requests
response = requests.get('http://www.jianshu.com')
print(type(response.status_code), response.status_code)
print(type(response.headers), response.headers)
print(type(response.cookies), response.cookies)
print(type(response.url), response.url)
print(type(response.history), response.history)
```
## Checking status codes
```
import requests
response = requests.get('http://www.jianshu.com/hello.html')
exit() if not response.status_code == requests.codes.not_found else print('404 Not Found')
import requests
response = requests.get('http://www.jianshu.com')
exit() if not response.status_code == 200 else print('Request Successfully')
```
The status codes below can be referenced directly by their string aliases: for example, 404 can be written as `requests.codes.not_found` and 200 as `requests.codes.ok`.
```
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),
# Redirection.
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect',
'resume_incomplete', 'resume',), # These 2 to be removed in 3.0
# Client Error.
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),
# Server Error.
500: ('internal_server_error', 'server_error', '/o\\', '✗'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication'),
```
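As a quick check, the name-to-code mapping can be queried directly; this minimal snippet only prints a few of the built-in aliases listed above:
```
import requests

# names in the listing above map to integer status codes
print(requests.codes.ok)          # 200
print(requests.codes.not_found)   # 404
print(requests.codes.ok == 200)   # True
```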
# Advanced usage
## File upload
```
import requests
files = {'file': open('favicon.ico', 'rb')}
response = requests.post("http://httpbin.org/post", files=files)
print(response.text)
```
## Getting cookies
```
import requests
response = requests.get("https://www.baidu.com")
print(response.cookies)
for key, value in response.cookies.items():
print(key + '=' + value)
```
## Session persistence
Simulating a login
```
# Calling requests.get() twice is like opening two separate browser windows: the first call sets the cookie, but the second request starts fresh, so the cookie cannot be read back.
import requests
requests.get('http://httpbin.org/cookies/set/number/123456789')
response = requests.get('http://httpbin.org/cookies')
print(response.text)
# With an object created by requests.Session(), both GET requests share the same session, so the cookie set by the first request is visible to the second.
import requests
s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')
response = s.get('http://httpbin.org/cookies')
print(response.text)
```
## Certificate verification
```
import requests
response = requests.get('https://www.12306.cn')
print(response.status_code)
import requests
# from requests.packages import urllib3
# urllib3.disable_warnings()
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
import requests
response = requests.get('https://www.12306.cn', cert=('/path/server.crt', '/path/key'))
print(response.status_code)
```
## Proxy settings
```
import requests
proxies = {
"http": "http://127.0.0.1:9743",
"https": "https://127.0.0.1:9743",
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)
import requests
proxies = {
"http": "http://user:[email protected]:9743/",
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)
# pip3 install 'requests[socks]'   (install SOCKS support before running the example below)
import requests
proxies = {
'http': 'socks5://127.0.0.1:9742',
'https': 'socks5://127.0.0.1:9742'
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)
```
## Timeout settings
```
import requests
from requests.exceptions import ReadTimeout
try:
response = requests.get("http://httpbin.org/get", timeout = 0.5)
print(response.status_code)
except ReadTimeout:
print('Timeout')
```
## Authentication
```
# When a site requires a username and password, either of the following two approaches sends the credentials with the request.
import requests
from requests.auth import HTTPBasicAuth
r = requests.get('http://120.27.34.24:9001', auth=HTTPBasicAuth('user', '123'))
print(r.status_code)
import requests
r = requests.get('http://120.27.34.24:9001', auth=('user', '123'))
print(r.status_code)
```
## Exception handling
[requests exceptions](https://requests.kennethreitz.org/en/master/api/#exceptions)
```
import requests
from requests.exceptions import ReadTimeout, ConnectionError, RequestException
try:
response = requests.get("http://httpbin.org/get", timeout = 0.5)
print(response.status_code)
except ReadTimeout:
print('Timeout')
except ConnectionError:
print('Connection error')
except RequestException:
print('Error')
```
# More To Come. Stay Tuned!!
If there are any suggestions/changes you would like to see in the kernel, please let me know :). I appreciate every ounce of help!
**This notebook will always be a work in progress**. Please leave any comments about further improvements to the notebook! Any feedback or constructive criticism is greatly appreciated. **If you like it or it helps you, you can upvote and/or leave a comment :).**
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import IPython.display as ipd # To play sound in the notebook
from tqdm import tqdm_notebook
import wave
from scipy.io import wavfile
SAMPLE_RATE = 44100
import seaborn as sns # for making plots with seaborn
color = sns.color_palette()
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.offline as offline
offline.init_notebook_mode()
import plotly.tools as tls
# Math
import numpy as np
from scipy.fftpack import fft
from scipy import signal
from scipy.io import wavfile
import librosa
import os
print(os.listdir("../input"))
INPUT_LIB = '../input/'
audio_train_files = os.listdir('../input/audio_train')
audio_test_files = os.listdir('../input/audio_test')
train = pd.read_csv('../input/train.csv')
submission = pd.read_csv("../input/sample_submission.csv", index_col='fname')
train_audio_path = '../input/audio_train/'
filename = '/001ca53d.wav' # Hi-hat
sample_rate, samples = wavfile.read(str(train_audio_path) + filename)
#sample_rate = 16000
print(samples)
print("Size of training data",train.shape)
train.head()
submission.head()
def clean_filename(fname, string):
file_name = fname.split('/')[1]
if file_name[:2] == '__':
file_name = string + file_name
return file_name
def load_wav_file(name, path):
_, b = wavfile.read(path + name)
assert _ == SAMPLE_RATE
return b
train_data = pd.DataFrame({'file_name' : train['fname'],
'target' : train['label']})
train_data['time_series'] = train_data['file_name'].apply(load_wav_file,
path=INPUT_LIB + 'audio_train/')
train_data['nframes'] = train_data['time_series'].apply(len)
train_data.head()
print("Size of training data after some preprocessing : ",train_data.shape)
# missing data in training data set
total = train_data.isnull().sum().sort_values(ascending = False)
percent = (train_data.isnull().sum()/train_data.isnull().count()).sort_values(ascending = False)
missing_train_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_train_data.head()
```
There is no missing data in the training dataset
# Manually verified Audio
```
temp = train['manually_verified'].value_counts()
labels = temp.index
sizes = (temp / temp.sum())*100
trace = go.Pie(labels=labels, values=sizes, hoverinfo='label+percent')
layout = go.Layout(title='Manual verification of labels (0 - No, 1 - Yes)')
data = [trace]
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
```
* Approximately 40% of the labels are manually verified.
```
plt.figure(figsize=(12,8))
sns.distplot(train_data.nframes.values, bins=50, kde=False)
plt.xlabel('nframes', fontsize=12)
plt.title("Histogram of #frames")
plt.show()
plt.figure(figsize=(17,8))
boxplot = sns.boxplot(x="target", y="nframes", data=train_data)
boxplot.set(xlabel='', ylabel='')
plt.title('Distribution of audio frames, per label', fontsize=17)
plt.xticks(rotation=80, fontsize=17)
plt.yticks(fontsize=17)
plt.xlabel('Label name')
plt.ylabel('nframes')
plt.show()
print("Total number of labels in training data : ",len(train_data['target'].value_counts()))
print("Labels are : ", train_data['target'].unique())
plt.figure(figsize=(15,8))
audio_type = train_data['target'].value_counts().head(30)
sns.barplot(audio_type.values, audio_type.index)
for i, v in enumerate(audio_type.values):
plt.text(0.8,i,v,color='k',fontsize=12)
plt.xticks(rotation='vertical')
plt.xlabel('Frequency')
plt.ylabel('Label Name')
plt.title("Top 30 labels with their frequencies in training data")
plt.show()
```
### The total number of labels is 41
```
temp = train_data.sort_values(by='target')
temp.head()
```
## Now look at some label waveforms:
1. Acoustic_guitar
2. Applause
3. Bark
## 1. Acoustic_guitar
```
print("Acoustic_guitar : ")
fig, ax = plt.subplots(10, 4, figsize = (12, 16))
for i in range(40):
ax[i//4, i%4].plot(temp['time_series'][i])
ax[i//4, i%4].set_title(temp['file_name'][i][:-4])
ax[i//4, i%4].get_xaxis().set_ticks([])
fig.savefig("AudioWaveform", dpi=900)
```
## 2. Applause
```
print("Applause : ")
fig, ax = plt.subplots(10, 4, figsize = (12, 16))
for i in range(40):
ax[i//4, i%4].plot(temp['time_series'][i+300])
ax[i//4, i%4].set_title(temp['file_name'][i+300][:-4])
ax[i//4, i%4].get_xaxis().set_ticks([])
```
## 3. Bark
```
print("Bark : ")
fig, ax = plt.subplots(10, 4, figsize = (12, 16))
for i in range(40):
ax[i//4, i%4].plot(temp['time_series'][i+600])
ax[i//4, i%4].set_title(temp['file_name'][i+600][:-4])
ax[i//4, i%4].get_xaxis().set_ticks([])
from wordcloud import WordCloud
wordcloud = WordCloud(max_font_size=50, width=600, height=300).generate(' '.join(train_data.target))
plt.figure(figsize=(15,8))
plt.imshow(wordcloud)
plt.title("Wordcloud for Labels", fontsize=35)
plt.axis("off")
plt.show()
#fig.savefig("LabelsWordCloud", dpi=900)
```
# Spectrogram
```
def log_specgram(audio, sample_rate, window_size=20,
step_size=10, eps=1e-10):
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
freqs, times, spec = signal.spectrogram(audio,
fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
return freqs, times, np.log(spec.T.astype(np.float32) + eps)
freqs, times, spectrogram = log_specgram(samples, sample_rate)
fig = plt.figure(figsize=(18, 8))
ax2 = fig.add_subplot(211)
ax2.imshow(spectrogram.T, aspect='auto', origin='lower',
extent=[times.min(), times.max(), freqs.min(), freqs.max()])
ax2.set_yticks(freqs[::40])
ax2.set_xticks(times[::40])
ax2.set_title('Spectrogram of Hi-hat ' + filename)
ax2.set_ylabel('Freqs in Hz')
ax2.set_xlabel('Seconds')
```
# Spectrogram of "Hi-Hat" in 3d
If we use the spectrogram as input features for a neural network, we have to remember to normalize the features.
```
mean = np.mean(spectrogram, axis=0)
std = np.std(spectrogram, axis=0)
spectrogram = (spectrogram - mean) / std
data = [go.Surface(z=spectrogram.T)]
layout = go.Layout(
    title='Spectrogram of "Hi-Hat" in 3d',
scene = dict(
yaxis = dict(title='Frequencies', range=freqs),
xaxis = dict(title='Time', range=times),
zaxis = dict(title='Log amplitude'),
),
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
```
# More To Come. Stay Tuned!!
### DemIntro02:
# Rational Expectations Agricultural Market Model
#### Preliminary task:
Load required modules
```
from compecon.quad import qnwlogn
from compecon.tools import discmoments
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('dark')
%matplotlib notebook
```
Generate yield distribution
```
sigma2 = 0.2 ** 2
y, w = qnwlogn(25, -0.5 * sigma2, sigma2)
```
## Compute rational expectations equilibrium using function iteration, iterating on acreage planted
```
A = lambda aa, pp: 0.5 + 0.5 * np.dot(w, np.maximum(1.5 - 0.5 * aa * y, pp))
ptarg = 1
a = 1
print('{:^6} {:^10} {:^10}\n{}'.format('iter', 'a', "|a' - a|",'-' * 27))
for it in range(50):
aold = a
a = A(a, ptarg)
print('{:^6} {:^10.4f} {:^10.1e}'.format(it, a, np.linalg.norm(a - aold)))
if np.linalg.norm(a - aold) < 1.e-8:
break
```
Intermediate outputs
```
q = a * y # quantity produced in each state
p = 1.5 - 0.5 * a * y # market price in each state
f = np.maximum(p, ptarg) # farm price in each state
r = f * q # farm revenue in each state
g = (f - p) * q #government expenditures
```
Print results
```
varnames = ['Market Price', 'Farm Price', 'Farm Revenue', 'Government Expenditures']
xavg, xstd = discmoments(w, np.vstack((p, f, r, g)))
print('\n{:^24} {:^8} {:^8}\n{}'.format('Variable', 'Expect', 'Std Dev','-'*42))
for varname, av, sd in zip(varnames, xavg, xstd):
print('{:24} {:8.4f} {:8.4f}'.format(varname, av, sd))
```
## Generate fixed-point mapping
```
aeq = a
a = np.linspace(0, 2, 100)
g = np.array([A(k, ptarg) for k in a])
```
Graph rational expectations equilibrium
```
fig1 = plt.figure(figsize=[6, 6])
ax = fig1.add_subplot(111, title='Rational expectations equilibrium', aspect=1,
xlabel='Acreage Planted', xticks=[0, aeq, 2], xticklabels=['0', '$a^{*}$', '2'],
ylabel='Rational Acreage Planted', yticks=[0, aeq, 2],yticklabels=['0', '$a^{*}$', '2'])
ax.plot(a, g, 'b', linewidth=4)
ax.plot(a, a, ':', color='grey', linewidth=2)
ax.plot([0, aeq, aeq], [aeq, aeq, 0], 'r--', linewidth=3)
ax.plot([aeq], [aeq], 'ro', markersize=12)
ax.text(0.05, 0, '45${}^o$', color='grey')
ax.text(1.85, aeq - 0.15,'$g(a)$', color='blue')
fig1.show()
```
## Compute rational expectations equilibrium as a function of the target price
```
nplot = 50
ptarg = np.linspace(0, 2, nplot)
a = 1
Ep, Ef, Er, Eg, Sp, Sf, Sr, Sg = (np.empty(nplot) for k in range(8))
for ip in range(nplot):
for it in range(50):
aold = a
a = A(a, ptarg[ip])
        if np.linalg.norm(a - aold) < 1.e-10:
break
q = a * y # quantity produced
p = 1.5 - 0.5 * a * y # market price
f = np.maximum(p, ptarg[ip]) # farm price
r = f * q # farm revenue
g = (f - p) * q # government expenditures
xavg, xstd = discmoments(w, np.vstack((p, f, r, g)))
Ep[ip], Ef[ip], Er[ip], Eg[ip] = tuple(xavg)
Sp[ip], Sf[ip], Sr[ip], Sg[ip] = tuple(xstd)
zeroline = lambda y: plt.axhline(y[0], linestyle=':', color='gray')  # the 'hold' keyword was removed from matplotlib
```
Graph expected prices vs target price
```
fig2 = plt.figure(figsize=[8, 6])
ax1 = fig2.add_subplot(121, title='Expected price',
xlabel='Target price', xticks=[0, 1, 2],
ylabel='Expectation', yticks=[0.5, 1, 1.5, 2], ylim=[0.5, 2.0])
zeroline(Ep)
ax1.plot(ptarg, Ep, linewidth=4, label='Market Price')
ax1.plot(ptarg, Ef, linewidth=4, label='Farm Price')
ax1.legend(loc='upper left')
```
Graph expected prices vs target price
```
ax2 = fig2.add_subplot(122, title='Price variabilities',
xlabel='Target price', xticks=[0, 1, 2],
ylabel='Standard deviation', yticks=[0, 0.1, 0.2]) #plt.ylim(0.5, 2.0)
zeroline(Sf)
ax2.plot(ptarg, Sp, linewidth=4, label='Market Price')
ax2.plot(ptarg, Sf, linewidth=4, label='Farm Price')
ax2.legend(loc='upper left')
fig2.show()
```
Graph expected farm revenue vs target price
```
fig3 = plt.figure(figsize=[12, 6])
ax1 = fig3.add_subplot(131, title='Expected revenue',
xlabel='Target price', xticks=[0, 1, 2],
ylabel='Expectation', yticks=[1, 2, 3], ylim=[0.8, 3.0])
zeroline(Er)
ax1.plot(ptarg, Er, linewidth=4)
```
Graph standard deviation of farm revenue vs target price
```
ax2 = fig3.add_subplot(132, title='Farm Revenue Variability',
xlabel='Target price', xticks=[0, 1, 2],
ylabel='Standard deviation', yticks=[0, 0.2, 0.4])
zeroline(Sr)
ax2.plot(ptarg, Sr, linewidth=4)
```
Graph expected government expenditures vs target price
```
ax3 = fig3.add_subplot(133, title='Expected Government Expenditures',
xlabel='Target price', xticks=[0, 1, 2],
ylabel='Expectation', yticks=[0, 1, 2], ylim=[-0.05, 2.0])
zeroline(Eg)
ax3.plot(ptarg, Eg, linewidth=4)
plt.show()
```
```
from numpy import array
import datetime as dt
from matplotlib import pyplot as plt
from sklearn import model_selection
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
import math
df=pd.read_csv("/content/drive/MyDrive/stocks.csv" )
df["Date"] = pd.to_datetime(df['Date'],format="%Y-%m-%d")
df.shape
df.head()
df.tail()
start = dt.datetime(2012,1,12)
end = dt.datetime(2020,8,5)
Goog=df['GOOG']
Goog.head()
Goog.isna().sum()
```
# 'IBM' stock analysis for monthly, weekly, and daily stock prices
```
df['IBM_week'] = df.IBM.rolling(7).mean().shift()
df['IBM_month'] = df.IBM.rolling(30).mean()
plt.figure(figsize=(15,10))
plt.grid(True)
plt.plot(df['IBM_week'],label='Weekly')
plt.plot(df['IBM_month'], label='Monthly')
plt.plot(df['IBM'],label='IBM')
plt.legend(loc=2)
```
# AAPL stocks
```
df['AAPL_week'] = df.AAPL.rolling(7).mean()
df['AAPL_month'] = df.AAPL.rolling(30).mean()
plt.figure(figsize=(15,10))
plt.grid(True)
plt.plot(df['AAPL_week'],label='Weekly')
plt.plot(df['AAPL_month'], label='Monthly')
plt.plot(df['AAPL'],label='AAPL')
plt.legend(loc=2)
from statsmodels.tsa.stattools import adfuller
def adf_test(dataset):
dftest = adfuller(dataset, autolag = 'AIC')
print("1. ADF : ",dftest[0])
print("2. P-Value : ", dftest[1])
print("3. Num Of Lags : ", dftest[2])
print("4. Num Of Observations Used For ADF Regression and Critical Values Calculation :", dftest[3])
print("5. Critical Values :")
for key, val in dftest[4].items():
print("\t",key, ": ", val)
adf_test(Goog)
#define function for kpss test
from statsmodels.tsa.stattools import kpss
#define KPSS
def kpss_test(timeseries):
print ('Results of KPSS Test:')
kpsstest = kpss(timeseries, regression='c')
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
for key,value in kpsstest[3].items():
kpss_output['Critical Value (%s)'%key] = value
print (kpss_output)
kpss_test(Goog)
goog=Goog.values
goog
#dataframe with only apple col
data = df.filter(['GOOG'])
#convert df into numpy arr
dataset = data.values
training_data_len = math.ceil(len(dataset)*0.8)
training_data_len
#scaling
scaler = MinMaxScaler(feature_range=(0,1))
#transforming values to 0&1 before it is passed into the nw
scaled_data = scaler.fit_transform(dataset)
scaled_data=scaled_data.flatten()
scaled_data
# split a univariate sequence
def split_sequence(sequence, n_steps):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
n_steps=5
X, y = split_sequence(scaled_data, n_steps)
# summarize the data
for i in range(len(X)):
print(X[i], y[i])
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))
regressor = Sequential()
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (n_steps, n_features)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error',metrics='acc')
# fit model
regressor.fit(X, y, epochs=200,validation_split=0.33,verbose=1)
x=Goog.iloc[1758:]
# demonstrate prediction
x_input = array([0.91975247, 0.92645011, 0.94698485, 0.94263605, 0.94388409])
x_input = x_input.reshape((1, n_steps, n_features))
yhat = regressor.predict(x_input, verbose=0)
print(yhat)
```
#Stacked LSTM
```
df1=df.reset_index()['AAPL']
import matplotlib.pyplot as plt
plt.plot(df1)
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0,1))
df1=scaler.fit_transform(np.array(df1).reshape(-1,1))
print(df1)
##splitting dataset into train and test split
training_size=int(len(df1)*0.65)
test_size=len(df1)-training_size
train_data,test_data=df1[0:training_size,:],df1[training_size:len(df1),:1]
training_size,test_size
import numpy
# convert an array of values into a dataset matrix
def create_dataset(dataset, time_step=1):
dataX, dataY = [], []
for i in range(len(dataset)-time_step-1):
a = dataset[i:(i+time_step), 0] ###i=0, 0,1,2,3-----99 100
dataX.append(a)
dataY.append(dataset[i + time_step, 0])
return numpy.array(dataX), numpy.array(dataY)
# reshape into X=t,t+1,t+2,t+3 and Y=t+4
time_step = 100
X_train, y_train = create_dataset(train_data, time_step)
X_test, ytest = create_dataset(test_data, time_step)
print(X_train.shape), print(y_train.shape)
print(X_test.shape), print(ytest.shape)
# reshape input to be [samples, time steps, features] which is required for LSTM
X_train =X_train.reshape(X_train.shape[0],X_train.shape[1] , 1)
X_test = X_test.reshape(X_test.shape[0],X_test.shape[1] , 1)
model=Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(100,1)))
model.add(LSTM(50,return_sequences=True))
model.add(LSTM(50))
model.add(Dense(1))
model.compile(loss='mean_squared_error',optimizer='adam')
model.fit(X_train,y_train,validation_data=(X_test,ytest),epochs=100,batch_size=64,verbose=1)
### Lets Do the prediction and check performance metrics
train_predict=model.predict(X_train)
test_predict=model.predict(X_test)
##Transformback to original form
train_predict=scaler.inverse_transform(train_predict)
test_predict=scaler.inverse_transform(test_predict)
### Calculate RMSE performance metrics
import math
from sklearn.metrics import mean_squared_error
math.sqrt(mean_squared_error(y_train,train_predict))
### Plotting
# shift train predictions for plotting
look_back=100
trainPredictPlot = numpy.empty_like(df1)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(df1)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(train_predict)+(look_back*2)+1:len(df1)-1, :] = test_predict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(df1))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
len(test_data)
x_input=test_data[756:].reshape(1,-1)
x_input.shape
temp_input=list(x_input)
temp_input=temp_input[0].tolist()
# demonstrate prediction for next 10 days
from numpy import array
lst_output=[]
n_steps=100
i=0
while(i<30):
if(len(temp_input)>100):
#print(temp_input)
x_input=np.array(temp_input[1:])
print("{} day input {}".format(i,x_input))
x_input=x_input.reshape(1,-1)
x_input = x_input.reshape((1, n_steps, 1))
#print(x_input)
yhat = model.predict(x_input, verbose=0)
print("{} day output {}".format(i,yhat))
temp_input.extend(yhat[0].tolist())
temp_input=temp_input[1:]
#print(temp_input)
lst_output.extend(yhat.tolist())
i=i+1
else:
x_input = x_input.reshape((1, n_steps,1))
yhat = model.predict(x_input, verbose=0)
print(yhat[0])
temp_input.extend(yhat[0].tolist())
print(len(temp_input))
lst_output.extend(yhat.tolist())
i=i+1
print(lst_output)
```
# Baseline model classification
The purpose of this notebook is to make predictions for all six categories on the given dataset using some set of rules.
<br>Let's assume that human labellers have labelled these comments based on certain kinds of words present in the comments. It is therefore worth exploring the comments to check which words are used under every category and how many times each word occurs in that category. In this notebook, six datasets are created from the main dataset to make the per-category analysis easier. After this, the most frequently used words under each category are counted and stored. For each category, we then check the presence of the `top n` words from the frequently-used-word dictionary in a comment to make the prediction.
### 1. Import libraries and load data
For preparation lets import the required libraries and the data
```
import os
dir_path = os.path.dirname(os.getcwd())
import numpy as np
import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import re
import string
import operator
import pickle
import sys
sys.path.append(os.path.join(dir_path, "src"))
from clean_comments import clean
train_path = os.path.join(dir_path, 'data', 'raw', 'train.csv')
## Load dataset
df = pd.read_csv(train_path)
```
### <br>2. Datasets for each category
Dataset with toxic comments
```
#extract dataset with toxic label
df_toxic = df[df['toxic'] == 1]
#Reseting the index
df_toxic.set_index(['id'], inplace = True)
df_toxic.reset_index(level =['id'], inplace = True)
```
Dataset of severe toxic comments
```
#extract dataset with Severe toxic label
df_severe_toxic = df[df['severe_toxic'] == 1]
#Reseting the index
df_severe_toxic.set_index(['id'], inplace = True)
df_severe_toxic.reset_index(level =['id'], inplace = True)
```
Dataset with obscene comment
```
#extract dataset with obscens label
df_obscene = df[df['obscene'] == 1]
#Reseting the index
df_obscene.set_index(['id'], inplace = True)
df_obscene.reset_index(level =['id'], inplace = True)
#df_obscene =df_obscene.drop('comment_text', axis=1)
```
Dataset with comments labeled as "identity_hate"
```
df_identity_hate = df[df['identity_hate'] == 1]
#Reseting the index
df_identity_hate.set_index(['id'], inplace = True)
df_identity_hate.reset_index(level =['id'], inplace = True)
```
Dataset with all the threat comments
```
df_threat = df[df['threat'] == 1]
#Reseting the index
df_threat.set_index(['id'], inplace = True)
df_threat.reset_index(level =['id'], inplace = True)
```
Dataset of comments with "Insult" label
```
df_insult = df[df['insult'] == 1]
#Reseting the index
df_insult.set_index(['id'], inplace = True)
df_insult.reset_index(level =['id'], inplace = True)
```
Dataset with comments which have all six labels
```
df_6 = df[(df['toxic']==1) & (df['severe_toxic']==1) &
(df['obscene']==1) & (df['threat']==1)&
(df['insult']==1)& (df['identity_hate']==1)]
df_6.set_index(['id'], inplace = True)
df_6.reset_index(level =['id'], inplace = True)
# df6 = df_6.drop('comment_text', axis=1)
```
### <br> 3. Preparation of vocab
```
### frequent_words function take dataset as an input and returns two arguments -
### all_words and counts.
### all_words gives all the words occuring in the provided dataset
### counts gives dictionary with keys as a words those exists in the entire dataset and values
### as a count of existance of these words in the dataset.
def frequent_words(data):
all_word = []
counts = dict()
for i in range (0,len(data)):
### Load input
input_str = data.comment_text[i]
### Clean input data
processed_text = clean(input_str)
### perform tokenization
tokened_text = word_tokenize(processed_text)
### remove stop words
comment_word = []
for word in tokened_text:
if word not in stopwords.words('english'):
comment_word.append(word)
#print(len(comment_word))
all_word.extend(comment_word)
for word in all_word:
if word in counts:
counts[word] += 1
else:
counts[word] = 1
return all_word, counts
## descend_order_dict funtion takes dataframe as an input and outputs sorted vocab dictionary
## with the values sorted in descending order (keys are words and values are word count)
def descend_order_dict(data):
all_words, word_count = frequent_words(data)
sorted_dict = dict( sorted(word_count.items(), key=operator.itemgetter(1),reverse=True))
return sorted_dict
label_sequence = df.columns.drop("id")
label_sequence = label_sequence.drop("comment_text").tolist()
label_sequence
```
#### <br>Getting the vocab used in each category, in descending order of word count
For **`toxic`** category
```
descend_order_toxic_dict = descend_order_dict(df_toxic)
```
These are the words most frequently used in toxic comments
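As a quick sanity check, the first few entries of the sorted dictionary built above can be printed (a minimal illustrative snippet):
```
# peek at the ten most frequent words and their counts in toxic comments
for word, count in list(descend_order_toxic_dict.items())[:10]:
    print(word, count)
```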
<br>For **`severe_toxic`** category
```
descend_order_severe_toxic_dict =descend_order_dict(df_severe_toxic)
```
These are the words most frequently used in severe toxic comments
<br>For **`obscene`** category
```
descend_order_obscene_dict = descend_order_dict(df_obscene)
```
These are the words most frequently used in obscene comments
<br>For **`threat`** category
```
descend_order_threat_dict = descend_order_dict(df_threat)
```
These are the words most frequently used in threat comments
<br>For **`insult`** category
```
descend_order_insult_dict = descend_order_dict(df_insult)
```
These are the words most frequently used in comments labeled as an insult
<br>For **`identity_hate`** category
```
descend_order_id_hate_dict = descend_order_dict(df_identity_hate)
```
These are the most frequently used words in the comments labeled as identity_hate
<br>For comments where all six categories are 1
```
descend_order_all_label_dict = descend_order_dict(df_6)
```
These are the most frequently used words in the comments that carry all six labels
#### <br> Picking up the top n words from the descend vocab dictionary
In this code, top 3 words are considered to make the prediction.
```
# list(descend_order_all_label_dict.keys())[3]
## combining descend vocab dictionaries of all the categories in one dictionary
## with categories as their keys
all_label_descend_vocab = {'toxic':descend_order_toxic_dict,
'severe_toxic':descend_order_severe_toxic_dict,
'obscene':descend_order_obscene_dict,
'threat':descend_order_threat_dict,
'insult':descend_order_insult_dict,
'id_hate':descend_order_id_hate_dict
}
## this function takes two arguments - all_label_freq_word and top n picks
## and outputs a dictionary with categories as keys and list of top 3 words as their values.
def dict_top_n_words(all_label_descend_vocab, n):
count = dict()
for label, words in all_label_descend_vocab.items():
word_list = []
for i in range (0,n):
word_list.append(list(words.keys())[i])
count[label] = word_list
return count
### top 3 words from all the vocabs
dict_top_n_words(all_label_descend_vocab,3)
```
### <br>4. Performance check of baseline Model
```
## Check whether any of the top 3 words from the six categories exists in the comment
def word_intersection(input_str, n, all_words =all_label_descend_vocab):
toxic_pred = []
severe_toxic_pred = []
obscene_pred = []
threat_pred = []
insult_pred = []
id_hate_pred = []
rule_based_pred = [toxic_pred, severe_toxic_pred, obscene_pred, threat_pred,
insult_pred,id_hate_pred ]
# top_n_words = dict_top_n_words[all_label_freq_word,n]
for count,ele in enumerate(list(dict_top_n_words(all_label_descend_vocab,3).values())):
for word in ele:
if (word in input_str):
rule_based_pred[count].append(word)
#print(rule_based_pred)
for i in range (0,len(rule_based_pred)):
if len(rule_based_pred[i])== 0:
rule_based_pred[i]= 0
else:
rule_based_pred[i]= 1
return rule_based_pred
### Test
word_intersection(df['comment_text'][55], 3)
```
<br>Uncomment the below cell to get the prediction on the dataset but it is already saved in `rule_base_pred.pkl` in list form to save time
```
## store the values of predictions by running the word_intersection function on
## all the comments
# rule_base_pred = df['comment_text'].apply(lambda x: word_intersection(x,3))
```
After running the above cell, we get the predictions on the entire dataset for each category in `rule_base_pred`; the original type of `rule_base_pred` is pandas.core.series.Series. This pandas series is converted into a list and saved for future use. The `.pkl` file can be loaded by running the cell below.
```
### save rule_base_pred
# file_name = "rule_base_pred.pkl"
# open_file = open(file_name, "wb")
# pickle.dump(rule_base_pred, open_file)
# # open_file.close()
# open_file = open("rule_base_pred.pkl", "rb")
# pred_rule = pickle.load(open_file)
# open_file.close()
### Open the saved rule_base_pred.pkl
pkl_file = os.path.join(dir_path, 'model', 'rule_base_pred.pkl')
open_file = open(pkl_file, "rb")
pred_rule = pickle.load(open_file)
open_file.close()
## true prediction
y_true = df.drop(['id', 'comment_text'], axis=1)
## check the type
type(y_true), type(pred_rule)
```
<br>Uncomment pred_rule line in below cell to convert the type of predictions from panda series to list,if not using saved `rule_base_pred.pkl`
```
### Change the type to list
pred_true = y_true.values.tolist()
# pred_rule = rule_base_pred.values.tolist()
```
#### Compute accuracy of Baseline Model
```
## Accuracy check for decent and not-decent comments classification
count = 0
for i in range(0, len(df)):
if pred_true[i] == pred_rule[i]:
count = count+1
print("Overall accuracy of rule based classifier : {}".format((count/len(df))*100))
```
Based on the rule implemented here, the baseline classifier classifies decent and not-decent comments with an **accuracy of 76.6%**. Now we have to see whether AI-based models give better performance than this.
```
## Category wise accuracy check
mean = []
for j in range(0, len(pred_true[0])):
count = 0
for i in range(0, len(df)):
if pred_true[i][j] == pred_rule[i][j]:
count = count+1
mean.append(count/len(df)*100)
print("Accuracy of rule based classifier in predicting {} comments : {}".format(label_sequence[j],(count/len(df))*100))
print("Mean accuracy : {}".format(np.array(mean).mean()))
```
The mean accuracy of our *rule-based model* is 92.7%.<br>
The minimum accuracy of the baseline model for predicting the `toxic`, `severe_toxic`, `obscene`, `threat`, `insult`, or `identity_hate` class is more than 88%.
<br>Accuracies for:
<ol>
<li>`toxic`: 89.4%</li>
<li>`severe_toxic`: 88.2%</li>
<li>`obscene`: 96.3%</li>
<li>`threat`: 87.8%</li>
<li>`insult`: 95.8%</li>
<li>`identity_hate`: 98.3%</li>
</ol>
<br>In my opinion this model is doing quite well. As we know, the dataset has more samples for toxic comments than for the rest of the categories, but the model still managed to predict with 89.4% accuracy by considering just the top 3 words from that category's very large vocabulary. It may perform better if we consider more than 3 words from the vocab, because the top 3 words are not necessarily a true representation of this category.
<br>On the other hand, `obscene`, `insult`, and `identity_hate` have very good accuracy rates; it seems the human labellers looked for these top 3 words when labelling comments under these categories.
<br>For the `threat` category, the model should perform well since the number of samples for this category is just 478, which means it has a smaller vocab compared to the other classes; but it seems the human labellers looked at more than these top 3 words of its vocab. This could be checked by tweaking the number of top n words.
```
yp=np.array([np.array(xi) for xi in pred_rule])
type(yp)
# type(y[0])
yp.shape
yt=np.array([np.array(xi) for xi in pred_true])
type(yt)
yt.shape
from sklearn.metrics import jaccard_score
print("Jaccard score is : {}".format(jaccard_score(yt,yp, average= 'weighted')))
```
Judging by the Jaccard similarity, our `rule based model` performs quite poorly.
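For reference, with `average='weighted'` sklearn computes a per-label Jaccard score

$$ J = \frac{TP}{TP + FP + FN} $$

and then averages the per-label scores weighted by label support. Unlike the element-wise accuracy above, this ignores true negatives, which dominate this imbalanced dataset, so the Jaccard score is a much harsher metric.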
# Manual Labeling Data Preparation
Generate the pixels that will be used for training, testing, and validation. This keeps pixels a certain distance apart and ensures they are spatially comprehensive.
```
import rasterio
import random
import matplotlib.pyplot as plt
import os
import sys
import datetime
from sklearn.utils import class_weight
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
module_path = os.path.abspath(os.path.join('rcnn'))
if module_path not in sys.path:
sys.path.append(module_path)
import utilities as util
import importlib
import rnn_tiles
import rnn_pixels
import numpy as np
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point
importlib.reload(rnn_pixels)
importlib.reload(rnn_tiles)
importlib.reload(util)
```
Load in the training data
```
lc_labels = rasterio.open('/deep_data/recurrent_data/NLCD_DATA/landcover/NLCD_2011_Land_Cover_L48_20190424.img')
canopy_labels = rasterio.open('/deep_data/recurrent_data/NLCD_DATA/canopy/CONUSCartographic_2_8_16/Cartographic/nlcd2011_usfs_conus_canopy_cartographic.img')
```
### Ingest the landsat imagery stacked into yearly seasonal tiles
We don't really need to do this here but the code is just copied from the rcnn code
```
tiles = {}
landsat_datasets = {}
tiles['028012'] = ['20110103', '20110308', '20110730', '20110831', '20111103']
tiles['029011'] = ['20110103', '20110308', '20110730', '20110831', '20111018']
tiles['028011'] = ['20110103', '20110308', '20110831', '20111018', '20111103']
for tile_number, dates in tiles.items():
tile_datasets = []
l8_image_paths = []
for date in dates:
l8_image_paths.append('/deep_data/recurrent_data/tile{}/combined/combined{}.tif'.format(tile_number, date))
for fp in l8_image_paths:
tile_datasets.append(rasterio.open(fp))
landsat_datasets[tile_number] = tile_datasets
tile_size = 13
class_count = 6
tile_list = ['028012', '029011', '028011']
class_dict = util.indexed_dictionary
#px = rnn_pixels.make_pixels(tile_size, tile_list)
#clean_px = rnn_pixels.make_clean_pix(tile_list, tile_size, landsat_datasets,lc_labels, canopy_labels, 100, buffer_pix=1)
```
### Testing for Runtime and Memory Usage
```
%load_ext line_profiler
%lprun -f rnn_pixels.tvt_pix_locations rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels, canopy_labels, tile_size, tile_list, clean_pixels, 10, 10, 10, class_dict)
w_tile_gen = rnn_tiles.rnn_tile_gen(landsat_datasets, lc_labels, canopy_labels, tile_size, class_count)
w_generator = w_tile_gen.tile_generator(clean_pixels, batch_size=1, flatten=True, canopy=True)
%lprun -f w_tile_gen.tile_generator w_tile_gen.tile_generator(clean_pixels[:2], batch_size=1, flatten=True, canopy=True)
%timeit w_tile_gen.tile_generator(clean_pixels[:2], batch_size=1, flatten=True, canopy=True)
```
### Generate Data for TVT
```
px = rnn_pixels.make_pixels(1, tile_list)
len(px)
px
2500000 / 2500 * 45 / 60 / 60
print(datetime.datetime.now())
px_to_use = px
clean_pixels = rnn_pixels.delete_bad_tiles(landsat_datasets,lc_labels, canopy_labels, px_to_use, tile_size, buffer_pix=1)
print(datetime.datetime.now())
print(len(clean_pixels))
print(datetime.datetime.now())
px_to_use = px
clean_pixels = rnn_pixels.delete_bad_tiles(landsat_datasets,lc_labels, canopy_labels, px_to_use, tile_size, buffer_pix=1)
print(datetime.datetime.now())
print(len(clean_pixels))
#clean_pixels_subset = clean_pixels[:10000]
print(datetime.datetime.now())
tvt_pixels = rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels,
canopy_labels, tile_size, tile_list, clean_pixels, 150, 150, 1500, class_dict)
print(datetime.datetime.now())
print('test:', len(tvt_pixels[0]), 'val:',len(tvt_pixels[1]), 'train:',len(tvt_pixels[2]))
print(datetime.datetime.now())
tvt_pixels = rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels,
canopy_labels, tile_size, tile_list, clean_pixels, 150, 150, 1500, class_dict)
print(datetime.datetime.now())
print('test:', len(tvt_pixels[0]), 'val:',len(tvt_pixels[1]), 'train:',len(tvt_pixels[2]))
```
#### See if Data is Actually Balanced
```
class_count = 6
pixels = tvt_pixels[2]
# gets balanced pixels locations
w_tile_gen = rnn_tiles.rnn_tile_gen(landsat_datasets, lc_labels, canopy_labels, tile_size, class_count)
w_generator = w_tile_gen.tile_generator(pixels, batch_size=1, flatten=True, canopy=True)
total_labels = list()
count = 0
#buckets = {0:[], 1:[], 2:[], 3:[], 4:[], 5:[]}
buckets = {0:[], 1:[], 2:[], 3:[], 4:[], 5:[]}
while count < len(pixels):
image_b, label_b = next(w_generator)
#print(image_b['tile_input'].shape)
buckets[np.argmax(label_b["landcover"])].append({
"pixel_loc" : pixels[count][0],
"tile_name" : pixels[count][1],
"landcover" : np.argmax(label_b["landcover"]),
"canopy" : float(label_b["canopy"])
}) # appends pixels to dictionary
count+=1
count = 0
for z, j in buckets.items():
print(z, len(j))
count += len(j)
print(count)
# a run with 10,000,000 pixels
# 4 hours to delete bad pixels and 2.5 hours to create tvt
# ended up at
# 0 1500
# 1 1500
# 2 1500
# 3 1500
# 4 204
# 5 1500
# 7704
```
Run through the pixels, buffer each pixel and add it to a GeoPandas dataset, convert the CRS to EPSG:4326, then save that GeoPandas dataset as a shapefile.
```
#count_per_class = 3000
count_per_class = 1500 # 1667 * 6 ~= 10,000
pixel_coords = []
for lc_class in buckets.keys():
for i, pixel in enumerate(buckets[lc_class]):
landsat_ds = landsat_datasets[pixel["tile_name"]][0] # get the stack of ls datasets from that location and take the first
x, y = landsat_ds.xy(pixel["pixel_loc"][0], pixel["pixel_loc"][1])
pixel_coords.append({
"x" : x,
"y" : y,
"row" : pixel["pixel_loc"][0],
"col" : pixel["pixel_loc"][1],
"label" : pixel["landcover"],
"canopy" : pixel["canopy"],
"tile_name" : pixel["tile_name"]
})
if i > count_per_class:
break
# create a dataframe from the pixel coordinates
df = pd.DataFrame(pixel_coords)
df.hist(column="label")
landsat_datasets["029011"][0].crs
gdf = gpd.GeoDataFrame(df, geometry = gpd.points_from_xy(df.x, df.y), crs=landsat_datasets["028011"][0].crs)
gdf.plot()
# buffer by 15 meters to make 30x30 pixel
buffer = gdf.buffer(15)
envelope = buffer.envelope
gdf.geometry = envelope
gdf.head()
gdf = gdf.to_crs("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")
gdf.head()
gdf.hist(column="label", bins=11)
gdf.to_file("train_buffered_points140520.shp",driver='ESRI Shapefile')
reopened_gdf = gpd.read_file("buffered_points.shp")
reopened_gdf.head()
reopened_gdf.hist(column="canopy")
reopened_gdf.crs
reopened_gdf.head()
for index, row in df.iterrows():
print(row['tile_name'])
print(row['canopy'])
print(row['geometry'])
print(row['geometry'].centroid)
break
pd.cut(reopened_gdf['canopy'], 10).value_counts().sort_index()
```
<a href="https://colab.research.google.com/github/hansong0219/Advanced-DeepLearning-Study/blob/master/UNET/UNET_Build.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import os
import sys
from tensorflow.keras.layers import Input, Dropout, Concatenate
from tensorflow.keras.layers import Conv2DTranspose, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import LeakyReLU, Activation
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model
from tensorflow.keras.losses import BinaryCrossentropy
import matplotlib.pyplot as plt
import tensorflow as tf
def down_sample(layer_inputs,filters, size, apply_batchnorm=True):
initializer = tf.random_normal_initializer(0.,0.02)
d = Conv2D(filters, size, strides=2,padding='same', kernel_initializer=initializer, use_bias=False)(layer_inputs)
if apply_batchnorm:
d = BatchNormalization()(d)
d = LeakyReLU(alpha=0.2)(d)
return d
def up_sample(layer_inputs, skip_input,filters, size, dropout_rate=0):
initializer = tf.random_normal_initializer(0.,0.02)
u = Conv2DTranspose(filters, size, strides=2,padding='same', kernel_initializer=initializer,use_bias=False)(layer_inputs)
if dropout_rate:
u = Dropout(dropout_rate)(u)
u = tf.keras.layers.ReLU()(u)
u = Concatenate()([u, skip_input])
return u
def Build_UNET():
input_shape = (256,256,3)
output_channel = 3
inputs = Input(shape=input_shape,name="inputs")
    d1 = down_sample(inputs, 64, 4, apply_batchnorm=False) #(128,128,64)
d2 = down_sample(d1, 128, 4) #(64,64,128)
d3 = down_sample(d2, 256, 4)
d4 = down_sample(d3, 512, 4)
d5 = down_sample(d4, 512, 4)
d6 = down_sample(d5, 512, 4)
d7 = down_sample(d6, 512, 4)
d8 = down_sample(d7, 512, 4)
u7 = up_sample(d8, d7, 512, 4, dropout_rate = 0.5)
u6 = up_sample(u7, d6, 512, 4, dropout_rate = 0.5)
u5 = up_sample(u6, d5, 512, 4, dropout_rate = 0.5)
u4 = up_sample(u5, d4, 512, 4)
u3 = up_sample(u4, d3, 256, 4)
u2 = up_sample(u3, d2, 128, 4)
u1 = up_sample(u2, d1, 64, 4)
initializer = tf.random_normal_initializer(0.,0.02)
outputs = Conv2DTranspose(output_channel,
kernel_size=4,
strides=2,
padding='same',
kernel_initializer=initializer,
activation='tanh')(u1)
return Model(inputs, outputs)
unet = Build_UNET()
optimizer = Adam(1e-4, beta_1=0.5)
unet.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
unet.summary()
plot_model(unet, show_shapes=True, dpi=64)
loss=BinaryCrossentropy(from_logits=True)
optimizer = Adam(1e-4, beta_1=0.5)
unet.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])
```
```
import torch
import torchphysics as tp
import math
import numpy as np
import pytorch_lightning as pl
print('TorchPhysics tutorial:')
print('https://torchphysics.readthedocs.io/en/latest/tutorial/tutorial_start.html')
from IPython.display import Image, Math, Latex
from IPython.core.display import HTML
Image(filename='bearing.png',width = 500, height = 250)
# First define all parameters:
h_0 = 16.e-06 #m = 16 um
dh = 14e-06 #m = 14 um
D = 0.01 #m = 10 mm
L = np.pi*D # length of the domain
u_m = 0.26 #m/s 0.26
beta = 2.2*1e-08 # 2.2e-08 m^2/N
nu_0 = 1.5e-03 # Pa·s = 1.5 mPa·s
# lower and upper bounds of parameters
nu0 = 1.0e-03 # viscosity
nu1 = 2.5e-03
um0 = 0.2 # velocity
um1 = 0.4
dh0 = 10e-6 # gap variation
dh1 = 15e-6
p_0 = 1e+5 # 1e+5 N/m^2 = 1 bar
p_rel = 0 # relative pressure
p_skal = 100000 # scaling pressure for the (-1,1) range
# define h:
def h(x, dh): # <- hier jetzt auch dh als input
return h_0 + dh * torch.cos(2*x/D) # x in [0,pi*D]
# and compute h':
def h_x(x, dh): # <- hier jetzt auch dh als input
return -2.0*dh/D * torch.sin(2*x/D) # x in [0,pi*D]
# define the function of the viscosity.
# Here we need torch.tensors, since the function will be evaluated in the pde.
# At the beginng the model will have values close to 0,
# therefore the viscosity will also be close to zero.
# This will make the pde condition unstable, because we divide by nu.
# For now set values smaller then 1e-06 to 1e-06
def nu_func(nu, p):
out = nu * torch.exp(beta * p*p_skal)
return torch.clamp(out, min=1e-06)
def nu_x_func(nu,p):
out = nu* beta*p_skal*torch.exp(beta*p*p_skal)
return out
# Variables:
x = tp.spaces.R1('x')
nu = tp.spaces.R1('nu')
um = tp.spaces.R1('um')
dh = tp.spaces.R1('dh')
# output
p = tp.spaces.R1('p')
A_x = tp.domains.Interval(x, 0, L)
A_nu = tp.domains.Interval(nu, nu0, nu1)
A_um = tp.domains.Interval(um, um0, um1)
A_dh = tp.domains.Interval(dh, dh0, dh1)
#inner_sampler = tp.samplers.AdaptiveRejectionSampler(A_x*A_nu*A_um*A_dh, n_points = 50000)
inner_sampler = tp.samplers.RandomUniformSampler(A_x*A_nu*A_um*A_dh, n_points = 10000)
# density: 4 points per unit volume
# Boundaries
boundary_v_sampler = tp.samplers.RandomUniformSampler(A_x.boundary*A_nu*A_um*A_dh, n_points = 5000)
#tp.utils.scatter(nu*um*dh, inner_sampler, boundary_v_sampler)
model = tp.models.Sequential(
tp.models.NormalizationLayer(A_x*A_nu*A_um*A_dh),
tp.models.FCN(input_space=x*nu*um*dh, output_space=p, hidden=(50,50,50))
)
display(Math(r'h(x)\frac{d^2 \tilde{p}}{d x^2} +\left( 3 \frac{dh}{dx} - \frac{h}{\nu} \frac{d \nu}{d x} \
\right) \frac{d \tilde{p}}{d x} = \frac{6 u_m \nu}{p_0 h^2} \frac{d h}{d x}\quad \mbox{with} \
\quad \tilde{p}=\frac{p}{p_{skal}} '))
from torchphysics.utils import grad
# Alternatively tp.utils.grad
def pde(nu, p, x, um, dh): # <- dh and um are now also needed as inputs
# evaluate the viscosity and their first derivative
nu = nu_func(nu,p)
nu_x = nu_x_func(nu,p)
# implement the PDE:
    # right hand side
    rs = 6*um*nu #<- um is used here instead of u_m, matching the variable name
    # h and h_x with input dh:
    h_out = h(x, dh) # evaluate only once
    h_x_out = h_x(x, dh) # evaluate only once
#out = h_out * grad(grad(p,x),x)- rs*h_x_out/h_out/h_out/p_skal
out = h_out*grad(grad(p,x),x) + (3*h_x_out -h_out/nu*nu_x)*grad(p,x) - rs*h_x_out/h_out/h_out/p_skal
return out
pde_condition = tp.conditions.PINNCondition(module=model,
sampler=inner_sampler,
residual_fn=pde,
name='pde_condition')
# Here we only ever need the model output, since the condition does not
# depend on nu, um, or dh.
def bc_fun(p):
return p-p_rel
boundary_condition = tp.conditions.PINNCondition(module = model,
sampler = boundary_v_sampler,
residual_fn = bc_fun,
name = 'pde_bc')
opt_setting = tp.solver.OptimizerSetting(torch.optim.AdamW, lr=1e-2) #SGD, LBFGS
solver = tp.solver.Solver((pde_condition, boundary_condition),optimizer_setting = opt_setting)
trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,
num_sanity_val_steps=0,
benchmark=True,
log_every_n_steps=1,
max_steps=1000,
                     #logger=False, for visualization in TensorBoard
checkpoint_callback=False
)
trainer.fit(solver)
opt_setting = tp.solver.OptimizerSetting(torch.optim.LBFGS, lr=1e-2) #SGD, LBFGS
solver = tp.solver.Solver((pde_condition, boundary_condition),optimizer_setting = opt_setting)
trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,
num_sanity_val_steps=0,
benchmark=True,
log_every_n_steps=1,
max_steps=100,
                     #logger=False, for visualization in TensorBoard
checkpoint_callback=False
)
trainer.fit(solver)
import matplotlib.pyplot as plt
solver = solver.to('cpu')
print('nu0= ',nu0,' nu1= ',nu1)
print('dh0= ',dh0, 'dh1= ', dh1, 'm')
print('um0= ', um0, 'um1= ',um1, 'm/s')
# define the parameters for the plot
nu_plot = 2.0e-3
um_plot = 0.4
dh_plot = 14.25e-06
print('Minimum gap height =', h_0-dh_plot)
plot_sampler = tp.samplers.PlotSampler(plot_domain=A_x, n_points=600, device='cpu',
data_for_other_variables={'nu':nu_plot,
'um':um_plot,'dh':dh_plot})
if nu0<=nu_plot and nu_plot<=nu1 and dh0<=dh_plot and dh_plot<=dh1 and um0<=um_plot and um_plot<=um1:
fig = tp.utils.plot(model,lambda p:p,plot_sampler)
else:
    print('Outside the training range')
print('Scaling factor = ', p_skal)
plt.savefig(f'p_{um}.png', dpi=300)
import xlsxwriter
# create a Workbook object with the file name "Gleitlager.xlsx" and write a small results table
workbook = xlsxwriter.Workbook('Gleitlager.xlsx')
worksheet = workbook.add_worksheet('Tabelle_1')
worksheet.write('A1', 'Ergebnistabelle Gleitlager')   # table title
worksheet.write_row('A2', ['nu', 'dh', 'um'])         # column headers
workbook.close()
import winsound
frequency = 2500 # Set Frequency To 2500 Hertz
duration = 1000 # Set Duration To 1000 ms == 1 second
winsound.Beep(frequency, duration)
```
This notebook looks at the Bexar County online district clerk criminal records available at https://www.bexar.org/2988/Online-District-Clerk-Criminal-Records
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%matplotlib inline
Bexar_Criminal_AB_df = pd.read_csv(r'http://edocs.bexar.org/cc/DC_cjjorad_a_b.csv',header=0)
Bexar_Criminal_C_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_c.csv',header=0)
Bexar_Criminal_DF_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_d_f.csv',header=0)
Bexar_Criminal_G_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_g.csv',header=0)
Bexar_Criminal_HK_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_h_k.csv',header=0)
Bexar_Criminal_L_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_l.csv',header=0)
Bexar_Criminal_M_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_m.csv',header=0)
Bexar_Criminal_NQ_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_n_q.csv',header=0)
Bexar_Criminal_R_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_r.csv',header=0)
Bexar_Criminal_S_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_s.csv',header=0)
Bexar_Criminal_TV_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_t_v.csv',header=0)
Bexar_Criminal_WZ_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_w_z.csv',header=0)
dir()
pd.set_option('display.max_columns', None)
combined_list = [Bexar_Criminal_AB_df, Bexar_Criminal_C_df, Bexar_Criminal_DF_df, Bexar_Criminal_G_df, Bexar_Criminal_HK_df, Bexar_Criminal_L_df, Bexar_Criminal_M_df, Bexar_Criminal_NQ_df, Bexar_Criminal_R_df, Bexar_Criminal_S_df, Bexar_Criminal_TV_df, Bexar_Criminal_WZ_df]
combined_df = pd.concat(combined_list, axis=0, ignore_index=True)
combined_df.info()
combined_df.loc[:, 'BIRTHDATE_dt'] = pd.to_datetime(combined_df['BIRTHDATE'], errors='coerce')
help(pd.to_datetime)
combined_df.BIRTHDATE_dt.min()
combined_df.BIRTHDATE_dt.max()
combined_df[combined_df.BIRTHDATE_dt == '1965']
combined_df[(combined_df['ADDR-ZIP-CODE'] == 78230) & (combined_df['BIRTHDATE_dt'] > '1980')]
combined_df.columns
combined_df[combined_df['ADDR-STREET'].str.contains('TERRACE PLACE')]
```
https://search.bexar.org/Case/CaseDetail?r=5ae98979-5190-4b60-b939-cb5ddef960cd&cs=2007CR11191W&ct=&&p=1_2007CR11191W+++D1871272998100000
```
combined_df.loc[:, 'OFFENSE-DATE_dt'] = pd.to_datetime(combined_df['OFFENSE-DATE'], errors='coerce')
```
https://www.krem.com/article/news/crime/police-say-guns-drugs-pipe-bombs-found-in-north-san-antonio-home-2-arrested/273-073a5835-3afc-4a01-9742-234a84bfed85
```
combined_df[combined_df['FULL-NAME'].str.contains('HOTTLE')]
combined_df[(combined_df['OFFENSE-DATE_dt'] =='2019-07-17')]
combined_df[combined_df['FULL-NAME'].str.contains('FERRILL')]
combined_df[(combined_df['BIRTHDATE_dt'] == '1997-09-29')]
combined_df[combined_df['FULL-NAME'].str.contains('LEDBETTER')]
combined_df[(combined_df['OFFENSE-DATE_dt'] =='2008-07-26')]
```
https://search.bexar.org/Case/CaseDetail?r=103d31d3-30d0-4a6d-bb7e-3a67205e3f52&cs=2009CR2391A&ct=&=&full=y&p=1_2009CR2391A%20%20%20%20D4371336448100000#events
```
x = combined_df[(combined_df['BIRTHDATE_dt'] == '1985-10-25') & (combined_df['FULL-NAME'].str.contains('ROBLES'))]
x.sort_values(by=['OFFENSE-DATE_dt'])
combined_df['OFFENSE-DATE_dt'].max()
len(combined_df)
combined_df[combined_df['OFFENSE-DATE_dt'] > '2021-8-1']
combined_df[combined_df['FULL-NAME'].str.contains('NICEFORO')]
```
# Quantum device tuning via hypersurface sampling
**NOTE: DUE TO MULTIPROCESSING PACKAGE THE CURRENT IMPLEMENTATION ONLY WORKS ON UNIX/LINUX OPERATING SYSTEMS [TO RUN ON WINDOWS FOLLOW THIS GUIDE](Resources/Running_on_windows.ipynb)**
Quantum devices used to implement spin qubits in semiconductors are challenging to tune and characterise. Often the best approach to tuning such devices is manual tuning or a simple heuristic algorithm that is not flexible across devices. This repository contains the statistical tuning approach detailed in https://arxiv.org/abs/2001.02589 with some additional functionality; a quick animated explanation of the approach detailed in the paper is available [here](Resources/Algorithm_overview/README.ipynb). This approach is promising as it makes few assumptions about the device being tuned and hence can be applied to many systems without alteration. **For instructions on how to run a simple fake environment to see how the algorithm works see [this README.md](Playground/README.ipynb)**
## Dependencies
The required packages required to run the algorithm are:
```
scikit-image
scipy
numpy
matplotlib
GPy
mkl
pyDOE
```
# Using the algorithm
Using the algorithm varies depending on what measurement software you use in your lab or what you want to achieve. Specifically if your lab utilises pygor then you should call a different function to initiate the tuning. If you are unable to access a lab then you can still create a virtual environment to test the algorithm in using the Playground module. Below is documentation detailing how to run the algorithm for each of these situations.
## Without pygor
To use the algorithm without pygor you must create the following:
- jump
- measure
- check
- config_file
Below is an **EXAMPLE** of how jump, check, and measure **COULD** be defined for a 5 gate device with 2 investigation (in this case plunger) gates.
<ins>jump:</ins>
Jump should be a function that takes an array of values and sets them on the device. It should also accept a flag that indicates whether the investigation gates (typically plunger gates) should be used.
```python
def jump(params,inv=False):
if inv:
labels = ["dac4","dac6"] #plunger gates
else:
labels = ["dac3","dac4","dac5","dac6","dac7"] #all gates
assert len(params) == len(labels) #params needs to be the same length as labels
for i in range(len(params)):
set_value_to_dac(labels[i],params[i]) #function that takes dac key and value and sets dac to that value
return params
```
<ins>measure:</ins>
measure should be a function that returns the measured current on the device.
```python
def measure():
current = get_value_from_daq() #receive a single current measurement from the daq
return current
```
<ins>check:</ins>
check should be a function that returns the state of all relevant dac channels.
```python
def check(inv=True):
if inv:
labels = ["dac4","dac6"] #plunger gates
else:
labels = ["dac3","dac4","dac5","dac6","dac7"] #all gates
dac_state = [None]*len(labels)
for i in range(len(labels)):
dac_state[i] = get_current_dac_state(labels[i]) #function that takes dac key and returns state that channel is in
return dac_state
```
<ins>config_file:</ins>
config_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be "config.json". For information on what the config file should contain see the json config section.
### How to run
To run tuning without pygor once the above has been defined call the following:
```python
from AutoDot.tune import tune_from_file
tune_from_file(jump,measure,check,config_file)
```
## With pygor
To use the algorithm without pygor you must create the following:
<ins>config_file:</ins>
config_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be "config.json". For information on what the config file should contain see the json config section. Additional fields are required to specify pygor location and setup.
### How to run
To run tuning with pygor once the above has been defined call the following:
```python
from AutoDot.tune import tune_with_pygor_from_file
tune_with_pygor_from_file(config_file)
```
## With playground (environment)
To use the algorithm using the playground you must create the following:
<ins>config_file:</ins>
config_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be "config.json".
In the config you must supply the field "playground"; in this field you specify the basic shapes you want to build your environment out of. Provided are a [demo config file](mock_device_demo_config.json) and a [README](Playground/README.ipynb) detailing how it works and what a typical run looks like.
### How to run
To run tuning with the playground once the above has been defined call the following:
```python
from AutoDot.tune import tune_with_playground_from_file
tune_with_playground_from_file(config_file)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/cccaaannn/machine_learning_colab/blob/master/feature_selection/data_mining_hw3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Feature selection methods

download-unzip data
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00320/student.zip
!unzip student.zip
```
imports
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
```
load data
```
df = pd.read_csv("student-mat.csv", index_col=0, delimiter = ";")
```
print head
```
df.head(5)
```
Options
```
p_value_threshold = 0.05
```
Pearson
```
num_cols = df.drop(["G3","G2","G1"], axis=1)._get_numeric_data().columns
pearson_corrs = []
for col in num_cols:
s = stats.pearsonr(df[col], df['G3'])
if(s[1] < p_value_threshold):
pearson_corrs.append((col, s[0], s[1]))
pearson_corrs = sorted(pearson_corrs, reverse=True, key=lambda tup: abs(tup[1]))
for i in pearson_corrs:
print("columns: {0:<12} correlation: {1:<22} p-values: {2:<22}".format(i[0], i[1], i[2]))
```
Spearman
```
num_cols = df.drop(["G3","G2","G1"], axis=1)._get_numeric_data().columns
spearman_corrs = []
for col in num_cols:
s = stats.spearmanr(df[col], df['G3'])
if(s[1] < p_value_threshold):
        spearman_corrs.append((col, s[0], s[1]))
spearman_corrs = sorted(spearman_corrs, reverse=True, key=lambda tup: abs(tup[1]))
for i in spearman_corrs:
print("columns: {0:<12} correlation: {1:<22} p-values: {2:<22}".format(i[0], i[1], i[2]))
```
Kendall
```
num_cols = df.drop(["G3","G2","G1"], axis=1)._get_numeric_data().columns
kendall_corrs = []
for col in num_cols:
s = stats.kendalltau(df[col], df['G3'])
if(s[1] < p_value_threshold):
kendall_corrs.append((col, s[0], s[1]))
kendall_corrs = sorted(kendall_corrs, reverse=True, key=lambda tup: abs(tup[1]))
for i in kendall_corrs:
print("columns: {0:<12} correlation: {1:<22} p-values: {2:<22}".format(i[0], i[1], i[2]))
```
f-value
```
cols = df.columns
num_cols = df._get_numeric_data().columns
categorical_cols = list(set(cols) - set(num_cols))
f_value_corrs = []
for categorical_col in categorical_cols:
groups = []
column_categories = df[categorical_col].unique()
for column_category in column_categories:
groups.append(df[df[categorical_col] == column_category].age)
f, p = stats.f_oneway(*groups)
if(p < p_value_threshold):
f_value_corrs.append((categorical_col, f, p, ", ".join(list(column_categories))))
f_value_corrs = sorted(f_value_corrs, reverse=True, key=lambda tup: abs(tup[1]))
for i in f_value_corrs:
print("columns: {0:<12} correlation: {1:<20} p-values: {2:<22} categories: {3:<22}".format(i[0], i[1], i[2], i[3]))
```
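For comparison (not part of the original notebook), the same filter-style ranking of the numeric features against `G3` could be sketched with scikit-learn's `SelectKBest`:
```
from sklearn.feature_selection import SelectKBest, f_regression

X = df.drop(["G3", "G2", "G1"], axis=1)._get_numeric_data()
y = df["G3"]
selector = SelectKBest(score_func=f_regression, k=5).fit(X, y)
print(list(X.columns[selector.get_support()]))  # five highest-scoring numeric features by F-test
```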
| github_jupyter |
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
import gc
import os
import sys
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error,mean_absolute_error
from sklearn.ensemble import RandomForestRegressor
from timeit import default_timer as timer
import lightgbm as lgb
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
# Save memory by downcasting column data types
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB --> {:.2f} MB (Decreased by {:.1f}%)'.format(
start_mem, end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
```
# Load the Dataset
```
def state(message,start = True, time = 0):
if(start):
print(f'Working on {message} ... ')
else :
print(f'Working on {message} took ({round(time , 3)}) Sec \n')
# Import dataset
df_train = pd.read_csv('../input/train_V2.csv')
df_test = pd.read_csv('../input/test_V2.csv')
# Reduce memory use
df_train=reduce_mem_usage(df_train)
df_test=reduce_mem_usage(df_test)
# Show some data
df_train.head()
df_train.describe()
```
# Basic Data Cleaning
```
# The placement percentage is computed from the worst placement in the match (maxPlace) rather than the number of groups, and the two are redundant, so drop numGroups
# longestKill is not recorded accurately and the official docs advise caution with rankPoints, so drop both as well
df_train = df_train.drop(['longestKill', 'numGroups', 'rankPoints'], axis=1)
df_test = df_test.drop(['longestKill', 'numGroups', 'rankPoints'], axis=1)
# Drop missing values
df_train[df_train['winPlacePerc'].isnull()]
df_train.drop(2744604, inplace=True)
```
# Feature Engineering
```
def feature_engineering(df,is_train=True):
if is_train:
df = df[df['maxPlace'] > 1]
state('totalDistance')
s = timer()
df['totalDistance'] = df['rideDistance'] + df["walkDistance"] + df["swimDistance"]
e = timer()
state('totalDistance', False, e - s)
state('killPlace_over_maxPlace')
s = timer()
df['killPlace_over_maxPlace'] = df['killPlace'] / df['maxPlace']
e = timer()
state('killPlace_over_maxPlace', False, e - s)
state('healsandboosts')
s = timer()
df['healsandboosts'] = df['heals'] + df['boosts']
e = timer()
state('healsandboosts', False, e - s)
target = 'winPlacePerc'
features = list(df.columns)
    # Drop nominal / identifier features
features.remove("Id")
features.remove("matchId")
features.remove("groupId")
features.remove("matchDuration")
features.remove("matchType")
y = None
if is_train:
y = np.array(df.groupby(['matchId', 'groupId'])[target].agg('mean'), dtype=np.float64)
        # Remove the placement percentage (the prediction target) from the features
features.remove(target)
    # For each group within a match, compute the mean of each feature and its percentile rank within the match
print("get group mean feature")
agg = df.groupby(['matchId', 'groupId'])[features].agg('mean')
agg_rank = agg.groupby(['matchId'])[features].rank(pct=True).reset_index()
    # Create a new DataFrame indexed by matchId and groupId
if is_train:
df_out = agg.reset_index()[['matchId', 'groupId']]
else:
df_out = df[['matchId', 'groupId']]
    # Merge the new features into df_out on matchId and groupId
df_out = df_out.merge(agg.reset_index(), suffixes=["", ""], how='left', on=['matchId', 'groupId'])
df_out = df_out.merge(agg_rank, suffixes=["_mean", "_mean_rank"], how='left', on=['matchId', 'groupId'])
    # For each group within a match, compute the median of each feature and its percentile rank within the match
print("get group median feature")
agg = df.groupby(['matchId','groupId'])[features].agg('median')
agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()
    # Merge the new features into df_out on matchId and groupId
df_out = df_out.merge(agg.reset_index(), suffixes=["", ""], how='left', on=['matchId', 'groupId'])
df_out = df_out.merge(agg_rank, suffixes=["_median", "_median_rank"], how='left', on=['matchId', 'groupId'])
    # For each group within a match, compute the maximum of each feature and its percentile rank within the match
print("get group max feature")
agg = df.groupby(['matchId','groupId'])[features].agg('max')
agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()
    # Merge the new features into df_out on matchId and groupId
df_out = df_out.merge(agg.reset_index(), suffixes=["", ""], how='left', on=['matchId', 'groupId'])
df_out = df_out.merge(agg_rank, suffixes=["_max", "_max_rank"], how='left', on=['matchId', 'groupId'])
    # For each group within a match, compute the minimum of each feature and its percentile rank within the match
print("get group min feature")
agg = df.groupby(['matchId','groupId'])[features].agg('min')
agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()
    # Merge the new features into df_out on matchId and groupId
df_out = df_out.merge(agg.reset_index(), suffixes=["", ""], how='left', on=['matchId', 'groupId'])
df_out = df_out.merge(agg_rank, suffixes=["_min", "_min_rank"], how='left', on=['matchId', 'groupId'])
    # For each group within a match, compute the sum of each feature and its percentile rank within the match
    print("get group sum feature")
agg = df.groupby(['matchId','groupId'])[features].agg('sum')
agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()
    # Merge the new features into df_out on matchId and groupId
df_out = df_out.merge(agg.reset_index(), suffixes=["", ""], how='left', on=['matchId', 'groupId'])
df_out = df_out.merge(agg_rank, suffixes=["_sum", "_sum_rank"], how='left', on=['matchId', 'groupId'])
    # Count the number of players in each group within a match (group size)
print("get group size feature")
agg = df.groupby(['matchId','groupId']).size().reset_index(name='group_size')
    # Merge the group_size feature into df_out on matchId and groupId
df_out = df_out.merge(agg, how='left', on=['matchId', 'groupId'])
    # Compute the match-level mean of each feature
print("get match mean feature")
agg = df.groupby(['matchId'])[features].agg('mean').reset_index()
    # Merge the new features into df_out on matchId
df_out = df_out.merge(agg, suffixes=["", "_match_mean"], how='left', on=['matchId'])
    # Count the number of groups in each match (match size)
print("get match size feature")
agg = df.groupby(['matchId']).size().reset_index(name='match_size')
    # Merge the new feature into df_out on matchId
df_out = df_out.merge(agg, how='left', on=['matchId'])
    # Drop matchId and groupId
df_out.drop(["matchId", "groupId"], axis=1, inplace=True)
df_out = reduce_mem_usage(df_out)
X = np.array(df_out, dtype=np.float64)
del df, df_out, agg, agg_rank
gc.collect()
return X, y
x_train, y_train = feature_engineering(df_train,True)
x_test, _ = feature_engineering(df_test,False)
```
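The groupby-and-rank pattern above may be easier to see on a toy example (illustration only, not part of the original notebook): aggregate a feature per (matchId, groupId), then rank each group's aggregate within its match as a percentile.
```
import pandas as pd

toy = pd.DataFrame({
    'matchId': ['m1', 'm1', 'm1', 'm1'],
    'groupId': ['g1', 'g1', 'g2', 'g2'],
    'kills':   [2, 4, 1, 1],
})
agg = toy.groupby(['matchId', 'groupId'])['kills'].agg('mean')  # g1 -> 3.0, g2 -> 1.0
agg_rank = agg.groupby('matchId').rank(pct=True)                # g1 -> 1.0, g2 -> 0.5
print(agg)
print(agg_rank)
```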
# Build the Models
```
# Split the dataset into training and validation sets
random_seed=1
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.05, random_state=random_seed)
```
## Random Forest
```
RF = RandomForestRegressor(n_estimators=10, min_samples_leaf=3, max_features=0.5, n_jobs=-1)
%%time
RF.fit(x_train, y_train)
mae_train_RF = mean_absolute_error(RF.predict(x_train), y_train)
mae_val_RF = mean_absolute_error(RF.predict(x_val), y_val)
print('mae train RF: ', mae_train_RF)
print('mae val RF: ', mae_val_RF)
```
## LightGBM
```
def run_lgb(train_X, train_y, val_X, val_y, x_test):
params = {"objective" : "regression",
"metric" : "mae",
'n_estimators':20000,
'early_stopping_rounds':200,
"num_leaves" : 31,
"learning_rate" : 0.05,
"bagging_fraction" : 0.7,
"bagging_seed" : 0,
"num_threads" : 4,
"colsample_bytree" : 0.7
}
lgtrain = lgb.Dataset(train_X, label=train_y)
lgval = lgb.Dataset(val_X, label=val_y)
model = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], early_stopping_rounds=200, verbose_eval=1000)
pred_test_y = model.predict(x_test, num_iteration=model.best_iteration)
return pred_test_y, model
%%time
# Train the model
pred_test_lgb, model = run_lgb(x_train, y_train, x_val, y_val, x_test)
mae_train_lgb = mean_absolute_error(model.predict(x_train, num_iteration=model.best_iteration), y_train)
mae_val_lgb = mean_absolute_error(model.predict(x_val, num_iteration=model.best_iteration), y_val)
print('mae train lgb: ', mae_train_lgb)
print('mae val lgb: ', mae_val_lgb)
```
## DNN
```
def run_DNN(x_train, y_train, x_val, y_val, x_test):
NN_model = Sequential()
NN_model.add(Dense(x_train.shape[1], input_dim = x_train.shape[1], activation='relu'))
NN_model.add(Dense(136, activation='relu'))
NN_model.add(Dense(136, activation='relu'))
NN_model.add(Dense(136, activation='relu'))
NN_model.add(Dense(136, activation='relu'))
NN_model.add(Dense(1, activation='linear'))
NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])
NN_model.summary()
checkpoint_name = 'Weights-{epoch:03d}--{val_loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(checkpoint_name, monitor='val_loss', verbose = 1, save_best_only = True, mode ='auto')
callbacks_list = [checkpoint]
NN_model.fit(x=x_train,
y=y_train,
batch_size=1000,
epochs=30,
verbose=1,
callbacks=callbacks_list,
validation_split=0.15,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None)
pred_test_y = NN_model.predict(x_test)
pred_test_y = pred_test_y.reshape(-1)
return pred_test_y, NN_model
%%time
# Train the model
pred_test_DNN, model = run_DNN(x_train, y_train, x_val, y_val, x_test)
mae_train_DNN = mean_absolute_error(model.predict(x_train), y_train)
mae_val_DNN = mean_absolute_error(model.predict(x_val), y_val)
print('mae train dnn: ', mae_train_DNN)
print('mae val dnn: ', mae_val_DNN)
```
# Make Predictions with the Trained Models
## Random Forest
```
pred_test_RF = RF.predict(x_test)
df_test['winPlacePerc_RF'] = pred_test_RF
submission = df_test[['Id', 'winPlacePerc_RF']]
submission.to_csv('../output/submission_RF.csv', index=False)
```
## LightGBM
```
df_test['winPlacePerc_lgb'] = pred_test_lgb
submission = df_test[['Id', 'winPlacePerc_lgb']]
submission.to_csv('../output/submission_lgb.csv', index=False)
```
## DNN
```
df_test['winPlacePerc_DNN'] = pred_test_DNN
submission = df_test[['Id', 'winPlacePerc_DNN']]
submission.to_csv('../output/submission_DNN.csv', index=False)
```
## Ensemble the models (RF + DNN + LightGBM) with weights based on their validation-set MAE
```
weight_DNN = (1 - mae_val_DNN) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)
weight_RF = (1 - mae_val_RF) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)
weight_lgb = (1 - mae_val_lgb) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)
df_test['winPlacePerc'] = df_test.apply(lambda x: x['winPlacePerc_RF'] * weight_RF + x['winPlacePerc_DNN'] * weight_DNN + x['winPlacePerc_lgb'] * weight_lgb, axis=1)
submission = df_test[['Id', 'winPlacePerc']]
submission.to_csv('../output/submission.csv', index=False)
```
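As a quick sanity check of this weighting scheme, here is the arithmetic with purely hypothetical MAE values (not results from this run):
```
# Hypothetical validation MAEs, only to illustrate the weighting formula above
mae_val_RF, mae_val_lgb, mae_val_DNN = 0.07, 0.05, 0.06
den = 3 - mae_val_DNN - mae_val_RF - mae_val_lgb               # 2.82
weights = [(1 - m) / den for m in (mae_val_RF, mae_val_lgb, mae_val_DNN)]
print(weights, sum(weights))  # ~[0.330, 0.337, 0.333]; the weights sum to 1 (up to rounding)
```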
| github_jupyter |
# Class Diagrams
This is a simple viewer for class diagrams. Customized towards the book.
**Prerequisites**
* _Refer to earlier chapters as notebooks here, as here:_ [Earlier Chapter](Debugger.ipynb).
```
import bookutils
```
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from debuggingbook.ClassDiagram import <identifier>
```
and then make use of the following features.
The function `display_class_hierarchy()` shows the class hierarchy for the given class (or list of classes).
* The keyword parameter `public_methods`, if given, is a list of "public" methods to be used by clients (default: all methods with docstrings).
* The keyword parameter `abstract_classes`, if given, is a list of classes to be displayed as "abstract" (i.e. with a cursive class name).
```python
>>> display_class_hierarchy(D_Class, abstract_classes=[A_Class])
```

## Getting a Class Hierarchy
```
import inspect
```
Using `mro()`, we can access the class hierarchy. We make sure to avoid duplicates created by `class X(X)`.
```
# ignore
from typing import Callable, Dict, Type, Set, List, Union, Any, Tuple, Optional
def class_hierarchy(cls: Type) -> List[Type]:
superclasses = cls.mro()
hierarchy = []
last_superclass_name = ""
for superclass in superclasses:
if superclass.__name__ != last_superclass_name:
hierarchy.append(superclass)
last_superclass_name = superclass.__name__
return hierarchy
```
Here's an example:
```
class A_Class:
"""A Class which does A thing right.
Comes with a longer docstring."""
def foo(self) -> None:
"""The Adventures of the glorious Foo"""
pass
def quux(self) -> None:
"""A method that is not used."""
pass
class A_Class(A_Class):
# We define another function in a separate cell.
def second(self) -> None:
pass
class B_Class(A_Class):
"""A subclass inheriting some methods."""
VAR = "A variable"
def foo(self) -> None:
"""A WW2 foo fighter."""
pass
def bar(self, qux: Any = None, bartender: int = 42) -> None:
"""A qux walks into a bar.
`bartender` is an optional attribute."""
pass
class C_Class:
"""A class injecting some method"""
def qux(self) -> None:
pass
class D_Class(B_Class, C_Class):
"""A subclass inheriting from multiple superclasses.
Comes with a fairly long, but meaningless documentation."""
def foo(self) -> None:
B_Class.foo(self)
class D_Class(D_Class):
    pass # An incremental addition that should not impact D's semantics
class_hierarchy(D_Class)
```
## Getting a Class Tree
We can use `__bases__` to obtain the immediate base classes.
```
D_Class.__bases__
```
`class_tree()` returns a class tree, using the "lowest" (most specialized) class with the same name.
```
def class_tree(cls: Type, lowest: Type = None) -> List[Tuple[Type, List]]:
ret = []
for base in cls.__bases__:
if base.__name__ == cls.__name__:
if not lowest:
lowest = cls
ret += class_tree(base, lowest)
else:
if lowest:
cls = lowest
ret.append((cls, class_tree(base)))
return ret
class_tree(D_Class)
class_tree(D_Class)[0][0]
assert class_tree(D_Class)[0][0] == D_Class
```
`class_set()` flattens the tree into a set:
```
def class_set(classes: Union[Type, List[Type]]) -> Set[Type]:
if not isinstance(classes, list):
classes = [classes]
ret = set()
def traverse_tree(tree: List[Tuple[Type, List]]) -> None:
for (cls, subtrees) in tree:
ret.add(cls)
            traverse_tree(subtrees)
for cls in classes:
traverse_tree(class_tree(cls))
return ret
class_set(D_Class)
assert A_Class in class_set(D_Class)
assert B_Class in class_set(D_Class)
assert C_Class in class_set(D_Class)
assert D_Class in class_set(D_Class)
class_set([B_Class, C_Class])
```
### Getting Docs
```
A_Class.__doc__
A_Class.__bases__[0].__doc__
A_Class.__bases__[0].__name__
D_Class.foo
D_Class.foo.__doc__
A_Class.foo.__doc__
def docstring(obj: Any) -> str:
doc = inspect.getdoc(obj)
return doc if doc else ""
docstring(A_Class)
docstring(D_Class.foo)
def unknown() -> None:
pass
docstring(unknown)
import html
import re
def escape(text: str) -> str:
text = html.escape(text)
assert '<' not in text
assert '>' not in text
    text = text.replace('{', '&#x7b;')
    text = text.replace('|', '&#x7c;')
    text = text.replace('}', '&#x7d;')
return text
escape("f(foo={})")
def escape_doc(docstring: str) -> str:
DOC_INDENT = 0
    docstring = "&#x0a;".join(
' ' * DOC_INDENT + escape(line).strip()
for line in docstring.split('\n')
)
return docstring
print(escape_doc("'Hello\n {You|Me}'"))
```
## Getting Methods and Variables
```
inspect.getmembers(D_Class)
def class_items(cls: Type, pred: Callable) -> List[Tuple[str, Any]]:
def _class_items(cls: Type) -> List:
all_items = inspect.getmembers(cls, pred)
for base in cls.__bases__:
all_items += _class_items(base)
return all_items
unique_items = []
items_seen = set()
for (name, item) in _class_items(cls):
if name not in items_seen:
unique_items.append((name, item))
items_seen.add(name)
return unique_items
def class_methods(cls: Type) -> List[Tuple[str, Callable]]:
return class_items(cls, inspect.isfunction)
def defined_in(name: str, cls: Type) -> bool:
if not hasattr(cls, name):
return False
defining_classes = []
def search_superclasses(name: str, cls: Type) -> None:
if not hasattr(cls, name):
return
for base in cls.__bases__:
if hasattr(base, name):
defining_classes.append(base)
search_superclasses(name, base)
search_superclasses(name, cls)
if any(cls.__name__ != c.__name__ for c in defining_classes):
return False # Already defined in superclass
return True
assert not defined_in('VAR', A_Class)
assert defined_in('VAR', B_Class)
assert not defined_in('VAR', C_Class)
assert not defined_in('VAR', D_Class)
def class_vars(cls: Type) -> List[Any]:
def is_var(item: Any) -> bool:
return not callable(item)
return [item for item in class_items(cls, is_var)
if not item[0].startswith('__') and defined_in(item[0], cls)]
class_methods(D_Class)
class_vars(B_Class)
```
We're only interested in
* functions _defined_ in that class
* functions that come with a docstring
```
def public_class_methods(cls: Type) -> List[Tuple[str, Callable]]:
return [(name, method) for (name, method) in class_methods(cls)
if method.__qualname__.startswith(cls.__name__)]
def doc_class_methods(cls: Type) -> List[Tuple[str, Callable]]:
return [(name, method) for (name, method) in public_class_methods(cls)
if docstring(method) is not None]
public_class_methods(D_Class)
doc_class_methods(D_Class)
def overloaded_class_methods(classes: Union[Type, List[Type]]) -> Set[str]:
all_methods: Dict[str, Set[Callable]] = {}
for cls in class_set(classes):
for (name, method) in class_methods(cls):
if method.__qualname__.startswith(cls.__name__):
all_methods.setdefault(name, set())
all_methods[name].add(cls)
return set(name for name in all_methods if len(all_methods[name]) >= 2)
overloaded_class_methods(D_Class)
```
## Drawing Class Hierarchy with Method Names
```
from inspect import signature
import warnings
def display_class_hierarchy(classes: Union[Type, List[Type]],
public_methods: Optional[List] = None,
abstract_classes: Optional[List] = None,
include_methods: bool = True,
include_class_vars: bool =True,
include_legend: bool = True,
project: str = 'fuzzingbook',
log: bool = False) -> Any:
"""Visualize a class hierarchy.
`classes` is a Python class (or a list of classes) to be visualized.
`public_methods`, if given, is a list of methods to be shown as "public" (bold).
(Default: all methods with a docstring)
`abstract_classes`, if given, is a list of classes to be shown as "abstract" (cursive).
(Default: all classes with an abstract method)
`include_methods`: if True, include all methods (default)
`include_legend`: if True, include a legend (default)
"""
from graphviz import Digraph
if project == 'debuggingbook':
CLASS_FONT = 'Raleway, Helvetica, Arial, sans-serif'
CLASS_COLOR = '#6A0DAD' # HTML 'purple'
else:
CLASS_FONT = 'Patua One, Helvetica, sans-serif'
CLASS_COLOR = '#B03A2E'
METHOD_FONT = "'Fira Mono', 'Source Code Pro', 'Courier', monospace"
METHOD_COLOR = 'black'
if isinstance(classes, list):
starting_class = classes[0]
else:
starting_class = classes
classes = [starting_class]
title = starting_class.__name__ + " class hierarchy"
dot = Digraph(comment=title)
dot.attr('node', shape='record', fontname=CLASS_FONT)
dot.attr('graph', rankdir='BT', tooltip=title)
dot.attr('edge', arrowhead='empty')
edges = set()
overloaded_methods: Set[str] = set()
drawn_classes = set()
def method_string(method_name: str, public: bool, overloaded: bool,
fontsize: float = 10.0) -> str:
method_string = f'<font face="{METHOD_FONT}" point-size="{str(fontsize)}">'
if overloaded:
name = f'<i>{method_name}()</i>'
else:
name = f'{method_name}()'
if public:
method_string += f'<b>{name}</b>'
else:
method_string += f'<font color="{METHOD_COLOR}">' \
f'{name}</font>'
method_string += '</font>'
return method_string
def var_string(var_name: str, fontsize: int = 10) -> str:
var_string = f'<font face="{METHOD_FONT}" point-size="{str(fontsize)}">'
var_string += f'{var_name}'
var_string += '</font>'
return var_string
def is_overloaded(method_name: str, f: Any) -> bool:
return (method_name in overloaded_methods or
(docstring(f) is not None and "in subclasses" in docstring(f)))
def is_abstract(cls: Type) -> bool:
if not abstract_classes:
return inspect.isabstract(cls)
return (cls in abstract_classes or
any(c.__name__ == cls.__name__ for c in abstract_classes))
def is_public(method_name: str, f: Any) -> bool:
if public_methods:
return (method_name in public_methods or
f in public_methods or
any(f.__qualname__ == m.__qualname__
for m in public_methods))
return bool(docstring(f))
def class_vars_string(cls: Type, url: str) -> str:
cls_vars = class_vars(cls)
if len(cls_vars) == 0:
return ""
vars_string = f'<table border="0" cellpadding="0" ' \
f'cellspacing="0" ' \
f'align="left" tooltip="{cls.__name__}" href="#">'
for (name, var) in cls_vars:
if log:
print(f" Drawing {name}")
var_doc = escape(f"{name} = {repr(var)}")
tooltip = f' tooltip="{var_doc}"'
href = f' href="{url}"'
vars_string += f'<tr><td align="left" border="0"' \
f'{tooltip}{href}>'
vars_string += var_string(name)
vars_string += '</td></tr>'
vars_string += '</table>'
return vars_string
def class_methods_string(cls: Type, url: str) -> str:
methods = public_class_methods(cls)
# return "<br/>".join([name + "()" for (name, f) in methods])
if len(methods) == 0:
return ""
methods_string = f'<table border="0" cellpadding="0" ' \
f'cellspacing="0" ' \
f'align="left" tooltip="{cls.__name__}" href="#">'
for public in [True, False]:
for (name, f) in methods:
if public != is_public(name, f):
continue
if log:
print(f" Drawing {name}()")
if is_public(name, f) and not docstring(f):
warnings.warn(f"{f.__qualname__}() is listed as public,"
f" but has no docstring")
overloaded = is_overloaded(name, f)
method_doc = escape(name + str(inspect.signature(f)))
if docstring(f):
                    method_doc += ":&#x0a;" + escape_doc(docstring(f))
# Tooltips are only shown if a href is present, too
tooltip = f' tooltip="{method_doc}"'
href = f' href="{url}"'
methods_string += f'<tr><td align="left" border="0"' \
f'{tooltip}{href}>'
methods_string += method_string(name, public, overloaded)
methods_string += '</td></tr>'
methods_string += '</table>'
return methods_string
def display_class_node(cls: Type) -> None:
name = cls.__name__
if name in drawn_classes:
return
drawn_classes.add(name)
if log:
print(f"Drawing class {name}")
if cls.__module__ == '__main__':
url = '#'
else:
url = cls.__module__ + '.ipynb'
if is_abstract(cls):
formatted_class_name = f'<i>{cls.__name__}</i>'
else:
formatted_class_name = cls.__name__
if include_methods or include_class_vars:
vars = class_vars_string(cls, url)
methods = class_methods_string(cls, url)
spec = '<{<b><font color="' + CLASS_COLOR + '">' + \
formatted_class_name + '</font></b>'
if include_class_vars and vars:
spec += '|' + vars
if include_methods and methods:
spec += '|' + methods
spec += '}>'
else:
spec = '<' + formatted_class_name + '>'
class_doc = escape('class ' + cls.__name__)
if docstring(cls):
            class_doc += ':&#x0a;' + escape_doc(docstring(cls))
else:
warnings.warn(f"Class {cls.__name__} has no docstring")
dot.node(name, spec, tooltip=class_doc, href=url)
def display_class_trees(trees: List[Tuple[Type, List]]) -> None:
for tree in trees:
(cls, subtrees) = tree
display_class_node(cls)
for subtree in subtrees:
(subcls, _) = subtree
if (cls.__name__, subcls.__name__) not in edges:
dot.edge(cls.__name__, subcls.__name__)
edges.add((cls.__name__, subcls.__name__))
display_class_trees(subtrees)
def display_legend() -> None:
fontsize = 8.0
label = f'<b><font color="{CLASS_COLOR}">Legend</font></b><br align="left"/>'
for item in [
method_string("public_method",
public=True, overloaded=False, fontsize=fontsize),
method_string("private_method",
public=False, overloaded=False, fontsize=fontsize),
method_string("overloaded_method",
public=False, overloaded=True, fontsize=fontsize)
]:
label += '• ' + item + '<br align="left"/>'
label += f'<font face="Helvetica" point-size="{str(fontsize + 1)}">' \
'Hover over names to see doc' \
'</font><br align="left"/>'
dot.node('Legend', label=f'<{label}>', shape='plain', fontsize=str(fontsize + 2))
for cls in classes:
tree = class_tree(cls)
overloaded_methods = overloaded_class_methods(cls)
display_class_trees(tree)
if include_legend:
display_legend()
return dot
display_class_hierarchy(D_Class, project='debuggingbook', log=True)
display_class_hierarchy(D_Class, project='fuzzingbook')
```
Here is a variant with abstract classes and logging:
```
display_class_hierarchy([A_Class, B_Class],
abstract_classes=[A_Class],
public_methods=[
A_Class.quux,
], log=True)
```
## Synopsis
The function `display_class_hierarchy()` shows the class hierarchy for the given class (or list of classes).
* The keyword parameter `public_methods`, if given, is a list of "public" methods to be used by clients (default: all methods with docstrings).
* The keyword parameter `abstract_classes`, if given, is a list of classes to be displayed as "abstract" (i.e. with a cursive class name).
```
display_class_hierarchy(D_Class, abstract_classes=[A_Class])
```
## Exercises
Enjoy!
| github_jupyter |
# Bayesian GAN
Bayesian GAN (Saatci and Wilson, 2017) is a Bayesian formulation of Generative Adversarial Networks (Goodfellow et al., 2014) where we learn the **distributions** of the generator parameters $\theta_g$ and the discriminator parameters $\theta_d$ instead of optimizing for point estimates. The benefits of the Bayesian approach include the flexibility to model **multimodality** in the parameter space, as well as the ability to **prevent mode collapse** that can occur in the maximum likelihood (non-Bayesian) case.
We learn Bayesian GAN via an approximate inference algorithm called **Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)**, a gradient-based MCMC method whose samples approximate the true posterior distributions of $\theta_g$ and $\theta_d$.
The Bayesian GAN training process starts by sampling noise $z$ from a fixed distribution (typically a standard $d$-dimensional normal). The noise is fed to the generator, whose parameters $\theta_g$ are sampled from the posterior distribution $p(\theta_g | D)$. The generated image given the parameters $\theta_g$, $G(z|\theta_g)$, as well as the real data, are presented to the discriminator, whose parameters are sampled from its posterior distribution $p(\theta_d|D)$. We update the posteriors using the gradients $\frac{\partial \log p(\theta_g|D) }{\partial \theta_g }$ and $\frac{\partial \log p(\theta_d|D) }{\partial \theta_d }$ with Stochastic Gradient Hamiltonian Monte Carlo (SGHMC). The next section explains the intuition behind SGHMC.

<img src="figs/graphics_bayesgan.pdf">
# Learning Posterior Distributions
There are many approaches to estimating the posterior distribution of model parameters, namely Markov Chain Monte Carlo (MCMC), Variational Inference (VI), Approximate Bayesian Computation (ABC), etc. Bayesian GAN uses SGHMC (Chen et al., 2014), a stochastic version of HMC (Neal, 2012), which is an MCMC method that (1) uses gradients to sample efficiently and (2) uses stochastic gradients computed on minibatches to handle large amounts of data.
Below we show the visualization of samples generated from HMC. Once the algorithm runs for a while, we can see that the high density region has higher concentration of points. HMC can also handle multimodality (the second visualization).
```
from IPython.display import HTML
HTML('<iframe width="1000" height="400" src="https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,banana" frameborder="0" allowfullscreen></iframe>')
```
Hamiltonian Monte Carlo allows us to learn arbitrary distributions, including multimodal distributions that other Bayesian approaches such as variational inference cannot model.
```
HTML('<iframe width="1000" height="400" src="https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,multimodal" frameborder="0" allowfullscreen></iframe>')
```
# Training
We show that Bayesian GAN can capture the data distribution by measuring its performance in the semi-supervised setting. We will perform the posterior update as outlined in Algorithm 1 of Saatci and Wilson (2017). This algorithm can be implemented quite simply by adding noise to a standard optimizer such as SGD with momentum and keeping track of the parameters we sample from the posterior.

### SGHMC by Optimizing a Noisy Loss
First, observe that the update rules are similar to momentum SGD except for the noise $\boldsymbol{n}$. In fact, without $\boldsymbol{n}$, this is equivalent to performing momentum SGD where the loss is $- \sum_{i=1}^{J_g} \sum_{k=1}^{J_d} \log \text{posterior}$. We will describe the case where $J_g = J_d = 1$ for simplicity.
We use the main loss $\mathcal{L} = - \log p(\theta | ..)$ and add a noise loss $\mathcal{L}_\text{noise} = \frac{1}{\eta} \theta \cdot \boldsymbol{n}$ where $\boldsymbol{n} \sim \mathcal{N}(0, 2 \alpha \eta I)$ so that optimizing the loss function $\mathcal{L} + \mathcal{L}_\text{noise}$ with momentum SGD is equivalent to performing the SGHMC update step.
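A minimal sketch of this trick (a hypothetical helper, not the notebook's `NoiseLoss` class): for each parameter we draw $\boldsymbol{n} \sim \mathcal{N}(0, 2\alpha\eta I)$ and add $\frac{1}{\eta}\,\theta \cdot \boldsymbol{n}$ to the loss, so that the extra gradient $\boldsymbol{n}/\eta$, once multiplied by the learning rate $\eta$ inside momentum SGD, injects exactly the noise $\boldsymbol{n}$.
```
import math
import torch

def sghmc_noise_loss(params, alpha, lr):
    # n ~ N(0, 2*alpha*lr*I); adding (theta * n).sum() / lr to the loss contributes
    # gradient n / lr, which the optimizer then scales by lr, injecting the noise n
    loss = 0.0
    for p in params:
        n = torch.randn_like(p) * math.sqrt(2 * alpha * lr)
        loss = loss + (p * n).sum() / lr
    return loss
```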
Below (Equations 3 and 4) are the posterior probabilities, where each error term corresponds to its negative log probability.

```
!pip install tensorboard_logger
from __future__ import print_function
import os, pickle
import numpy as np
import random, math
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from statsutil import AverageMeter, accuracy
from tensorboard_logger import configure, log_value
# Default Parameters
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='cifar10')
parser.add_argument('--imageSize', type=int, default=32)
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--nz', type=int, default=100, help='size of the latent z vector')
parser.add_argument('--niter', type=int, default=2, help='number of epochs to train for')
parser.add_argument('--lr', type=float, default=0.0002, help='learning rate, default=0.0002')
parser.add_argument('--cuda', type=int, default=1, help='enables cuda')
parser.add_argument('--ngpu', type=int, default=1, help='number of GPUs to use')
parser.add_argument('--outf', default='modelfiles/pytorch_demo3', help='folder to output images and model checkpoints')
parser.add_argument('--numz', type=int, default=1, help='The number of set of z to marginalize over.')
parser.add_argument('--num_mcmc', type=int, default=10, help='The number of MCMC chains to run in parallel')
parser.add_argument('--num_semi', type=int, default=4000, help='The number of semi-supervised samples')
parser.add_argument('--gnoise_alpha', type=float, default=0.0001, help='')
parser.add_argument('--dnoise_alpha', type=float, default=0.0001, help='')
parser.add_argument('--d_optim', type=str, default='adam', choices=['adam', 'sgd'], help='')
parser.add_argument('--g_optim', type=str, default='adam', choices=['adam', 'sgd'], help='')
parser.add_argument('--stats_interval', type=int, default=10, help='Calculate test accuracy every interval')
parser.add_argument('--tensorboard', type=int, default=1, help='')
parser.add_argument('--bayes', type=int, default=1, help='Do Bayesian GAN or normal GAN')
import sys; sys.argv=['']; del sys
opt = parser.parse_args()
try:
os.makedirs(opt.outf)
except OSError:
print("Error Making Directory", opt.outf)
pass
if opt.tensorboard: configure(opt.outf)
# First, we construct the data loader for full training set
# as well as the data loader of a partial training set for semi-supervised learning
# transformation operator
normalize = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
transform_opt = transforms.Compose([
transforms.ToTensor(),
normalize,
])
# get training set and test set
dataset = dset.CIFAR10(root=os.environ['CIFAR10_PATH'], download=True,
transform=transform_opt)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=opt.batchSize,
shuffle=True, num_workers=1)
from partial_dataset import PartialDataset
# partial dataset for semi-supervised training
dataset_partial = PartialDataset(dataset, opt.num_semi)
# test set for evaluation
dataset_test = dset.CIFAR10(root=os.environ['CIFAR10_PATH'],
train=False,
transform=transform_opt)
dataloader_test = torch.utils.data.DataLoader(dataset_test,
batch_size=opt.batchSize, shuffle=False, pin_memory=True, num_workers=1)
dataloader_semi = torch.utils.data.DataLoader(dataset_partial, batch_size=opt.batchSize,
shuffle=True, num_workers=1)
# Now we initialize the distributions of G and D
##### Generator ######
# opt.num_mcmc is the number of MCMC chains that we run in parallel
# opt.numz is the number of noise batches that we use. We also use different parameter samples for different batches
# we construct opt.numz * opt.num_mcmc initial generator parameters
# We will keep sampling parameters from the posterior starting from this set
# Keeping track of many MCMC chains can be done quite elegantly in Pytorch
from models.discriminators import _netD
from models.generators import _netG
from statsutil import weights_init
netGs = []
for _idxz in range(opt.numz):
for _idxm in range(opt.num_mcmc):
netG = _netG(opt.ngpu, nz=opt.nz)
netG.apply(weights_init)
netGs.append(netG)
##### Discriminator ######
# We will use 1 chain of MCMCs for the discriminator
# The number of classes for semi-supervised case is 11; that is,
# index 0 for fake data and 1-10 for the 10 classes of CIFAR.
num_classes = 11
netD = _netD(opt.ngpu, num_classes=num_classes)
# In order to calculate errG or errD_real, we need to sum the probabilities over all the classes (1 to K)
# ComplementCrossEntropyLoss is a loss function that performs this task
# We can specify a default except_index that corresponds to a fake label. In this case, we use index=0
from ComplementCrossEntropyLoss import ComplementCrossEntropyLoss
criterion = nn.CrossEntropyLoss()
# use the default index = 0 - equivalent to summing all other probabilities
criterion_comp = ComplementCrossEntropyLoss(except_index=0)
from models.distributions import Normal
from models.bayes import NoiseLoss, PriorLoss
# Finally, initialize the ``optimizers''
# Since we keep track of a set of parameters, we also need a set of
# ``optimizers''
if opt.d_optim == 'adam':
optimizerD = optim.Adam(netD.parameters(), lr=opt.lr, betas=(0.5, 0.999))
elif opt.d_optim == 'sgd':
optimizerD = torch.optim.SGD(netD.parameters(), lr=opt.lr,
momentum=0.9,
nesterov=True,
weight_decay=1e-4)
optimizerGs = []
for netG in netGs:
optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(0.5, 0.999))
optimizerGs.append(optimizerG)
# since the log posterior is the average per sample, we also scale down the prior and the noise
gprior_criterion = PriorLoss(prior_std=1., observed=1000.)
gnoise_criterion = NoiseLoss(params=netGs[0].parameters(), scale=math.sqrt(2*opt.gnoise_alpha/opt.lr), observed=1000.)
dprior_criterion = PriorLoss(prior_std=1., observed=50000.)
dnoise_criterion = NoiseLoss(params=netD.parameters(), scale=math.sqrt(2*opt.dnoise_alpha*opt.lr), observed=50000.)
# Fixed noise for data generation
fixed_noise = torch.FloatTensor(opt.batchSize, opt.nz, 1, 1).normal_(0, 1).cuda()
fixed_noise = Variable(fixed_noise)
# initialize input variables and use CUDA (optional)
input = torch.FloatTensor(opt.batchSize, 3, opt.imageSize, opt.imageSize)
noise = torch.FloatTensor(opt.batchSize, opt.nz, 1, 1)
label = torch.FloatTensor(opt.batchSize)
real_label = 1
fake_label = 0
if opt.cuda:
netD.cuda()
for netG in netGs:
netG.cuda()
criterion.cuda()
criterion_comp.cuda()
input, label = input.cuda(), label.cuda()
noise = noise.cuda()
# fully supervised
netD_fullsup = _netD(opt.ngpu, num_classes=num_classes)
netD_fullsup.apply(weights_init)
criterion_fullsup = nn.CrossEntropyLoss()
if opt.d_optim == 'adam':
optimizerD_fullsup = optim.Adam(netD_fullsup.parameters(), lr=opt.lr, betas=(0.5, 0.999))
else:
optimizerD_fullsup = optim.SGD(netD_fullsup.parameters(), lr=opt.lr,
momentum=0.9,
nesterov=True,
weight_decay=1e-4)
if opt.cuda:
netD_fullsup.cuda()
criterion_fullsup.cuda()
# We define a class to calculate the accuracy on test set
# to test the performance of semi-supervised training
def get_test_accuracy(model_d, iteration, label='semi'):
# don't forget to do model_d.eval() before doing evaluation
top1 = AverageMeter()
for i, (input, target) in enumerate(dataloader_test):
target = target.cuda()
input = input.cuda()
input_var = torch.autograd.Variable(input.cuda(), volatile=True)
target_var = torch.autograd.Variable(target, volatile=True)
output = model_d(input_var)
probs = output.data[:, 1:] # discard the zeroth index
prec1 = accuracy(probs, target, topk=(1,))[0]
top1.update(prec1[0], input.size(0))
if i % 50 == 0:
print("{} Test: [{}/{}]\t Prec@1 {top1.val:.3f} ({top1.avg:.3f})"\
.format(label, i, len(dataloader_test), top1=top1))
print('{label} Test Prec@1 {top1.avg:.2f}'.format(label=label, top1=top1))
log_value('test_acc_{}'.format(label), top1.avg, iteration)
iteration = 0
for epoch in range(opt.niter):
top1 = AverageMeter()
top1_weakD = AverageMeter()
for i, data in enumerate(dataloader):
iteration += 1
#######
# 1. real input
netD.zero_grad()
_input, _ = data
batch_size = _input.size(0)
if opt.cuda:
_input = _input.cuda()
input.resize_as_(_input).copy_(_input)
label.resize_(batch_size).fill_(real_label)
inputv = Variable(input)
labelv = Variable(label)
output = netD(inputv)
errD_real = criterion_comp(output)
errD_real.backward()
# calculate D_x, the probability that real data are classified
D_x = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()
#######
# 2. Generated input
fakes = []
for _idxz in range(opt.numz):
noise.resize_(batch_size, opt.nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
for _idxm in range(opt.num_mcmc):
idx = _idxz*opt.num_mcmc + _idxm
netG = netGs[idx]
_fake = netG(noisev)
fakes.append(_fake)
fake = torch.cat(fakes)
output = netD(fake.detach())
labelv = Variable(torch.LongTensor(fake.data.shape[0]).cuda().fill_(fake_label))
errD_fake = criterion(output, labelv)
errD_fake.backward()
D_G_z1 = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()
#######
# 3. Labeled Data Part (for semi-supervised learning)
for ii, (input_sup, target_sup) in enumerate(dataloader_semi):
input_sup, target_sup = input_sup.cuda(), target_sup.cuda()
break
input_sup_v = Variable(input_sup.cuda())
# convert target indicies from 0 to 9 to 1 to 10
target_sup_v = Variable( (target_sup + 1).cuda())
output_sup = netD(input_sup_v)
err_sup = criterion(output_sup, target_sup_v)
err_sup.backward()
prec1 = accuracy(output_sup.data, target_sup + 1, topk=(1,))[0]
top1.update(prec1[0], input_sup.size(0))
if opt.bayes:
errD_prior = dprior_criterion(netD.parameters())
errD_prior.backward()
errD_noise = dnoise_criterion(netD.parameters())
errD_noise.backward()
errD = errD_real + errD_fake + err_sup + errD_prior + errD_noise
else:
errD = errD_real + errD_fake + err_sup
optimizerD.step()
# 4. Generator
for netG in netGs:
netG.zero_grad()
labelv = Variable(torch.FloatTensor(fake.data.shape[0]).cuda().fill_(real_label))
output = netD(fake)
errG = criterion_comp(output)
if opt.bayes:
for netG in netGs:
errG += gprior_criterion(netG.parameters())
errG += gnoise_criterion(netG.parameters())
errG.backward()
D_G_z2 = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()
for optimizerG in optimizerGs:
optimizerG.step()
# 5. Fully supervised training (running in parallel for comparison)
netD_fullsup.zero_grad()
input_fullsup = Variable(input_sup)
target_fullsup = Variable((target_sup + 1))
output_fullsup = netD_fullsup(input_fullsup)
err_fullsup = criterion_fullsup(output_fullsup, target_fullsup)
optimizerD_fullsup.zero_grad()
err_fullsup.backward()
optimizerD_fullsup.step()
# 6. get test accuracy after every interval
if iteration % opt.stats_interval == 0:
# get test accuracy on train and test
netD.eval()
get_test_accuracy(netD, iteration, label='semi')
get_test_accuracy(netD_fullsup, iteration, label='sup')
netD.train()
# 7. Report for this iteration
cur_val, ave_val = top1.val, top1.avg
log_value('train_acc', top1.avg, iteration)
print('[%d/%d][%d/%d] Loss_D: %.2f Loss_G: %.2f D(x): %.2f D(G(z)): %.2f / %.2f | Acc %.1f / %.1f'
% (epoch, opt.niter, i, len(dataloader),
errD.data[0], errG.data[0], D_x, D_G_z1, D_G_z2, cur_val, ave_val))
# after each epoch, save images
vutils.save_image(_input,
'%s/real_samples.png' % opt.outf,
normalize=True)
for _zid in range(opt.numz):
for _mid in range(opt.num_mcmc):
idx = _zid*opt.num_mcmc + _mid
netG = netGs[idx]
fake = netG(fixed_noise)
vutils.save_image(fake.data,
'%s/fake_samples_epoch_%03d_G_z%02d_m%02d.png' % (opt.outf, epoch, _zid, _mid),
normalize=True)
for ii, netG in enumerate(netGs):
torch.save(netG.state_dict(), '%s/netG%d_epoch_%d.pth' % (opt.outf, ii, epoch))
torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (opt.outf, epoch))
torch.save(netD_fullsup.state_dict(), '%s/netD_fullsup_epoch_%d.pth' % (opt.outf, epoch))
from tensorflow.python.summary import event_accumulator
import pandas as pd
from plotnine import *
ea = event_accumulator.EventAccumulator(opt.outf)
ea.Reload()
_df1 = pd.DataFrame(ea.Scalars('test_acc_semi'))
_df2 = pd.DataFrame(ea.Scalars('test_acc_sup'))
df = pd.DataFrame()
df['Iteration'] = pd.concat([_df1['step'], _df2['step']])
df['Accuracy'] = pd.concat([_df1['value'], _df2['value']])
df['Classification'] = ['BayesGAN']*len(_df1['step']) + ['Baseline']*len(_df2['step'])
```
The results show that the Bayesian semi-supervised discriminator, trained together with the Bayesian generator, outperforms the fully supervised baseline discriminator trained on the labeled subset alone.
```
%matplotlib inline
p = ggplot(df, aes(x='Iteration', y='Accuracy', color='Classification', label='Classification')) + geom_point(size=0.5)
print(p)
```
After training for 50 epochs, below are samples generated by four different parameter samples $\theta_g$. Note that different parameters tend to produce different artistic styles.




Note: This code is adapted from the implementation by Saatci and Wilson in TensorFlow (https://github.com/andrewgordonwilson/bayesgan) and the DCGAN code from the PyTorch examples (https://github.com/pytorch/examples/tree/master/dcgan).
| github_jupyter |
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to remove any HTML tags that appear. In addition we wish to tokenize our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # stem each word with the PorterStemmer created above
return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review = review_to_words(train_X[100])
print(review)
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:**
* It removes punctuation marks from the text
* It converts all text to lower case
* It also removes English stopwords before stemming
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
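For instance (using a tiny hypothetical vocabulary rather than the real `word_dict` we build below), a review would be encoded and padded like this:
```
# Tiny hypothetical vocabulary; 0 = 'no word' padding, 1 = infrequent word
toy_dict = {'movi': 2, 'great': 3, 'act': 4}
review = ['movi', 'great', 'obscureword', 'act']
pad = 8
encoded = [toy_dict.get(word, 1) for word in review][:pad]
encoded += [0] * (pad - len(encoded))
print(encoded)  # [2, 3, 1, 4, 0, 0, 0, 0]
```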
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
from collections import Counter
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = Counter() # A dict storing the words that appear in the reviews along with how often they occur
for i in range(len(data)):
for word in data[i]:
word_count[word] += 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words_count = word_count.most_common()
    sorted_words = [word for word, count in sorted_words_count]
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:**
The most frequently appearing words are:
['movi', 'film', 'one', 'like', 'time']
and it makes sense that these words appear frequently because they are all common movie-review terms (recall that the reviews have been stemmed, which is why they appear as 'movi' rather than 'movie').
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
print(list(word_dict)[0:5])
# print()
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(len(train_X[10]))
print(test_X_len[12])
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not?
**Answer:**
This could be a problem because many reviews are far shorter than 500 words, so most of each padded sequence is just zeros (which adds computational time and load), while reviews longer than 500 words are truncated and lose information.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
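For orientation, a classifier of this kind often looks roughly like the sketch below. This is an illustration only, not a copy of the provided `model.py` (run the cell above to see the real implementation); in particular, it assumes the first column of each input row holds the review length, as in the data format we saved earlier.
```python
import torch
import torch.nn as nn

class SketchLSTMClassifier(nn.Module):
    """Illustrative only: embedding -> LSTM -> linear -> sigmoid."""
    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(hidden_dim, 1)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        # x has shape (batch, 1 + 500): the review length followed by the padded review
        lengths = x[:, 0]
        reviews = x[:, 1:].t()                       # (seq_len, batch), the default nn.LSTM layout
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)              # (seq_len, batch, hidden_dim)
        out = self.dense(lstm_out)                   # (seq_len, batch, 1)
        out = out[lengths - 1, torch.arange(x.size(0))]  # output at each review's last real word
        return self.sig(out.squeeze())
```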
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try to train the model completely in the notebook as we do not have access to a GPU and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)
            # TODO: Complete this train method to train the model provided.
            optimizer.zero_grad()              # clear gradients left over from the previous batch
            pred = model(batch_X)              # forward pass
            loss = loss_fn(pred, batch_y)      # BCE loss against the true sentiment labels
            loss.backward()                    # backpropagate
            optimizer.step()                   # update the weights
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
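The general pattern looks something like the sketch below (the argument names here are assumptions for illustration; the provided `train/train.py` is the authoritative version). SageMaker turns each hyperparameter into a command-line flag and exposes useful paths, such as the model output directory and the training data channel, through environment variables.
```python
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters set on the estimator arrive as command-line arguments, e.g. --epochs 25
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--hidden_dim', type=int, default=100)
    parser.add_argument('--embedding_dim', type=int, default=32)
    parser.add_argument('--vocab_size', type=int, default=5000)

    # Locations provided by the SageMaker training container via environment variables
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))

    args = parser.parse_args()
    print('Training for {} epochs with hidden_dim={}'.format(args.epochs, args.hidden_dim))
```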
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
# image_name='sagemaker-pytorch-2020-06-29-01-11-40-917',
hyperparameters={
'epochs': 25,
'hidden_dim': 300,
})
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`).
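To make the idea concrete, a model-loading function for a model like ours might look roughly like the following sketch. The file names (`model_info.pth`, `model.pth`) and the idea of saving the hyperparameters alongside the weights are assumptions for illustration; the provided `train.py` contains the actual implementation.
```python
import os
import torch
from model import LSTMClassifier

def model_fn(model_dir):
    """Illustrative sketch: rebuild the network and load its trained weights from model_dir."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Assumed: the training job saved the constructor arguments to 'model_info.pth'
    model_info = torch.load(os.path.join(model_dir, 'model_info.pth'))
    model = LSTMClassifier(model_info['embedding_dim'],
                           model_info['hidden_dim'],
                           model_info['vocab_size'])

    # Assumed: the trained weights were saved to 'model.pth'
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f, map_location=device))

    return model.to(device).eval()
```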
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
# from sagemaker.pytorch import PyTorch
# my_training_job_name = 'sagemaker-pytorch-2020-06-29-22-56-57-261'
# estimator = PyTorch.attach(my_training_job_name)
# estimator.fit({'training': input_data})
# predictor = estimator.deploy(initial_instance_count=1,
# instance_type='ml.m4.xlarge')
```
## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:**
The XGBoost model performs better than the custom model: the custom LSTM model reached an accuracy of 0.7728, while the XGBoost model reached 0.87232.
I think the two models perform differently because they have very different architectures.
In this case I think the XGBoost model is better for sentiment analysis, but the LSTM model could likely be tuned to perform considerably better.
### (TODO) More testing
We now have a trained and deployed model to which we can send processed reviews and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall that in the first section of this notebook we did some data processing on the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
sentence, length = convert_and_pad(word_dict, review_to_words(test_review))
test_data = [np.append(np.array(length), np.array(sentence))]  # one row: review_length followed by review[500]
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be fairly confident that the review we submitted is positive.
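If you want an explicit label rather than a raw score, a small helper like the one below works (a sketch, assuming the endpoint returns a single value in the range [0, 1]):
```
# Map the raw endpoint output to a sentiment label (assumes a single score in [0, 1]).
def to_sentiment(score, threshold=0.5):
    return 'POSITIVE' if float(score) > threshold else 'NEGATIVE'

print(to_sentiment(predictor.predict(test_data)))
```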
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
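For a plain-text in, single-value out endpoint like ours, those two functions can be as simple as the sketch below (an illustration, not necessarily identical to the provided `serve/predict.py`):
```python
def input_fn(serialized_input_data, content_type='text/plain'):
    """Deserialize the request body; we only accept plain text."""
    if content_type == 'text/plain':
        return serialized_input_data.decode('utf-8')
    raise Exception('Requested unsupported ContentType: ' + content_type)

def output_fn(prediction_output, accept='text/plain'):
    """Serialize the prediction; a plain string is enough for our web app."""
    return str(prediction_output)
```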
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
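As a rough guide (not the official solution), a `predict_fn` for this setup needs to do three things: turn the raw review string into the fixed-length integer representation, prepend the review length, and run the model in evaluation mode. The sketch below assumes that `model_fn` has attached the saved `word_dict` to the model object; adapt it to however the provided code actually exposes the dictionary.
```python
import numpy as np
import torch
from utils import review_to_words, convert_and_pad  # provided in the serve directory

def predict_fn(input_data, model):
    """Illustrative sketch: input_data is the raw review string returned by input_fn."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Assumed: model_fn stored the word dictionary on the model object
    data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))

    # Build a single row of the form review_length, review[500] and hand it to the model
    data_pack = np.hstack((data_len, data_X)).reshape(1, -1)
    data = torch.from_numpy(data_pack).long().to(device)

    model.eval()
    with torch.no_grad():
        output = model(data)

    return output.cpu().numpy()
```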
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
from sagemaker.pytorch import PyTorch
my_training_job_name = 'sagemaker-pytorch-2020-07-01-07-19-11-965'
estimator = PyTorch.attach(my_training_job_name)
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and the first `250` negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending another review directly as a plain string.
```
test_review = "a very very bad review"
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are currently set up, that is not possible, since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which includes access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
# The SageMaker runtime is what allows us to invoke the endpoint that we've created.
runtime = boto3.Session().client('sagemaker-runtime')
# Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
ContentType = 'text/plain', # The data format that is expected
Body = event['body']) # The actual review
# The response is an HTTP response whose body contains the result of our inference
result = response['Body'].read().decode('utf-8')
return {
'statusCode' : 200,
'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
'body' : result
}
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new API a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and check the **Use Lambda Proxy integration** option. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
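Before wiring up the web page, you can optionally sanity-check the new public API straight from the notebook with a plain HTTP POST (the URL below is a placeholder; replace it with your Invoke URL):
```python
import requests

api_url = '**REPLACE WITH PUBLIC API URL**'
response = requests.post(api_url, data='This movie was absolutely wonderful!')
print(response.status_code, response.text)
```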
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static HTML file which can make use of the public API you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will load the page locally and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:**
Review: What the heck
Prediction: NEGATIVE (predicted correctly)
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
from fasterai.visualize import *
plt.style.use('dark_background')
# Adjust render_factor (int) if the image doesn't look quite right (max 64 on an 11GB GPU). The default here works for most photos.
# It is literally just a number that gets multiplied by 16 to give the square render resolution.
# Note that this doesn't affect the resolution of the final output - the output is the same resolution as the input.
# Example: render_factor=21 => color is rendered at 16x21 = 336x336 px.
render_factor=35
vis = get_image_colorizer(render_factor=render_factor, artistic=False)
#vis = get_video_colorizer(render_factor=render_factor).vis
vis.plot_transformed_image("test_images/poolparty.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1852GatekeepersWindsor.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/Chief.jpg", render_factor=10, compare=True)
vis.plot_transformed_image("test_images/1850SchoolForGirls.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/AtlanticCityBeach1905.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/CottonMillWorkers1913.jpg", compare=True)
vis.plot_transformed_image("test_images/BrooklynNavyYardHospital.jpg", compare=True)
vis.plot_transformed_image("test_images/FinnishPeasant1867.jpg", compare=True)
vis.plot_transformed_image("test_images/AtlanticCity1905.png", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/PushingCart.jpg", render_factor=24, compare=True)
vis.plot_transformed_image("test_images/Drive1905.jpg", compare=True)
vis.plot_transformed_image("test_images/IronLung.png", render_factor=26, compare=True)
vis.plot_transformed_image("test_images/FamilyWithDog.jpg", compare=True)
vis.plot_transformed_image("test_images/DayAtSeaBelgium.jpg", compare=True)
vis.plot_transformed_image("test_images/marilyn_woods.jpg", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/OldWomanSweden1904.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/WomenTapingPlanes.jpg", compare=True)
vis.plot_transformed_image("test_images/overmiller.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/BritishDispatchRider.jpg", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/MuseauNacionalDosCoches.jpg", render_factor=19, compare=True)
vis.plot_transformed_image("test_images/abe.jpg", render_factor=13, compare=True)
vis.plot_transformed_image("test_images/RossCorbettHouseCork.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/HPLabelleOfficeMontreal.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/einstein_beach.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/airmen1943.jpg", compare=True)
vis.plot_transformed_image("test_images/20sWoman.jpg", render_factor=24, compare=True)
vis.plot_transformed_image("test_images/egypt-1.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/Rutherford_Hayes.jpg", compare=True)
vis.plot_transformed_image("test_images/einstein_portrait.jpg", render_factor=15, compare=True)
vis.plot_transformed_image("test_images/pinkerton.jpg", render_factor=7, compare=True)
vis.plot_transformed_image("test_images/WaltWhitman.jpg", render_factor=9, compare=True)
vis.plot_transformed_image("test_images/dorothea-lange.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/Hemmingway2.jpg", render_factor=22, compare=True)
vis.plot_transformed_image("test_images/hemmingway.jpg", render_factor=14, compare=True)
vis.plot_transformed_image("test_images/smoking_kid.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/teddy_rubble.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/dustbowl_2.jpg", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/camera_man.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/migrant_mother.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/marktwain.jpg", render_factor=14, compare=True)
vis.plot_transformed_image("test_images/HelenKeller.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/Evelyn_Nesbit.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/Eddie-Adams.jpg", compare=True)
vis.plot_transformed_image("test_images/soldier_kids.jpg", compare=True)
vis.plot_transformed_image("test_images/AnselAdamsYosemite.jpg", compare=True)
vis.plot_transformed_image("test_images/unnamed.jpg", render_factor=28, compare=True)
vis.plot_transformed_image("test_images/workers_canyon.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/CottonMill.jpg", compare=True)
vis.plot_transformed_image("test_images/JudyGarland.jpeg", compare=True)
vis.plot_transformed_image("test_images/kids_pit.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/last_samurai.jpg", render_factor=22, compare=True)
vis.plot_transformed_image("test_images/AnselAdamsWhiteChurch.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/opium.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/dorothea_lange_2.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/rgs.jpg", compare=True)
vis.plot_transformed_image("test_images/wh-auden.jpg", compare=True)
vis.plot_transformed_image("test_images/w-b-yeats.jpg", compare=True)
vis.plot_transformed_image("test_images/marilyn_portrait.jpg", compare=True)
vis.plot_transformed_image("test_images/wilson-slaverevivalmeeting.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/ww1_trench.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/women-bikers.png", render_factor=23, compare=True)
vis.plot_transformed_image("test_images/Unidentified1855.jpg", render_factor=19, compare=True)
vis.plot_transformed_image("test_images/skycrapper_lunch.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/sioux.jpg", render_factor=28, compare=True)
vis.plot_transformed_image("test_images/school_kids.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/royal_family.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/redwood_lumberjacks.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/poverty.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/paperboy.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/NativeAmericans.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/helmut_newton-.jpg", compare=True)
vis.plot_transformed_image("test_images/Greece1911.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/FatMenClub.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/EgyptColosus.jpg", compare=True)
vis.plot_transformed_image("test_images/egypt-2.jpg", compare=True)
vis.plot_transformed_image("test_images/dustbowl_sd.jpg", compare=True)
vis.plot_transformed_image("test_images/dustbowl_people.jpg", render_factor=24, compare=True)
vis.plot_transformed_image("test_images/dustbowl_5.jpg", compare=True)
vis.plot_transformed_image("test_images/dustbowl_1.jpg", compare=True)
vis.plot_transformed_image("test_images/DriveThroughGiantTree.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/covered-wagons-traveling.jpg", compare=True)
vis.plot_transformed_image("test_images/civil-war_2.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/civil_war_4.jpg", compare=True)
vis.plot_transformed_image("test_images/civil_war_3.jpg", render_factor=28, compare=True)
vis.plot_transformed_image("test_images/civil_war.jpg", compare=True)
vis.plot_transformed_image("test_images/BritishSlum.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/bicycles.jpg", render_factor=27, compare=True)
vis.plot_transformed_image("test_images/brooklyn_girls_1940s.jpg", compare=True)
vis.plot_transformed_image("test_images/40sCouple.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/1946Wedding.jpg", compare=True)
vis.plot_transformed_image("test_images/Dolores1920s.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/TitanicGym.jpg", render_factor=26, compare=True)
vis.plot_transformed_image("test_images/FrenchVillage1950s.jpg", render_factor=41, compare=True)
vis.plot_transformed_image("test_images/FrenchVillage1950s.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/ClassDivide1930sBrittain.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1870sSphinx.jpg", compare=True)
vis.plot_transformed_image("test_images/1890Surfer.png", render_factor=37, compare=True)
vis.plot_transformed_image("test_images/TV1930s.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/1864UnionSoldier.jpg", compare=True)
vis.plot_transformed_image("test_images/1890sMedStudents.jpg", render_factor=18, compare=True)
vis.plot_transformed_image("test_images/BellyLaughWWI.jpg", compare=True)
vis.plot_transformed_image("test_images/PiggyBackRide.jpg", compare=True)
vis.plot_transformed_image("test_images/HealingTree.jpg", compare=True)
vis.plot_transformed_image("test_images/ManPile.jpg", compare=True)
vis.plot_transformed_image("test_images/1910Bike.jpg", compare=True)
vis.plot_transformed_image("test_images/FreeportIL.jpg", compare=True)
vis.plot_transformed_image("test_images/DutchBabyCoupleEllis.jpg", compare=True)
vis.plot_transformed_image("test_images/InuitWoman1903.png", compare=True)
vis.plot_transformed_image("test_images/1920sDancing.jpg", compare=True)
vis.plot_transformed_image("test_images/AirmanDad.jpg", render_factor=13, compare=True)
vis.plot_transformed_image("test_images/1910Racket.png", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/1880Paris.jpg", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/Deadwood1860s.jpg", render_factor=13, compare=True)
vis.plot_transformed_image("test_images/1860sSamauris.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/LondonUnderground1860.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/Mid1800sSisters.jpg", compare=True)
vis.plot_transformed_image("test_images/1860Girls.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/SanFran1851.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/Kabuki1870s.png", render_factor=8, compare=True)
vis.plot_transformed_image("test_images/Mormons1870s.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/EgyptianWomenLate1800s.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/PicadillyLate1800s.jpg", render_factor=26, compare=True)
vis.plot_transformed_image("test_images/SutroBaths1880s.jpg", compare=True)
vis.plot_transformed_image("test_images/1880sBrooklynBridge.jpg", compare=True)
vis.plot_transformed_image("test_images/ChinaOpiumc1880.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/Locomotive1880s.jpg", render_factor=9, compare=True)
vis.plot_transformed_image("test_images/ViennaBoys1880s.png", compare=True)
vis.plot_transformed_image("test_images/VictorianDragQueen1880s.png", compare=True)
vis.plot_transformed_image("test_images/Sami1880s.jpg", render_factor=44, compare=True)
vis.plot_transformed_image("test_images/ArkansasCowboys1880s.jpg", render_factor=22, compare=True)
vis.plot_transformed_image("test_images/Ballet1890Russia.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/Rottindean1890s.png", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/1890sPingPong.jpg", compare=True)
vis.plot_transformed_image("test_images/London1937.png", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/Harlem1932.jpg", render_factor=37, compare=True)
vis.plot_transformed_image("test_images/OregonTrail1870s.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/EasterNyc1911.jpg", render_factor=19, compare=True)
vis.plot_transformed_image("test_images/1899NycBlizzard.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/Edinburgh1920s.jpg", render_factor=17, compare=True)
vis.plot_transformed_image("test_images/1890sShoeShopOhio.jpg", render_factor=46, compare=True)
vis.plot_transformed_image("test_images/1890sTouristsEgypt.png", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1938Reading.jpg", render_factor=19, compare=True)
vis.plot_transformed_image("test_images/1850Geography.jpg", compare=True)
vis.plot_transformed_image("test_images/1901Electrophone.jpg", render_factor=10, compare=True)
for i in range(8, 47):
vis.plot_transformed_image("test_images/1901Electrophone.jpg", render_factor=i, compare=True)
vis.plot_transformed_image("test_images/Texas1938Woman.png", render_factor=38, compare=True)
vis.plot_transformed_image("test_images/MaioreWoman1895NZ.jpg", compare=True)
vis.plot_transformed_image("test_images/WestVirginiaHouse.jpg", compare=True)
vis.plot_transformed_image("test_images/1920sGuadalope.jpg", compare=True)
vis.plot_transformed_image("test_images/1909Chicago.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1920sFarmKid.jpg", compare=True)
vis.plot_transformed_image("test_images/ParisLate1800s.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1900sDaytonaBeach.png", render_factor=23, compare=True)
vis.plot_transformed_image("test_images/1930sGeorgia.jpg", compare=True)
vis.plot_transformed_image("test_images/NorwegianBride1920s.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/Depression.jpg", compare=True)
vis.plot_transformed_image("test_images/1888Slum.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/LivingRoom1920Sweden.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1896NewsBoyGirl.jpg", compare=True)
vis.plot_transformed_image("test_images/PetDucks1927.jpg", compare=True)
vis.plot_transformed_image("test_images/1899SodaFountain.jpg", render_factor=46, compare=True)
vis.plot_transformed_image("test_images/TimesSquare1955.jpg", compare=True)
vis.plot_transformed_image("test_images/PuppyGify.jpg", compare=True)
vis.plot_transformed_image("test_images/1890CliffHouseSF.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/1908FamilyPhoto.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1900sSaloon.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/1890BostonHospital.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1870Girl.jpg", compare=True)
vis.plot_transformed_image("test_images/AustriaHungaryWomen1890s.jpg", compare=True)
vis.plot_transformed_image("test_images/Shack.jpg",render_factor=42, compare=True)
vis.plot_transformed_image("test_images/Apsaroke1908.png", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/1948CarsGrandma.jpg", compare=True)
vis.plot_transformed_image("test_images/PlanesManhattan1931.jpg", compare=True)
vis.plot_transformed_image("test_images/WorriedKid1940sNyc.jpg", compare=True)
vis.plot_transformed_image("test_images/1920sFamilyPhoto.jpg", compare=True)
vis.plot_transformed_image("test_images/CatWash1931.jpg", compare=True)
vis.plot_transformed_image("test_images/1940sBeerRiver.jpg", compare=True)
vis.plot_transformed_image("test_images/VictorianLivingRoom.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1897BlindmansBluff.jpg", compare=True)
vis.plot_transformed_image("test_images/1874Mexico.png", compare=True)
vis.plot_transformed_image("test_images/MadisonSquare1900.jpg", render_factor=46, compare=True)
vis.plot_transformed_image("test_images/1867MusicianConstantinople.jpg", compare=True)
vis.plot_transformed_image("test_images/1925Girl.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/1907Cowboys.jpg", render_factor=28, compare=True)
vis.plot_transformed_image("test_images/WWIIPeeps.jpg", render_factor=37, compare=True)
vis.plot_transformed_image("test_images/BabyBigBoots.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1895BikeMaidens.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/IrishLate1800s.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/LibraryOfCongress1910.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/1875Olds.jpg", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/SenecaNative1908.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/WWIHospital.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1892WaterLillies.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/GreekImmigrants1905.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/FatMensShop.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/KidCage1930s.png", compare=True)
vis.plot_transformed_image("test_images/FarmWomen1895.jpg", compare=True)
vis.plot_transformed_image("test_images/NewZealand1860s.jpg", compare=True)
vis.plot_transformed_image("test_images/JerseyShore1905.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/LondonKidsEarly1900s.jpg", compare=True)
vis.plot_transformed_image("test_images/NYStreetClean1906.jpg", compare=True)
vis.plot_transformed_image("test_images/Boston1937.jpg", compare=True)
vis.plot_transformed_image("test_images/Cork1905.jpg", render_factor=28, compare=True)
vis.plot_transformed_image("test_images/BoxedBedEarly1900s.jpg", compare=True)
vis.plot_transformed_image("test_images/ZoologischerGarten1898.jpg", compare=True)
vis.plot_transformed_image("test_images/EmpireState1930.jpg", compare=True)
vis.plot_transformed_image("test_images/Agamemnon1919.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/AppalachianLoggers1901.jpg", compare=True)
vis.plot_transformed_image("test_images/WWISikhs.jpg", compare=True)
vis.plot_transformed_image("test_images/MementoMori1865.jpg", compare=True)
vis.plot_transformed_image("test_images/RepBrennanRadio1922.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/Late1800sNative.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/GasPrices1939.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/1933RockefellerCenter.jpg", compare=True)
vis.plot_transformed_image("test_images/Scotland1919.jpg", compare=True)
vis.plot_transformed_image("test_images/1920CobblersShopLondon.jpg", compare=True)
vis.plot_transformed_image("test_images/1909ParisFirstFemaleTaxisDriver.jpg", compare=True)
vis.plot_transformed_image("test_images/HoovervilleSeattle1932.jpg", compare=True)
vis.plot_transformed_image("test_images/ElephantLondon1934.png", compare=True)
vis.plot_transformed_image("test_images/Jane_Addams.jpg", compare=True)
vis.plot_transformed_image("test_images/AnselAdamsAdobe.jpg", compare=True)
vis.plot_transformed_image("test_images/CricketLondon1930.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/Donegal1907Yarn.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/AnselAdamsChurch.jpg", compare=True)
vis.plot_transformed_image("test_images/BreadDelivery1920sIreland.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/BritishTeaBombay1890s.png", compare=True)
vis.plot_transformed_image("test_images/CafeParis1928.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/BigManTavern1908NYC.jpg", compare=True)
vis.plot_transformed_image("test_images/Cars1890sIreland.jpg", compare=True)
vis.plot_transformed_image("test_images/GalwayIreland1902.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/HomeIreland1924.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/HydeParkLondon1920s.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/1929LondonOverFleetSt.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/AccordianKid1900Paris.jpg", compare=True)
vis.plot_transformed_image("test_images/AnselAdamsBuildings.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/AthleticClubParis1913.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/BombedLibraryLondon1940.jpg", compare=True)
vis.plot_transformed_image("test_images/Boston1937.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/BoulevardDuTemple1838.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/BumperCarsParis1930.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/CafeTerrace1925Paris.jpg", render_factor=24, compare=True)
vis.plot_transformed_image("test_images/CoalDeliveryParis1915.jpg", render_factor=37, compare=True)
vis.plot_transformed_image("test_images/CorkKids1910.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/DeepSeaDiver1915.png", render_factor=16, compare=True)
vis.plot_transformed_image("test_images/EastEndLondonStreetKids1901.jpg", compare=True)
vis.plot_transformed_image("test_images/FreightTrainTeens1934.jpg", compare=True)
vis.plot_transformed_image("test_images/HarrodsLondon1920.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/HerbSeller1899Paris.jpg", render_factor=17, compare=True)
vis.plot_transformed_image("test_images/CalcuttaPoliceman1920.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/ElectricScooter1915.jpeg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/GreatGrandparentsIrelandEarly1900s.jpg", compare=True)
vis.plot_transformed_image("test_images/HalloweenEarly1900s.jpg", render_factor=11, compare=True)
vis.plot_transformed_image("test_images/IceManLondon1919.jpg", compare=True)
vis.plot_transformed_image("test_images/LeBonMarcheParis1875.jpg", compare=True)
vis.plot_transformed_image("test_images/LittleAirplane1934.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/RoyalUniversityMedStudent1900Ireland.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/LewisTomalinLondon1895.png", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/SunHelmetsLondon1933.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/Killarney1910.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/LondonSheep1920s.png", compare=True)
vis.plot_transformed_image("test_images/PostOfficeVermont1914.png", compare=True)
vis.plot_transformed_image("test_images/ServantsBessboroughHouse1908Ireland.jpg", compare=True)
vis.plot_transformed_image("test_images/WaterfordIreland1909.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/Lisbon1919.jpg", compare=True)
vis.plot_transformed_image("test_images/London1918WartimeClothesManufacture.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/LondonHeatWave1935.png", compare=True)
vis.plot_transformed_image("test_images/LondonsSmallestShop1900.jpg", compare=True)
vis.plot_transformed_image("test_images/MetropolitanDistrictRailway1869London.jpg", compare=True)
vis.plot_transformed_image("test_images/NativeWoman1926.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/PaddysMarketCork1900s.jpg", compare=True)
vis.plot_transformed_image("test_images/Paris1920Cart.jpg", compare=True)
vis.plot_transformed_image("test_images/ParisLadies1910.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/ParisLadies1930s.jpg", compare=True)
vis.plot_transformed_image("test_images/Sphinx.jpeg", compare=True)
vis.plot_transformed_image("test_images/TheatreGroupBombay1875.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/WorldsFair1900Paris.jpg", compare=True)
vis.plot_transformed_image("test_images/London1850Coach.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/London1900EastEndBlacksmith.jpg", compare=True)
vis.plot_transformed_image("test_images/London1930sCheetah.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/LondonFireBrigadeMember1926.jpg", compare=True)
vis.plot_transformed_image("test_images/LondonGarbageTruck1910.jpg", compare=True)
vis.plot_transformed_image("test_images/LondonRailwayWork1931.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/LondonStreets1900.jpg", compare=True)
vis.plot_transformed_image("test_images/MuffinManlLondon1910.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/NativeCouple1912.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/NewspaperCivilWar1863.jpg", compare=True)
vis.plot_transformed_image("test_images/PaddingtonStationLondon1907.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/Paris1899StreetDig.jpg", compare=True)
vis.plot_transformed_image("test_images/Paris1926.jpg", compare=True)
vis.plot_transformed_image("test_images/ParisWomenFurs1920s.jpg", render_factor=21, compare=True)
vis.plot_transformed_image("test_images/PeddlerParis1899.jpg", compare=True)
vis.plot_transformed_image("test_images/SchoolKidsConnemaraIreland1901.jpg", compare=True)
vis.plot_transformed_image("test_images/SecondHandClothesLondonLate1800s.jpg", render_factor=33, compare=True)
vis.plot_transformed_image("test_images/SoapBoxRacerParis1920s.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/SoccerMotorcycles1923London.jpg", compare=True)
vis.plot_transformed_image("test_images/WalkingLibraryLondon1930.jpg", compare=True)
vis.plot_transformed_image("test_images/LondonStreetDoctor1877.png", render_factor=38, compare=True)
vis.plot_transformed_image("test_images/jacksonville.jpg", compare=True)
vis.plot_transformed_image("test_images/ZebraCarriageLondon1900.jpg", compare=True)
vis.plot_transformed_image("test_images/StreetGramaphonePlayerLondon1920s.png", compare=True)
vis.plot_transformed_image("test_images/YaleBranchBarnardsExpress.jpg", compare=True)
vis.plot_transformed_image("test_images/SynagogueInterior.PNG", compare=True)
vis.plot_transformed_image("test_images/ArmisticeDay1918.jpg", compare=True)
vis.plot_transformed_image("test_images/FlyingMachinesParis1909.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/GreatAunt1920.jpg", compare=True)
vis.plot_transformed_image("test_images/NewBrunswick1915.jpg", compare=True)
vis.plot_transformed_image("test_images/ShoeMakerLate1800s.jpg", compare=True)
vis.plot_transformed_image("test_images/SpottedBull1908.jpg", compare=True)
vis.plot_transformed_image("test_images/TouristsGermany1904.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/TunisianStudents1914.jpg", compare=True)
vis.plot_transformed_image("test_images/Yorktown1862.jpg", compare=True)
vis.plot_transformed_image("test_images/LondonFashion1911.png", compare=True)
vis.plot_transformed_image("test_images/1939GypsyKids.jpg", render_factor=37, compare=True)
vis.plot_transformed_image("test_images/1936OpiumShanghai.jpg", compare=True)
vis.plot_transformed_image("test_images/1923HollandTunnel.jpg", compare=True)
vis.plot_transformed_image("test_images/1939YakimaWAGirl.jpg", compare=True)
vis.plot_transformed_image("test_images/GoldenGateConstruction.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/PostCivilWarAncestors.jpg", compare=True)
vis.plot_transformed_image("test_images/1939SewingBike.png", compare=True)
vis.plot_transformed_image("test_images/1930MaineSchoolBus.jpg", compare=True)
vis.plot_transformed_image("test_images/1913NewYorkConstruction.jpg", compare=True)
vis.plot_transformed_image("test_images/1945HiroshimaChild.jpg", compare=True)
vis.plot_transformed_image("test_images/1941GeorgiaFarmhouse.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/1934UmbriaItaly.jpg", render_factor=21)
vis.plot_transformed_image("test_images/1900sLadiesTeaParty.jpg", compare=True)
vis.plot_transformed_image("test_images/1919WWIAviationOxygenMask.jpg", compare=True)
vis.plot_transformed_image("test_images/1900NJThanksgiving.jpg", compare=True)
vis.plot_transformed_image("test_images/1940Connecticut.jpg", render_factor=43, compare=True)
vis.plot_transformed_image("test_images/1940Connecticut.jpg", render_factor=i, compare=True)
vis.plot_transformed_image("test_images/1911ThanksgivingMaskers.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/1910ThanksgivingMaskersII.jpg", compare=True)
vis.plot_transformed_image("test_images/1936PetToad.jpg", compare=True)
vis.plot_transformed_image("test_images/1908RookeriesLondon.jpg", compare=True)
vis.plot_transformed_image("test_images/1890sChineseImmigrants.jpg", render_factor=25, compare=True)
vis.plot_transformed_image("test_images/1897VancouverAmberlamps.jpg", compare=True)
vis.plot_transformed_image("test_images/1929VictorianCosplayLondon.jpg", render_factor=35, compare=True)
vis.plot_transformed_image("test_images/1959ParisFriends.png", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1925GypsyCampMaryland.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/1941PoolTableGeorgia.jpg", render_factor=45, compare=True)
vis.plot_transformed_image("test_images/1900ParkDog.jpg", compare=True)
vis.plot_transformed_image("test_images/1886Hoop.jpg", compare=True)
vis.plot_transformed_image("test_images/1950sLondonPoliceChild.jpg", compare=True)
vis.plot_transformed_image("test_images/1886ProspectPark.jpg", compare=True)
vis.plot_transformed_image("test_images/1930sRooftopPoland.jpg", compare=True)
vis.plot_transformed_image("test_images/1919RevereBeach.jpg", compare=True)
vis.plot_transformed_image("test_images/1936ParisCafe.jpg", render_factor=46, compare=True)
vis.plot_transformed_image("test_images/1902FrenchYellowBellies.jpg", compare=True)
vis.plot_transformed_image("test_images/1940PAFamily.jpg", render_factor=42, compare=True)
vis.plot_transformed_image("test_images/1910Finland.jpg", render_factor=40, compare=True)
vis.plot_transformed_image("test_images/ZebraCarriageLondon1900.jpg", compare=True)
vis.plot_transformed_image("test_images/1904ChineseMan.jpg", compare=True)
vis.plot_transformed_image("test_images/CrystalPalaceLondon1854.PNG", compare=True)
vis.plot_transformed_image("test_images/James1.jpg", render_factor=15, compare=True)
vis.plot_transformed_image("test_images/James2.jpg", render_factor=20, compare=True)
vis.plot_transformed_image("test_images/James3.jpg", render_factor=19, compare=True)
vis.plot_transformed_image("test_images/James4.jpg", render_factor=30, compare=True)
vis.plot_transformed_image("test_images/James5.jpg", render_factor=32, compare=True)
vis.plot_transformed_image("test_images/James6.jpg", render_factor=28, compare=True)
```
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
import sklearn.metrics as mt
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
```
We read the data from a CSV file into a pandas dataframe. Each record has 3 values: the first two are the features and are assigned to the dataframe columns x1 and x2; the third is the target value, assigned to column t. A feature matrix X and a target vector t are then created.
```
# read the data into a pandas dataframe
data = pd.read_csv("../../data/ex2data1.txt", header= None,delimiter=',', names=['x1','x2','t'])
# compute the size of the dataset
n = len(data)
n0 = len(data[data.t==0])
# compute the dimensionality of the features
features = data.columns
nfeatures = len(features)-1
X = np.array(data[features[:-1]])
t = np.array(data['t'])
```
Visualize the dataset.
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title('Dataset', fontsize=12)
plt.show()
```
Define a classifier based on Gaussian Discriminant Analysis (here the linear variant, with a single covariance matrix shared by the two classes, via scikit-learn's `LinearDiscriminantAnalysis`) and train it on the dataset.
```
clf = LinearDiscriminantAnalysis(store_covariance=True)
clf.fit(X, t)
```
Define the 100x100 grid used to visualize the various distributions.
```
# x-coordinates of the grid points
u = np.linspace(min(X[:,0]), max(X[:,0]), 100)
# y-coordinates of the grid points
v = np.linspace(min(X[:,1]), max(X[:,1]), 100)
# build the grid: the point at position i,j in the grid has x-coordinate U(i,j) and y-coordinate V(i,j)
U, V = np.meshgrid(u, v)
```
Compute, on the grid points, the class-conditional densities $p(x|C_0), p(x|C_1)$ and the class posterior probabilities $p(C_0|x), p(C_1|x)$.
```
# posterior probabilities of the two classes on the grid
Z = clf.predict_proba(np.c_[U.ravel(), V.ravel()])
pp0 = Z[:, 0].reshape(U.shape)
pp1 = Z[:, 1].reshape(V.shape)
# ratio between the class posterior probabilities at every grid point
z=pp0/pp1
# class-conditional densities of the two classes on the grid
mu0 = clf.means_[0]
mu1 = clf.means_[1]
sigma = clf.covariance_
vf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))
vf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))
p0=vf0(U,V)
p1=vf1(U,V)
```
Visualization of the distribution $p(x|C_0)$
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# show the probability density of class C0 as a heatmap
imshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])
# plot the dataset points
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# plot the mean of the distribution
ax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)
# add titles, labels, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribution of $p(x|C_0)$', fontsize=12)
plt.show()
```
Visualization of the distribution $p(x|C_1)$
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# show the probability density of class C1 as a heatmap
imshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])
# plot the dataset points
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# plot the mean of the distribution
ax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)
# add titles, labels, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribution of $p(x|C_1)$', fontsize=12)
plt.show()
```
Visualization of $p(C_0|x)$
```
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_0|x)$", fontsize=12)
plt.show()
```
Visualization of $p(C_1|x)$
```
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_1|x)$", fontsize=12)
plt.show()
```
Apply 5-fold cross-validation to estimate the accuracy, averaging the 5 returned scores.
```
print("Accuracy: {0:5.3f}".format(cross_val_score(clf, X, t, cv=5, scoring='accuracy').mean()))
```
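As a complementary check (a sketch that reuses the `clf`, `X` and `t` defined above), the per-fold accuracies and a per-class precision/recall report can be obtained with `cross_val_predict`:
```
from sklearn.model_selection import cross_val_predict

# per-fold accuracies (not just the mean)
scores = cross_val_score(clf, X, t, cv=5, scoring='accuracy')
print("Per-fold accuracy:", np.round(scores, 3))

# out-of-fold predictions allow a per-class precision/recall report
t_pred = cross_val_predict(clf, X, t, cv=5)
print(mt.classification_report(t, t_pred))
```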
# ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
### 1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
```
# import basic libraries
import os
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sqlalchemy import create_engine
# import nltk and text processing (like regular expresion) libraries
import nltk
nltk.download(['punkt', 'wordnet','stopwords'])
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
import re
# import libraries for transformation
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# import machine learning libraries
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, fbeta_score, make_scorer
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.ensemble import RandomForestClassifier
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
# load data from database
db_file = "./DisasterResponse.db"
# create the connection to the DB
engine = create_engine('sqlite:///DisasterResponse.db')
# prepare a table name
table_name = os.path.basename(db_file).replace(".db","")
# load the info from the sql table into a pandas file
df = pd.read_sql_table(table_name,engine)
```
# Exploratory Data Analysis (EDA)
Let's do some Exploratory Data Analysis.
First of all, we'll get an overview of the dataset.
```
df.info()
df.describe()
```
We can see there are 40 columns and 26,216 entries, and the dataset has a memory usage of 8.0+ MB.
The dataset is pretty complete, with almost all values being non-null.
There are three string fields, and the rest are of type integer.
Looking at the fields, most of them take values in [0, 1], except the "id" field and the "related" field, whose values range over [0, 2].
As stated, all category values should be binary, so we are going to explore this field.
Now let's see how many records there are for each value of the "related" field.
We will reuse these queries in the final visualization part.
```
df.groupby("related").count()
df[df['related'] ==2].head()
df[df['related'] ==2].describe()
df[df['related'] ==0].describe()
df[df['related'] ==1].describe()
```
After exploring this field, there is not much additional information, only that there are few entries with related=2 compared to the other values.
For the entries with related=2 we have two ways of proceeding:
1. Impute them with another value, for instance value=1, which is the most frequent
2. Drop them
Here I will drop them.
```
df = df[df.related !=2]
```
And we'll check
```
df['related'].describe()
df.groupby("related").count()
```
Now we'll check the first 10 lines of the dataset
```
df.head(10)
```
## Pearson correlation between variables
Let's build a heatmap, to see the correlation of each variable
```
data = df.copy()
f,ax = plt.subplots(figsize=(15, 15))
sns.heatmap(data.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax)
plt.show()
```
From this Pearson map, we clearly see there is a strange column, "child_alone".
Let's explore it in more detail.
```
df.groupby("child_alone").count()
```
We see that this column only contains zero values, so we are going to drop it, as it doesn't add any useful information.
```
df = df.drop(["child_alone"],axis=1)
```
Let's see the Pearson again
```
data = df.copy()
f,ax = plt.subplots(figsize=(15, 15))
sns.heatmap(data.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax)
plt.show()
```
We now have a glimpse of the data!
Next, we will start with the machine learning tasks.
```
# We separate the features from the variables we are going to predict
X = df ['message']
y = df.drop(columns = ['id', 'message', 'original', 'genre'])
```
### 2. Tokenization function to process your text data
```
def tokenize(text):
# normalize case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
    # lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
```
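A quick sanity check of the tokenizer (the example message below is made up for illustration):
```
sample = "We need water, food and medical supplies in the flooded area!"
print(tokenize(sample))
# roughly: ['need', 'water', 'food', 'medical', 'supply', 'flooded', 'area']
```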
### 3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
```
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier (RandomForestClassifier()))
])
```
### 4. Train pipeline
- Split data into train and test sets
- Train pipeline
```
X_train, X_test, y_train, y_test = train_test_split(X, y)
# train classifier
pipeline.fit(X_train, y_train)
```
### 5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
```
# predict on test data
y_pred = pipeline.predict(X_test)
# we check dimensions
X_train.shape, y_train.shape, y.shape, X.shape
# and print metrics
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy, "\n")
category_names = list(y.columns)
for i in range(len(category_names)):
print("Output Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))
print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))
```
### 6. Improve your model
Use grid search to find better parameters.
```
base = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters = {'clf__estimator__n_estimators': [10, 20],
'clf__estimator__max_depth': [2, 5],
'clf__estimator__min_samples_split': [2, 3, 4],
'clf__estimator__criterion': ['entropy']
}
cv = GridSearchCV(base, param_grid=parameters, n_jobs=-1, cv=2, verbose=3)
cv.fit(X_train, y_train)
```
### 7. Test your model
Show the accuracy, precision, and recall of the tuned model.
Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
```
y_pred = cv.predict(X_test)
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy, "\n")
for i in range(len(category_names)):
print("Output Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))
print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))
```
### 8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
```
from sklearn.decomposition import TruncatedSVD
import sklearn
base = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('lsa', TruncatedSVD(random_state=42, n_components=100)),
# ('clf', MultiOutputClassifier(sklearn.svm.SVC(random_state=42, class_weight='balanced', gamma='scale')))
('clf', MultiOutputClassifier(sklearn.svm.SVC(random_state=42)))
])
# SVC parameters
parameters = {'clf__estimator__kernel': ['linear', 'rbf'],
'clf__estimator__C': [0.1, 1, 5]
}
cv = GridSearchCV(base, param_grid=parameters, n_jobs=-1, cv=2, scoring='f1_samples',verbose=3)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy, "\n")
category_names = list(y.columns)
for i in range(len(category_names)):
print("Output Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))
print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))
```
### 9. Export your model as a pickle file
```
def save_model(model, model_filepath):
"""
Save the model as a pickle file:
This procedure saves the model as a pickle file
Args: model, X set, y set
Returns:
nothing, it runs the model and it displays accuracy metrics
"""
try:
pickle.dump(model, open(model_filepath, 'wb'))
except:
print("Error saving the model as a {} pickle file".format(model_filepath))
save_model(cv,"classifier2.pkl")
pickle.dump(cv, open("classifier2.pkl", 'wb'))
```
### 10. Use this notebook to complete `train.py`
Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
# Refactoring
```
def load_data(db_file):
"""
Load data function
This method receives a database file on a path and it loads data
from that database file into a pandas datafile.
It also splits the data into X and y (X: features to work and y: labels to predict)
It returns two sets of data: X and y
Args:
db_file (str): Filepath where database is stored.
Returns:
X (DataFrame): Feature columns
y (DataFrame): Label columns
"""
# load data from database
# db_file = "./CleanDisasterResponse.db"
# create the connection to the DB
engine = create_engine('sqlite:///{}'.format(db_file))
table_name = os.path.basename(db_file).replace(".db","")
# load the info from the sql table into a pandas file
df = pd.read_sql_table(table_name,engine)
# We separate the features from the variables we are going to predict
X = df ['message']
y = df.drop(columns = ['id', 'message', 'original', 'genre'])
return X, y
def display_results(y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
A,b = load_data("./CleanDisasterResponse.db")
def tokenize(text):
"""
    Tokenize function
    Args:
        text (str): the message text to tokenize
    Returns:
        a list of lower-cased, lemmatized tokens with stop words removed
"""
# normalize case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
    # lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
def save_model(model, model_filepath):
"""
Save the model as a pickle file:
This procedure saves the model as a pickle file
Args: model, X set, y set
Returns:
nothing, it runs the model and it displays accuracy metrics
"""
pickle.dump(model, open(model_filepath, 'wb'))
```
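Putting the refactored helpers together, a minimal `train.py`-style entry point could look like the sketch below (the `build_model` helper and the command-line handling are illustrative assumptions, not part of the provided template):
```
import sys

def build_model():
    # same pipeline as in section 3; a GridSearchCV wrapper could be added here
    return Pipeline([
        ('vect', CountVectorizer(tokenizer=tokenize)),
        ('tfidf', TfidfTransformer()),
        ('clf', MultiOutputClassifier(RandomForestClassifier()))
    ])

def main(db_file, model_filepath):
    X, y = load_data(db_file)
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    model = build_model()
    model.fit(X_train, y_train)
    save_model(model, model_filepath)

if __name__ == '__main__':
    # e.g. python train.py ./DisasterResponse.db classifier.pkl
    main(sys.argv[1], sys.argv[2])
```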
And now let's try some queries to show interesting data on the html web
```
top_10_mes = df.iloc[:,5:].sum().sort_values(ascending=False)[0:10]
top_10_mes
top_10_mes_names = list(top_10_mes.index)
top_10_mes_names
mes_categories = df.columns[5:-1]
mes_categories
mes_categories_count = df[mes_categories].sum()
mes_categories_count
bottom_10_mes = df.iloc[:,5:].sum().sort_values()[0:10]
bottom_10_mes
bottom_10_mes_names = list(bottom_10_mes.index)
bottom_10_mes_names
distr_class_1 = df.drop(['id', 'message', 'original', 'genre'], axis = 1).sum()/len(df)
distr_class_1 = distr_class_1.sort_values(ascending = False)
distr_class_0 = 1 - distr_class_1
distr_class_names = list(distr_class_1.index)
list(distr_class_1.index)
distr_class_1 = df.drop(['id', 'message', 'original', 'genre'], axis = 1).sum()/len(df)
```
# AutoRec: Rating Prediction with Autoencoders
Although the matrix factorization model achieves decent performance on the rating prediction task, it is essentially a linear model. Thus, such models are not capable of capturing complex nonlinear and intricate relationships that may be predictive of users' preferences. In this section, we introduce a nonlinear neural network collaborative filtering model, AutoRec :cite:`Sedhain.Menon.Sanner.ea.2015`. It identifies collaborative filtering (CF) with an autoencoder architecture and aims to integrate nonlinear transformations into CF on the basis of explicit feedback. Neural networks have been proven to be capable of approximating any continuous function, making them suitable for addressing the limitations of matrix factorization and enriching its expressiveness.
On one hand, AutoRec has the same structure as an autoencoder which consists of an input layer, a hidden layer, and a reconstruction (output) layer. An autoencoder is a neural network that learns to copy its input to its output in order to code the inputs into the hidden (and usually low-dimensional) representations. In AutoRec, instead of explicitly embedding users/items into low-dimensional space, it uses the column/row of the interaction matrix as the input, then reconstructs the interaction matrix in the output layer.
On the other hand, AutoRec differs from a traditional autoencoder: rather than learning the hidden representations, AutoRec focuses on learning/reconstructing the output layer. It uses a partially observed interaction matrix as the input, aiming to reconstruct a completed rating matrix. In the meantime, the missing entries of the input are filled in the output layer via reconstruction for the purpose of recommendation.
There are two variants of AutoRec: user-based and item-based. For brevity, here we only introduce the item-based AutoRec. User-based AutoRec can be derived accordingly.
## Model
Let $\mathbf{R}_{*i}$ denote the $i^\mathrm{th}$ column of the rating matrix, where unknown ratings are set to zeros by default. The neural architecture is defined as:
$$
h(\mathbf{R}_{*i}) = f(\mathbf{W} \cdot g(\mathbf{V} \mathbf{R}_{*i} + \mu) + b)
$$
where $f(\cdot)$ and $g(\cdot)$ represent activation functions, $\mathbf{W}$ and $\mathbf{V}$ are weight matrices, $\mu$ and $b$ are biases. Let $h( \cdot )$ denote the whole network of AutoRec. The output $h(\mathbf{R}_{*i})$ is the reconstruction of the $i^\mathrm{th}$ column of the rating matrix.
The following objective function aims to minimize the reconstruction error:
$$
\underset{\mathbf{W},\mathbf{V},\mu, b}{\mathrm{argmin}} \sum_{i=1}^M{\parallel \mathbf{R}_{*i} - h(\mathbf{R}_{*i})\parallel_{\mathcal{O}}^2} +\lambda(\| \mathbf{W} \|_F^2 + \| \mathbf{V}\|_F^2)
$$
where $\| \cdot \|_{\mathcal{O}}$ means only the contribution of observed ratings are considered, that is, only weights that are associated with observed inputs are updated during back-propagation.
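Before looking at the implementation, it may help to see the masking idea on a toy example (a plain-NumPy sketch, not part of the original code): zeros mark unobserved ratings, so multiplying the error by $\mathrm{sign}(\mathbf{R}_{*i})$ removes their contribution, which is exactly the trick applied via `np.sign` in the model below.
```
import numpy as np

R_col = np.array([5.0, 0.0, 3.0, 0.0, 1.0])   # one item's ratings; 0 means unobserved
recon = np.array([4.2, 2.9, 3.4, 1.1, 0.7])   # a hypothetical reconstruction h(R_col)

mask = np.sign(R_col)                          # 1 for observed entries, 0 for missing ones
loss = np.sum(((R_col - recon) * mask) ** 2)   # squared error over observed ratings only
print(loss)
```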
```
import mxnet as mx
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
```
## Implementing the Model
A typical autoencoder consists of an encoder and a decoder. The encoder projects the input to hidden representations and the decoder maps the hidden layer to the reconstruction layer. We follow this practice and create the encoder and decoder with dense layers. The activation of encoder is set to `sigmoid` by default and no activation is applied for decoder. Dropout is included after the encoding transformation to reduce over-fitting. The gradients of unobserved inputs are masked out to ensure that only observed ratings contribute to the model learning process.
```
class AutoRec(nn.Block):
def __init__(self, num_hidden, num_users, dropout=0.05):
super(AutoRec, self).__init__()
self.encoder = nn.Dense(num_hidden, activation='sigmoid',
use_bias=True)
self.decoder = nn.Dense(num_users, use_bias=True)
self.dropout = nn.Dropout(dropout)
def forward(self, input):
hidden = self.dropout(self.encoder(input))
pred = self.decoder(hidden)
if autograd.is_training(): # Mask the gradient during training
return pred * np.sign(input)
else:
return pred
```
## Reimplementing the Evaluator
Since the input and output have been changed, we need to reimplement the evaluation function, while we still use RMSE as the accuracy measure.
```
def evaluator(network, inter_matrix, test_data, devices):
scores = []
for values in inter_matrix:
feat = gluon.utils.split_and_load(values, devices, even_split=False)
scores.extend([network(i).asnumpy() for i in feat])
recons = np.array([item for sublist in scores for item in sublist])
# Calculate the test RMSE
rmse = np.sqrt(np.sum(np.square(test_data - np.sign(test_data) * recons))
/ np.sum(np.sign(test_data)))
return float(rmse)
```
## Training and Evaluating the Model
Now, let us train and evaluate AutoRec on the MovieLens dataset. We can clearly see that the test RMSE is lower than the matrix factorization model, confirming the effectiveness of neural networks in the rating prediction task.
```
devices = d2l.try_all_gpus()
# Load the MovieLens 100K dataset
df, num_users, num_items = d2l.read_data_ml100k()
train_data, test_data = d2l.split_data_ml100k(df, num_users, num_items)
_, _, _, train_inter_mat = d2l.load_data_ml100k(train_data, num_users,
num_items)
_, _, _, test_inter_mat = d2l.load_data_ml100k(test_data, num_users,
num_items)
train_iter = gluon.data.DataLoader(train_inter_mat, shuffle=True,
last_batch="rollover", batch_size=256,
num_workers=d2l.get_dataloader_workers())
test_iter = gluon.data.DataLoader(np.array(train_inter_mat), shuffle=False,
last_batch="keep", batch_size=1024,
num_workers=d2l.get_dataloader_workers())
# Model initialization, training, and evaluation
net = AutoRec(500, num_users)
net.initialize(ctx=devices, force_reinit=True, init=mx.init.Normal(0.01))
lr, num_epochs, wd, optimizer = 0.002, 25, 1e-5, 'adam'
loss = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), optimizer,
{"learning_rate": lr, 'wd': wd})
d2l.train_recsys_rating(net, train_iter, test_iter, loss, trainer, num_epochs,
devices, evaluator, inter_mat=test_inter_mat)
```
## Summary
* We can frame the matrix factorization algorithm with autoencoders, while integrating non-linear layers and dropout regularization.
* Experiments on the MovieLens 100K dataset show that AutoRec achieves superior performance than matrix factorization.
## Exercises
* Vary the hidden dimension of AutoRec to see its impact on the model performance.
* Try to add more hidden layers. Is it helpful to improve the model performance?
* Can you find a better combination of decoder and encoder activation functions?
[Discussions](https://discuss.d2l.ai/t/401)
# <span style='color:darkred'> 4 Trajectory Analysis </span>
***
**<span style='color:darkred'> Important Note </span>**
Before proceeding to the rest of the analysis, it is a good time to define a path that points to the location of the MD simulation data, which we will analyze here.
If you successfully ran the MD simulation, the correct path should be:
```
path="OxCompBio-Datafiles/run"
```
If however, you need/want to use the data from the simulation that has been already performed, uncomment the command below to instead define the path that points to the prerun simulation.
```
#path="OxCompBio-Datafiles/prerun/run"
```
## <span style='color:darkred'> 4.1 Visualize the simulation </span>
The simplest and easiest type of analysis you should always do is to look at it with your eyes! Your eyes will tell you if something strange is happening immediately. A numerical analysis may not.
### <span style='color:darkred'> 4.1.1 VMD </span>
*Note: Again, this step is optional. If you don't have VMD, go to section 4.1.2 below to visualize the trajectory with NGLView instead.*
Let us look at the simulations on VMD.
Open VMD by typing in your terminal:
`% vmd`
When it has finished placing all the windows on the screen, click on `File` in the VMD main menu window and select `New Molecule`. The Molecule File Browser window should appear. Click on `Browse...`, then select the `OxCompBio-Datafiles` and then the `run` directory, and finally select `em.gro` (i.e. the file you made that contains the energy-minimized protein system). Click `OK` and then click `Load`. It should load up the starting coordinates into the main window. Then click `Browse...` in the Molecule File Browser window. Select again the `OxCompBio-Datafiles`, then the `run` directory and then `md.xtc`. Select `OK` and then hit `Load`. The trajectory should start loading into the main VMD window.
Although things will be moving, you can see that it is quite difficult to visualize the individual components. That is one of the problems with simulating such large and complicated systems. VMD makes it quite easy to look at individual components of a system. For example, let us consider the protein only. On the VMD Main menu, left-click on Graphics and select `Representations`. A new menu will appear (`Graphical Representations`). In the box entitled `Selected Atoms` type protein and hit enter. Only those atoms that form part of the protein are now selected. Various other selections and drawing methods will help to visualize different aspects of the simulation.
<span style='color:Blue'> **Questions** </span>
* How would you say the protein behaves?
* Is it doing anything unexpected? What would you consider unexpected behaviour?
### <span style='color:darkred'> 4.1.2 NGLView </span>
You have already tested NGLView at the Python tutorial (Notebook `12_ProteinAnalysis`) and at the beginning of this tutorial. This time however, you can visualize the trajectory you generated after carrying out the MD simulation.
You should also be familiar now with the MDAnalysis Python library that we will use to analyze the MD trajectory. We will also use it below, to create a Universe and load it on NGLView.
```
# Import MDAnalysis and NGLView
import MDAnalysis
import nglview
# Load the protein structure and the trajectory as a universe named protein
protein=MDAnalysis.Universe(f"{path}/em.gro", f"{path}/md_fit.xtc")
protein_view = nglview.show_mdanalysis(protein)
protein_view.gui_style = 'ngl'
#Color the protein based on its secondary structure
protein_view.update_cartoon(color='sstruc')
protein_view
```
<span style='color:Blue'> **Questions** </span>
* How would you say the protein behaves?
* Is it doing anything unexpected? What would you consider unexpected behaviour?
Now that we are sure the simulation is not doing anything ridiculous, we can start to ask questions about the simulation. The first thing to establish is whether the simulation has equilibrated to some state. So what are some measures of the system
being equilibrated? And what can we use to test the reliability of the simulation?
## <span style='color:darkred'> 4.2 System Equilibration </span>
### <span style='color:darkred'> 4.2.1 Temperature fluctuation </span>
The system temperature as a function of time was calculated in the previous section, with the built-in GROMACS tool `gmx energy`, but we still have not looked at it. It is now time to plot the temperature *vs* time and assess the results.
<span style='color:Blue'> **Questions** </span>
* Does the temperature fluctuate around an equilibrium value?
* Does this value correspond to the temperature that we predefined in the `md.mdp` input file?
Import numpy and pyplot from matplotlib, required to read and plot the data, respectively.
```
# We declare matplotlib inline to make sure it plots properly
%matplotlib inline
# We need to import numpy
import numpy as np
# We need pyplot from matplotlib to generate our plots
from matplotlib import pyplot
```
Now, using numpy, we can read the data from the `1hsg_temperature.xvg` file; the first column is the time (in ps) and the secong is the system temperature (in K).
```
# Read the file that contains the system temperature for each frame
time=np.loadtxt(f"{path}/1hsg_temperature.xvg", comments=['#','@'])[:, 0]
temperature=np.loadtxt(f"{path}/1hsg_temperature.xvg", comments=['#','@'])[:, 1]
```
You can use numpy again to compute the average temperature and its standard deviation.
```
# Calculate and print the mean temperature and the standard deviation
# Keep only two decimal points
mean_temperature=round(np.mean(temperature), 2)
std_temperature=round(np.std(temperature), 2)
print(f"The mean temperature is {mean_temperature} ± {std_temperature} K")
```
Finally, you can plot the temperature *vs* simulation time.
```
# Plot the temperature
pyplot.plot(time, temperature, color='darkred')
pyplot.title("Temperature over time")
pyplot.xlabel("Time [ps]")
pyplot.ylabel("Temperature [K]")
pyplot.show()
```
### <span style='color:darkred'> 4.2.2 Energy of the system </span>
Another useful set of properties to examine is the various energetic contributions to the total energy. The total energy should be constant, but the individual contributions can change, and this can sometimes indicate something interesting or strange happening in your simulation. Let us look at some energetic properties of the simulation.
We have already extracted the Lennard-Jones energy, the Coulomb energy and the potential energy, again using the GROMACS built-in tool `gmx energy`. The data for these three energetic components are saved in the same file, called `1hsg_energies.xvg`; the first column contains the time (in ps) and the columns that follow contain the energies (in kJ/mol), in the same order as they were generated.
We can now read the data from the `1hsg_energies.xvg` file using numpy.
```
# Read the file that contains the various energetic components for each frame
time=np.loadtxt(f"{path}/1hsg_energies.xvg", comments=['#','@'])[:, 0]
lennard_jones=np.loadtxt(f"{path}/1hsg_energies.xvg", comments=['#','@'])[:, 1]
coulomb=np.loadtxt(f"{path}/1hsg_energies.xvg", comments=['#','@'])[:, 2]
potential=np.loadtxt(f"{path}/1hsg_energies.xvg", comments=['#','@'])[:, 3]
```
And now that we read the data file, we can plot the energetic components *vs* simulation time in separate plots using matplotlib.
```
# Plot the Lennard-Jones energy
pyplot.plot(time, lennard_jones, color='blue')
pyplot.title("Lennard Jones energy over time")
pyplot.xlabel("Time [ps]")
pyplot.ylabel("LJ energy [kJ/mol]")
pyplot.show()
# Plot the electrostatic energy
pyplot.plot(time, coulomb, color='purple')
pyplot.title("Electrostatic energy over time")
pyplot.xlabel("Time [ps]")
pyplot.ylabel("Coulomb energy [kJ/mol]")
pyplot.show()
# Plot the potential energy
pyplot.plot(time, potential, color='green')
pyplot.title("Potential energy over time")
pyplot.xlabel("Time [ps]")
pyplot.ylabel("Potential energy [kJ/mol]")
pyplot.show()
```
<span style='color:Blue'> **Questions** </span>
* Can you plot the Coulomb energy and the potential energy, following the same steps as above?
* Is the total energy stable in this simulation?
* What is the dominant contribution to the potential energy?
## <span style='color:darkred'> 4.3 Analysis of Protein </span>
### <span style='color:darkred'> 4.3.1 Root mean square deviation (RMSD) of 1HSG </span>
The RMSD gives us an idea of how 'stable' our protein is when compared to our starting, static, structure. The lower the RMSD is, the more stable we can say our protein is.
The RMSD as a function of time, $\rho (t)$, can be defined by the following equation:
\begin{equation}
\\
\rho (t) = \sqrt{\frac{1}{N}\sum^N_{i=1}w_i\big(\mathbf{x}_i(t) - \mathbf{x}^{\text{ref}}_i\big)^2}
\end{equation}
Luckily MDAnalysis has its own built-in function to calculate this and we can import it.
```
# Import built-in MDAnalysis tools for alignment and RMSD.
from MDAnalysis.analysis import align
from MDAnalysis.analysis.rms import RMSD as rmsd
# Define the simulation universe and the reference structure (protein structure at first frame)
protein = MDAnalysis.Universe(f"{path}/md.gro", f"{path}/md_fit.xtc")
protein_ref = MDAnalysis.Universe(f"{path}/em.gro", f"{path}/md_fit.xtc")
protein_ref.trajectory[0]
# Call the MDAnalysis align function to align the MD simulation universe to the reference (first frame) universe
align_strucs = align.AlignTraj(protein, protein_ref, select="backbone", weights="mass", in_memory=True, verbose=True)
R = align_strucs.run()
rmsd_data = R.rmsd
# Plot the RMSD
pyplot.plot(rmsd_data)
pyplot.title("RMSD over time")
pyplot.xlabel("Frame number")
pyplot.ylabel("RMSD (Angstrom)")
pyplot.show()
```
<span style='color:Blue'> **Questions** </span>
* What does this tell you about the stability of the protein? Is it in a state of equilibrium and if so why and at what time?
* Can you think of a situation where this approach might not be a very good indication of stability?
### <span style='color:darkred'> 4.3.2 Root mean square fluctuation (RMSF) of 1HSG </span>
A similar property that is particularly useful is the root mean square fluctuation (RMSF), which shows how much each residue fluctuates about its average position.
The RMSF for an atom, $\rho_i$, is given by:
\begin{equation}
\rho_i = \sqrt{\frac{1}{T}\sum^T_{t=1} \big(\mathbf{x}_i(t) - \langle \mathbf{x}_i \rangle \big)^2 }
\end{equation}
```
from MDAnalysis.analysis.rms import RMSF as rmsf
# Define again the simulation universe, using however the renumbered .gro file that you had generated earlier
protein = MDAnalysis.Universe(f"{path}/em.gro", f"{path}/md_fit.xtc")
# Reset the trajectory to the first frame
protein.trajectory[0]
# We will need to select the alpha Carbons only
calphas = protein.select_atoms("name CA")
# Compute the RMSF of alpha carbons. Omit the first 20 frames,
# assuming that the system needs this amount of time (200 ps) to equilibrate
rmsf_calc = rmsf(calphas, verbose=True).run(start=20)
# Plot the RMSF
pyplot.plot(calphas.resindices+1, rmsf_calc.rmsf, color='darkorange' )
pyplot.title("Per-Residue Alpha Carbon RMSF")
pyplot.xlabel("Residue Number")
pyplot.ylabel("RMSF (Angstrom)")
pyplot.show()
```
<span style='color:Blue'> **Questions** </span>
* Can you identify structural regions alone from this plot and does that fit in with the structure?
* Residues 43-58 form part of the flexible flap that covers the binding site. How does this region behave in the simulation?
### <span style='color:darkred'> 4.3.3 Hydrogen Bond Formation </span>
We can also use the simulation to monitor the formation of any hydrogen bonds that may be of interest.
In the case of HIV-1 protease, the hydrogen bonds (HB) that are formed between the ARG8', the ASP29 and the ARG87 amino acids at the interface of the two subunits act in stabilising the dimer.
We can analyse the trajectory and monitor the stability of these interactions *vs* simulation time.
```
# Import the MDAnalysis built-in tool for HB Analysis
from MDAnalysis.analysis.hydrogenbonds.hbond_analysis import HydrogenBondAnalysis as HBA
# Define the protein universe
# Note that when using this tool, it is recommended to include the .tpr file instead of the .gro file,
# because it contains bond information, required for the identification of donors and acceptors.
protein = MDAnalysis.Universe(f"{path}/md.tpr", f"{path}/md.xtc")
# Define the atom selections for the HB calculation.
# In this case, the ARG hydrogens and the ASP oxygens, which act as the HB acceptors are specifically defined.
hbonds = HBA(universe=protein, hydrogens_sel='resname ARG and name HH21 HH22', acceptors_sel='resname ASP and name OD1 OD2')
# Perform the HB calculation
hbonds.run()
# Plot the total number of ASP-ARG HBs vs time
hbonds_time=hbonds.times
hbonds_data=hbonds.count_by_time()
pyplot.plot(hbonds_time, hbonds_data, color='darkorange')
pyplot.title("ASP-ARG Hydrogen Bonds")
pyplot.xlabel("Time [ps]")
pyplot.ylabel("# Hydrogen Bonds")
pyplot.show()
# Compute and print the average number of HBs and the standard deviation
aver_hbonds=round(np.mean(hbonds_data), 2)
std_hbonds=round(np.std(hbonds_data), 2)
print(f"The average number of ASP-ARG HBs is {aver_hbonds} ± {std_hbonds}")
```
<span style='color:Blue'> **Questions** </span>
* How much variation is there in the number of hydrogen bonds?
* Do any break and not reform?
* Using VMD, can you observe the HB formation and breakage throughout the simulation?
***
This concludes the analysis section, but the aim was only to give you an idea of the numerous information that we can gain when analysing an MD trajectory. Feel free to ask and attempt to answer your own questions, utilising the tools that you were introduced to during the tutorial.
## <span style='color:darkred'> 4.4 Further Reading </span>
The texts recommended here are the same as those mentioned in the lecture:
* "Molecular Modelling. Principles and Applications". Andrew Leach. Publisher: Prentice Hall. ISBN: 0582382106. This book has rapidly become the defacto introductory text for all aspects of simulation.
* "Computer simulation of liquids". Allen, Michael P., and Dominic J. Tildesley. Oxford university press, 2017.
* "Molecular Dynamics Simulation: Elementary Methods". J.M. Haile. Publisher: Wiley. ISBN: 047118439X. This text provides a more focus but slightly more old-fashioned view of simulation. It has some nice simple examples of how to code (in fortran) some of the algorithms though.
```
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
from freedom.utils.i3cols_dataloader import load_hits, load_strings
import dragoman as dm
%load_ext line_profiler
plt.rcParams['figure.figsize'] = [12., 8.]
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['axes.labelsize'] = 16
plt.rcParams['axes.titlesize'] = 16
plt.rcParams['legend.fontsize'] = 14
single_hits, repeated_params, labels = load_hits('/home/iwsatlas1/peller/work/oscNext/level3_v01.03/140000_i3cols') #,'/home/iwsatlas1/peller/work/oscNext/level3_v01.03/120000_i3cols'])
strings, params, labels = load_strings('/home/iwsatlas1/peller/work/oscNext/level3_v01.03/140000_i3cols')
strings = strings.reshape(-1, 86, 5)
string_charge = np.sum(strings[:, :, 3], axis =0)
plt.bar(np.arange(86),string_charge)
plt.savefig('../../plots/charge_per_string.png')
hits = dm.PointData()
hits['delta_time'] = - (repeated_params[:, 3] - single_hits[:, 3])
hits.histogram(delta_time=100).plot()
plt.gca().set_yscale('log')
plt.gca().set_xlabel(r'$t_{hit} - t_{vertex}$ (ns)')
plt.savefig('../../plots/delta_time_range.png')
params
mc = dm.PointData()
for i, label in enumerate(labels):
mc[label] = params[:, i]
mc
fig, ax = plt.subplots(2, 4, figsize=(25, 10))
plt.subplots_adjust(wspace=0.35)
def plot(x, y, ax):
np.log(mc.histogram(**{x:100, y:100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)', ax=ax)
plot('x', 'y', ax[0,0])
plot('x', 'z', ax[0,1])
plot('y', 'z', ax[0,2])
plot('time', 'z', ax[0,3])
plot('azimuth', 'zenith', ax[1,0])
plot('zenith', 'z', ax[1,1])
plot('azimuth', 'x', ax[1,2])
plot('cascade_energy', 'track_energy', ax[1,3])
#np.log(mc.histogram(x=100, y=100)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')
#np.log(mc.kde(x=100, y=100, density=False)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')
plt.savefig('../../plots/validity.png')
np.log(mc.histogram(**{'x':100, 'z':100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')
np.log(mc.histogram(cascade_energy=100, track_energy=100)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')
np.median(mc['x'])
np.median(mc['y'])
inside = ((mc['x'] - 15)**2 + ((mc['y'] + 35)*1.3)**2 < 250**2) & (mc['z'] < -120) & (mc['z'] > -600) & (mc['track_energy'] + mc['cascade_energy'] < 2000) & (mc['time'] > 8500) & (mc['time'] < 10500)
mc[inside]
fig, ax = plt.subplots(2, 4, figsize=(25, 10))
plt.subplots_adjust(wspace=0.35)
def plot(x, y, ax):
np.log(mc[inside].histogram(**{x:100, y:100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)', ax=ax)
plot('x', 'y', ax[0,0])
plot('x', 'z', ax[0,1])
plot('y', 'z', ax[0,2])
plot('time', 'z', ax[0,3])
plot('azimuth', 'zenith', ax[1,0])
plot('zenith', 'z', ax[1,1])
plot('azimuth', 'x', ax[1,2])
plot('cascade_energy', 'track_energy', ax[1,3])
```
# Automatic generation of Notebook using PyCropML
This notebook implements a crop model.
### Model Cumulttfrom
```
model_cumulttfrom <- function (calendarMoments_t1 = c('Sowing'),
calendarCumuls_t1 = c(0.0),
cumulTT = 8.0){
#'- Name: CumulTTFrom -Version: 1.0, -Time step: 1
#'- Description:
#' * Title: CumulTTFrom Model
#' * Author: Pierre Martre
#' * Reference: Modeling development phase in the
#' Wheat Simulation Model SiriusQuality.
#' See documentation at http://www1.clermont.inra.fr/siriusquality/?page_id=427
#' * Institution: INRA Montpellier
#' * Abstract: Calculate CumulTT
#'- inputs:
#' * name: calendarMoments_t1
#' ** description : List containing appearance of each stage at previous day
#' ** variablecategory : state
#' ** datatype : STRINGLIST
#' ** default : ['Sowing']
#' ** unit :
#' ** inputtype : variable
#' * name: calendarCumuls_t1
#' ** description : list containing for each stage occured its cumulated thermal times at previous day
#' ** variablecategory : state
#' ** datatype : DOUBLELIST
#' ** default : [0.0]
#' ** unit : °C d
#' ** inputtype : variable
#' * name: cumulTT
#' ** description : cumul TT at current date
#' ** datatype : DOUBLE
#' ** variablecategory : auxiliary
#' ** min : -200
#' ** max : 10000
#' ** default : 8.0
#' ** unit : °C d
#' ** inputtype : variable
#'- outputs:
#' * name: cumulTTFromZC_65
#' ** description : cumul TT from Anthesis to current date
#' ** variablecategory : auxiliary
#' ** datatype : DOUBLE
#' ** min : 0
#' ** max : 5000
#' ** unit : °C d
#' * name: cumulTTFromZC_39
#' ** description : cumul TT from FlagLeafLiguleJustVisible to current date
#' ** variablecategory : auxiliary
#' ** datatype : DOUBLE
#' ** min : 0
#' ** max : 5000
#' ** unit : °C d
#' * name: cumulTTFromZC_91
#' ** description : cumul TT from EndGrainFilling to current date
#' ** variablecategory : auxiliary
#' ** datatype : DOUBLE
#' ** min : 0
#' ** max : 5000
#' ** unit : °C d
cumulTTFromZC_65 <- 0.0
cumulTTFromZC_39 <- 0.0
cumulTTFromZC_91 <- 0.0
if ('Anthesis' %in% calendarMoments_t1)
{
cumulTTFromZC_65 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'Anthesis')]
}
if ('FlagLeafLiguleJustVisible' %in% calendarMoments_t1)
{
cumulTTFromZC_39 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'FlagLeafLiguleJustVisible')]
}
if ('EndGrainFilling' %in% calendarMoments_t1)
{
cumulTTFromZC_91 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'EndGrainFilling')]
}
return (list ("cumulTTFromZC_65" = cumulTTFromZC_65,"cumulTTFromZC_39" = cumulTTFromZC_39,"cumulTTFromZC_91" = cumulTTFromZC_91))
}
library(assertthat)
test_test_wheat1<-function(){
params= model_cumulttfrom(
calendarMoments_t1 = c("Sowing","Emergence","FloralInitiation","FlagLeafLiguleJustVisible","Heading","Anthesis"),
calendarCumuls_t1 = c(0.0,112.330110409888,354.582294511779,741.510096671757,853.999637026622,954.59002776961),
cumulTT = 972.970888983105
)
cumulTTFromZC_65_estimated = params$cumulTTFromZC_65
cumulTTFromZC_65_computed = 18.38
assert_that(all.equal(cumulTTFromZC_65_estimated, cumulTTFromZC_65_computed, scale=1, tol=0.2)==TRUE)
cumulTTFromZC_39_estimated = params$cumulTTFromZC_39
cumulTTFromZC_39_computed = 231.46
assert_that(all.equal(cumulTTFromZC_39_estimated, cumulTTFromZC_39_computed, scale=1, tol=0.2)==TRUE)
cumulTTFromZC_91_estimated = params$cumulTTFromZC_91
cumulTTFromZC_91_computed = 0
assert_that(all.equal(cumulTTFromZC_91_estimated, cumulTTFromZC_91_computed, scale=1, tol=0.2)==TRUE)
}
test_test_wheat1()
```
# Store Item Demand Forecasting Challenge
## Benchmark Models
<a href="https://www.kaggle.com/c/demand-forecasting-kernels-only">Link to competition on Kaggle.</a>
In this notebook, two simple benchmarking techniques are presented.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.max_columns = 99
plt.rcParams['figure.figsize'] = (16, 9)
```
## Load Data
```
df_train = pd.read_csv('data/train.csv', parse_dates=['date'], index_col=['date'])
df_test = pd.read_csv('data/test.csv', parse_dates=['date'], index_col=['date'])
df_train.shape, df_test.shape
df_train.head()
num_stores = len(df_train['store'].unique())
fig, axes = plt.subplots(num_stores, figsize=(8, 16))
for s in df_train['store'].unique():
t = df_train.loc[df_train['store'] == s, 'sales'].resample('W').sum()
ax = t.plot(ax=axes[s-1])
ax.grid()
ax.set_xlabel('')
ax.set_ylabel('sales')
fig.tight_layout();
```
All stores appear to show identical trends and seasonality; they just differ in scale.
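To make that claim concrete, one quick check (a sketch reusing `df_train` from above) is to divide each store's weekly sales by its mean; if the stores really differ only in scale, the normalized curves should collapse onto each other:
```
weekly = (df_train
          .groupby(['store', pd.Grouper(freq='W')])['sales']
          .sum()
          .unstack('store'))
normalized = weekly / weekly.mean()
normalized.plot(legend=False, alpha=0.5, title='Weekly sales, normalized per store');
```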
## Average Method
For our first and simplest model, we make our predictions using the average value from the historical data.
```
am_results = df_test.copy()
am_results['sales'] = 0
for s in am_results['store'].unique():
for i in am_results['item'].unique():
historical_average = df_train.loc[(df_train['store'] == s) & (df_train['item'] == i), 'sales'].mean()
am_results.loc[(am_results['store'] == s) & (am_results['item'] == i), 'sales'] = historical_average
am_results.reset_index(inplace=True)
am_results.drop(['date', 'store', 'item'], axis=1, inplace=True)
am_results.head()
am_results.to_csv('am_results.csv', index=False)
```
Scores 28.35111 on the leaderboard.
## Seasonal Naive Method
For this model, we predict the value from the same time the previous year.
```
snm_results = df_test.copy()
snm_results['sales'] = 0
import datetime
prev_dates = snm_results.loc[(snm_results['store'] == 1) & (snm_results['item'] == 1)].index - datetime.timedelta(days=365)
for s in snm_results['store'].unique():
for i in snm_results['item'].unique():
snm_results.loc[(snm_results['store'] == s) & (snm_results['item'] == i), 'sales'] = \
df_train.loc[((df_train['store'] == s) & (df_train['item'] == i)) & (df_train.index.isin(prev_dates)), 'sales'].values
snm_results.reset_index(inplace=True)
snm_results.drop(['date', 'store', 'item'], axis=1, inplace=True)
snm_results.head()
snm_results.to_csv('snm_results.csv', index=False)
```
Scores 24.43958 on the leaderboard.
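The leaderboard for this competition is scored with SMAPE (symmetric mean absolute percentage error). A small local helper (a sketch, not part of the original notebook) makes it possible to compare methods offline on a held-out slice of the training data before submitting:
```
def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    ratio = np.zeros_like(denom)
    mask = denom != 0                 # treat 0/0 terms as zero error, the usual convention
    ratio[mask] = np.abs(y_pred - y_true)[mask] / denom[mask]
    return 100 * ratio.mean()

# toy check: a perfect prediction scores 0, larger errors score higher
print(smape([10, 12, 0], [10, 12, 0]))
print(smape([10, 12, 14], [11, 12, 10]))
```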
```
import requests
import requests_cache
requests_cache.install_cache('calrecycle')
import pandas as pd
import time
URL = 'https://www2.calrecycle.ca.gov/LGCentral/DisposalReporting/Destination/CountywideSummary'
params = {'CountyID': 58, 'ReportFormat': 'XLS'}
resp = requests.post(URL, data=params)
resp
import io
def set_columns(df, columns=None, row_idx=None):
df = df.copy()
if row_idx:
columns = df.iloc[row_idx, :].tolist()
df.columns = columns
return df
(pd.read_excel(io.BytesIO(resp.content))
# .iloc[4,:].tolist()
.pipe(set_columns, row_idx=4)
.iloc[5:, :]
.dropna(axis=1, how='all')
.assign(is_data_row=lambda d: d['Destination Facility'].notnull())
.fillna(method='ffill')
.query('is_data_row')
)
def make_throttle_hook(timeout=1):
"""
Returns a response hook function which sleeps for `timeout` seconds if
response is not coming from the cache.
From https://requests-cache.readthedocs.io/en/latest/user_guide.html#usage
"""
def hook(response, *args, **kwargs):
if not getattr(response, 'from_cache', False):
print(f'{response} not found in cache. Timeout for {timeout:.3f} s.')
time.sleep(timeout)
return response
return hook
def get_session(rate_max=.5, timeout=None):
    # Only derive the timeout from rate_max when no explicit timeout is passed in
    if timeout is None:
        timeout = 1 / rate_max
    s = requests_cache.CachedSession()
    s.hooks = {'response': make_throttle_hook(timeout)}
    return s
def process(df):
return (df
.pipe(set_columns, row_idx=4)
.iloc[5:, :]
.dropna(axis=1, how='all')
.assign(is_data_row=lambda d: d['Destination Facility'].notnull())
.fillna(method='ffill')
.query('is_data_row')
.drop(columns=['is_data_row'])
)
def get_df(resp):
if resp.ok:
return pd.read_excel(io.BytesIO(resp.content))
return pd.DataFrame()
# so ducky...
def get_report(county_id, session=requests):
params = {'CountyID': int(county_id), 'ReportFormat': 'XLS'}
# if "no record found", the server should return 404 instead of a 200 response with an empty XLS
resp = session.post(URL, data=params)
try:
df = get_df(resp).pipe(process).assign(county_id=county_id)
except Exception as e:
print(e)
else:
return df
def get_reports():
dfs = []
# sesh = get_session(rate_max=2)
ids = range(1, 58)
for county_id in ids:
df = get_report(county_id)
if df is not None:
dfs.append(df)
else:
print(f'county_id {county_id} not processed')
# TODO else append to missed ids?
return pd.concat(dfs)
def process_whole(df):
# Destination Facility Diposal Ton Quarter Report Year Total ADC Transformation Ton county_id
names = {
'Destination Facility': 'destination_facility',
'Diposal Ton': 'disposal',
'Report Year': 'report_year',
'Quarter': 'report_quarter',
'Total ADC': 'total_adc',
'Transformation Ton': 'transformation',
}
return (df
.rename(columns=names)
.fillna(0)
.astype({'report_quarter': int})
)
REPORTS = get_reports()
REPORTS = REPORTS.pipe(process_whole)
REPORTS
REPORTS.to_csv('/data/datasets/catdd/clean/calrecycle-disposal-reporting.csv')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# Tutorial 2: Hidden Markov Model
**Week 3, Day 2: Hidden Dynamics**
**By Neuromatch Academy**
__Content creators:__ Yicheng Fei with help from Jesse Livezey and Xaq Pitkow
__Content reviewers:__ John Butler, Matt Krause, Meenakshi Khosla, Spiros Chavlis, Michael Waskom
__Production editor:__ Ella Batty
# Tutorial objectives
*Estimated timing of tutorial: 1 hour, 5 minutes*
The world around us is often changing, but we only have noisy sensory measurements. Similarly, neural systems switch between discrete states (e.g. sleep/wake) which are observable only indirectly, through their impact on neural activity. **Hidden Markov Models** (HMM) let us reason about these unobserved (also called hidden or latent) states using a time series of measurements.
Here we'll learn how changing the HMM's transition probability and measurement noise impacts the data. We'll look at how uncertainty increases as we predict the future, and how to gain information from the measurements.
We will use a binary latent variable $s_t \in \{0,1\}$ that switches randomly between the two states, and a 1D Gaussian emission model $m_t|s_t \sim \mathcal{N}(\mu_{s_t},\sigma^2_{s_t})$ that provides evidence about the current state.
By the end of this tutorial, you should be able to:
- Describe how the hidden states in a Hidden Markov model evolve over time, in words, mathematically, and in code
- Estimate hidden states from data using forward inference in a Hidden Markov model
- Describe how measurement noise and state transition probabilities affect uncertainty in predictions in the future and the ability to estimate hidden states.
<br>
**Summary of Exercises**
1. Generate data from an HMM.
2. Calculate how predictions propagate in a Markov Chain without evidence.
3. Combine new evidence and prediction from past evidence to estimate hidden states.
```
# @title Video 1: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Hh411r7JE", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="pIXxVl1A4l0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
# Setup
```
# Imports
import numpy as np
import time
from scipy import stats
from scipy.optimize import linear_sum_assignment
from collections import namedtuple
import matplotlib.pyplot as plt
from matplotlib import patches
#@title Figure Settings
# import ipywidgets as widgets # interactive display
from IPython.html import widgets
from ipywidgets import interactive, interact, HBox, Layout,VBox
from IPython.display import HTML
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle")
# @title Plotting Functions
def plot_hmm1(model, states, measurements, flag_m=True):
"""Plots HMM states and measurements for 1d states and measurements.
Args:
model (hmmlearn model): hmmlearn model used to get state means.
states (numpy array of floats): Samples of the states.
measurements (numpy array of floats): Samples of the states.
"""
T = states.shape[0]
nsteps = states.size
aspect_ratio = 2
fig, ax1 = plt.subplots(figsize=(8,4))
states_forplot = list(map(lambda s: model.means[s], states))
ax1.step(np.arange(nstep), states_forplot, "-", where="mid", alpha=1.0, c="green")
ax1.set_xlabel("Time")
ax1.set_ylabel("Latent State", c="green")
ax1.set_yticks([-1, 1])
ax1.set_yticklabels(["-1", "+1"])
ax1.set_xticks(np.arange(0,T,10))
ymin = min(measurements)
ymax = max(measurements)
ax2 = ax1.twinx()
ax2.set_ylabel("Measurements", c="crimson")
# show measurement gaussian
if flag_m:
ax2.plot([T,T],ax2.get_ylim(), color="maroon", alpha=0.6)
for i in range(model.n_components):
mu = model.means[i]
scale = np.sqrt(model.vars[i])
rv = stats.norm(mu, scale)
num_points = 50
domain = np.linspace(mu-3*scale, mu+3*scale, num_points)
left = np.repeat(float(T), num_points)
# left = np.repeat(0.0, num_points)
offset = rv.pdf(domain)
offset *= T / 15
lbl = "measurement" if i == 0 else ""
# ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, color="maroon", label=lbl)
ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color="maroon", label=lbl)
ax2.scatter(np.arange(nstep), measurements, c="crimson", s=4)
ax2.legend(loc="upper left")
ax1.set_ylim(ax2.get_ylim())
plt.show(fig)
def plot_marginal_seq(predictive_probs, switch_prob):
"""Plots the sequence of marginal predictive distributions.
Args:
predictive_probs (list of numpy vectors): sequence of predictive probability vectors
switch_prob (float): Probability of switching states.
"""
T = len(predictive_probs)
prob_neg = [p_vec[0] for p_vec in predictive_probs]
prob_pos = [p_vec[1] for p_vec in predictive_probs]
fig, ax = plt.subplots()
ax.plot(np.arange(T), prob_neg, color="blue")
ax.plot(np.arange(T), prob_pos, color="orange")
ax.legend([
"prob in state -1", "prob in state 1"
])
ax.text(T/2, 0.05, "switching probability={}".format(switch_prob), fontsize=12,
bbox=dict(boxstyle="round", facecolor="wheat", alpha=0.6))
ax.set_xlabel("Time")
ax.set_ylabel("Probability")
ax.set_title("Forgetting curve in a changing world")
#ax.set_aspect(aspect_ratio)
plt.show(fig)
def plot_evidence_vs_noevidence(posterior_matrix, predictive_probs):
"""Plots the average posterior probabilities with evidence v.s. no evidence
Args:
posterior_matrix: (2d numpy array of floats): The posterior probabilities in state 1 from evidence (samples, time)
predictive_probs (numpy array of floats): Predictive probabilities in state 1 without evidence
"""
nsample, T = posterior_matrix.shape
posterior_mean = posterior_matrix.mean(axis=0)
fig, ax = plt.subplots(1)
# ax.plot([0.0, T],[0.5, 0.5], color="red", linestyle="dashed")
ax.plot([0.0, T],[0., 0.], color="red", linestyle="dashed")
ax.plot(np.arange(T), predictive_probs, c="orange", linewidth=2, label="No evidence")
ax.scatter(np.tile(np.arange(T), (nsample, 1)), posterior_matrix, s=0.8, c="green", alpha=0.3, label="With evidence(Sample)")
ax.plot(np.arange(T), posterior_mean, c='green', linewidth=2, label="With evidence(Average)")
ax.legend()
ax.set_yticks([0.0, 0.25, 0.5, 0.75, 1.0])
ax.set_xlabel("Time")
ax.set_ylabel("Probability in State +1")
ax.set_title("Gain confidence with evidence")
plt.show(fig)
def plot_forward_inference(model, states, measurements, states_inferred,
predictive_probs, likelihoods, posterior_probs,
t=None,
flag_m=True, flag_d=True, flag_pre=True, flag_like=True, flag_post=True,
):
"""Plot ground truth state sequence with noisy measurements, and ground truth states v.s. inferred ones
Args:
model (instance of hmmlearn.GaussianHMM): an instance of HMM
states (numpy vector): vector of 0 or 1(int or Bool), the sequences of true latent states
measurements (numpy vector of numpy vector): the un-flattened Gaussian measurements at each time point, element has size (1,)
states_inferred (numpy vector): vector of 0 or 1(int or Bool), the sequences of inferred latent states
"""
T = states.shape[0]
if t is None:
t = T-1
nsteps = states.size
fig, ax1 = plt.subplots(figsize=(11,6))
# inferred states
#ax1.step(np.arange(nstep)[:t+1], states_forplot[:t+1], "-", where="mid", alpha=1.0, c="orange", label="inferred")
# true states
states_forplot = list(map(lambda s: model.means[s], states))
ax1.step(np.arange(nstep)[:t+1], states_forplot[:t+1], "-", where="mid", alpha=1.0, c="green", label="true")
ax1.step(np.arange(nstep)[t+1:], states_forplot[t+1:], "-", where="mid", alpha=0.3, c="green", label="")
# Posterior curve
delta = model.means[1] - model.means[0]
states_interpolation = model.means[0] + delta * posterior_probs[:,1]
if flag_post:
ax1.step(np.arange(nstep)[:t+1], states_interpolation[:t+1], "-", where="mid", c="grey", label="posterior")
ax1.set_xlabel("Time")
ax1.set_ylabel("Latent State", c="green")
ax1.set_yticks([-1, 1])
ax1.set_yticklabels(["-1", "+1"])
ax1.legend(bbox_to_anchor=(0,1.02,0.2,0.1), borderaxespad=0, ncol=2)
ax2 = ax1.twinx()
ax2.set_ylim(
min(-1.2, np.min(measurements)),
max(1.2, np.max(measurements))
)
if flag_d:
ax2.scatter(np.arange(nstep)[:t+1], measurements[:t+1], c="crimson", s=4, label="measurement")
ax2.set_ylabel("Measurements", c="crimson")
# show measurement distributions
if flag_m:
for i in range(model.n_components):
mu = model.means[i]
scale = np.sqrt(model.vars[i])
rv = stats.norm(mu, scale)
num_points = 50
domain = np.linspace(mu-3*scale, mu+3*scale, num_points)
left = np.repeat(float(T), num_points)
offset = rv.pdf(domain)
offset *= T /15
# lbl = "measurement" if i == 0 else ""
lbl = ""
# ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, color="maroon", label=lbl)
ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color="maroon", label=lbl)
ymin, ymax = ax2.get_ylim()
width = 0.1 * (ymax-ymin) / 2.0
centers = [-1.0, 1.0]
bar_scale = 15
# Predictions
data = predictive_probs
if flag_pre:
for i in range(model.n_components):
domain = np.array([centers[i]-1.5*width, centers[i]-0.5*width])
left = np.array([t,t])
offset = np.array([data[t,i]]*2)
offset *= bar_scale
lbl = "todays prior" if i == 0 else ""
ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color="dodgerblue", label=lbl)
# Likelihoods
# data = np.stack([likelihoods, 1.0-likelihoods],axis=-1)
data = likelihoods
data /= np.sum(data,axis=-1, keepdims=True)
if flag_like:
for i in range(model.n_components):
domain = np.array([centers[i]+0.5*width, centers[i]+1.5*width])
left = np.array([t,t])
offset = np.array([data[t,i]]*2)
offset *= bar_scale
lbl = "likelihood" if i == 0 else ""
ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color="crimson", label=lbl)
# Posteriors
data = posterior_probs
if flag_post:
for i in range(model.n_components):
domain = np.array([centers[i]-0.5*width, centers[i]+0.5*width])
left = np.array([t,t])
offset = np.array([data[t,i]]*2)
offset *= bar_scale
lbl = "posterior" if i == 0 else ""
ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color="grey", label=lbl)
if t<T-1:
ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)
if flag_pre or flag_like or flag_post:
ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)
ax2.legend(bbox_to_anchor=(0.4,1.02,0.6, 0.1), borderaxespad=0, ncol=4)
ax1.set_ylim(ax2.get_ylim())
return fig
# plt.show(fig)
```
---
# Section 1: Binary HMM with Gaussian measurements
In contrast to the last tutorial, the latent state in an HMM is not fixed, but may switch to a different state at each time step. The time dependence is simple: the probability of the state at time $t$ is wholly determined by the state at time $t-1$. This is called the **Markov property**, and the dependency of the whole state sequence $\{s_1,...,s_t\}$ can be described by a chain structure called a Markov Chain. You have seen a Markov chain in the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains) and in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html).
**Markov model for binary latent dynamics**
Let's reuse the binary switching process you saw in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html): our state can be either +1 or -1. The probability of switching to state $s_t=j$ from the previous state $s_{t-1}=i$ is the conditional probability distribution $p(s_t = j| s_{t-1} = i)$. We can summarize these as a $2\times 2$ matrix we will denote $D$ for Dynamics.
\begin{align*}
D = \begin{bmatrix}p(s_t = +1 | s_{t-1} = +1) & p(s_t = -1 | s_{t-1} = +1)\\p(s_t = +1 | s_{t-1} = -1)& p(s_t = -1 | s_{t-1} = -1)\end{bmatrix}
\end{align*}
$D_{ij}$ represents the transition probability to switch from state $i$ to state $j$ at the next time step. Please note that this is in contrast to the convention used in the intro and in Linear Systems (their transition matrices are the transpose of ours), but it matches the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains).
We can represent the probability of the _current_ state as a 2-dimensional vector $P_t = [p(s_t = +1), p(s_t = -1)]$. The entries are the probability that the current state is +1 and the probability that the current state is -1, so they must sum to 1.
We then update the probabilities over time following the Markov process:
\begin{align*}
P_{t}= P_{t-1}D \tag{1}
\end{align*}
If you know the state, the entries of $P_{t-1}$ would be either 1 or 0 as there is no uncertainty.
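As a quick illustration of Equation (1), here is a minimal NumPy sketch (the switching probability of 0.1 is just an example value, not part of the tutorial's exercises) that propagates a known starting state a few steps forward:
```
import numpy as np

# Example transition matrix D with an illustrative switching probability of 0.1
switch_prob = 0.1
D = np.array([[1 - switch_prob, switch_prob],
              [switch_prob, 1 - switch_prob]])

# Start with complete certainty in the first state: P_0 = [1, 0]
P = np.array([1.0, 0.0])
for t in range(1, 4):
    P = P @ D     # Equation (1): P_t = P_{t-1} D
    print(t, P)   # the probabilities drift toward [0.5, 0.5]
```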
**Measurements**
In a _Hidden_ Markov model, we cannot directly observe the latent states $s_t$. Instead we get noisy measurements $m_t\sim p(m|s_t)$.
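For the 1D Gaussian emission model used here, drawing a measurement given the current state could look like the following sketch (the means of $\pm 1$ and the noise level are illustrative assumptions matching the setup used later in this tutorial):
```
import numpy as np

means = np.array([-1.0, 1.0])   # assumed emission means for states -1 and +1
noise_level = 0.5               # assumed measurement standard deviation

s_t = 1                                          # suppose the hidden state is +1 (index 1)
m_t = np.random.normal(means[s_t], noise_level)  # noisy measurement m_t | s_t
print(m_t)
```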
```
# @title Video 2: Binary HMM with Gaussian measurements
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Sw41197Mj", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="z6KbKILMIPU", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Coding Exercise 1.1: Simulate a binary HMM with Gaussian measurements
In this exercise, you will implement a binary HMM with Gaussian measurements. Your HMM will start in State +1 and transition between states (both $-1 \rightarrow 1$ and $1 \rightarrow -1$) with probability `switch_prob`. Each state emits measurements drawn from a Gaussian with mean $+1$ for State +1 and mean $-1$ for State -1. The standard deviation of both states is given by `noise_level`.
The exercises in the next cell have three steps:
**STEP 1**. In `create_HMM`, complete the transition matrix `transmat_` (i.e., $D$) in the code.
\begin{equation*}
D =
\begin{pmatrix}
p_{\rm stay} & p_{\rm switch} \\
p_{\rm switch} & p_{\rm stay} \\
\end{pmatrix}
\end{equation*}
with $p_{\rm stay} = 1 - p_{\rm switch}$.
**STEP 2**. In `create_HMM`, specify gaussian measurements $m_t | s_t$, by specifying the means for each state, and the standard deviation.
**STEP 3**. In `sample`, use the transition matrix to specify the probabilities for the next state $s_t$ given the previous state $s_{t-1}$.
In this exercise, we will use a helper data structure named `GaussianHMM1D`, implemented in the following cell. This allows us to set the information we need about the HMM model (the starting probabilities of the states, the transition matrix, the means and variances of the Gaussian distributions, and the number of components) and easily access it. For example, we can set our model using:
```
model = GaussianHMM1D(
startprob = startprob_vec,
transmat = transmat_mat,
means = means_vec,
vars = vars_vec,
n_components = n_components
)
```
and then access the variances as:
```
model.vars
```
Also note that we refer to the states as `0` and `1` in the code, instead of as `-1` and `+1`.
```
GaussianHMM1D = namedtuple('GaussianHMM1D', ['startprob', 'transmat','means','vars','n_components'])
def create_HMM(switch_prob=0.1, noise_level=1e-1, startprob=[1.0, 0.0]):
"""Create an HMM with binary state variable and 1D Gaussian measurements
The probability to switch to the other state is `switch_prob`. Two
measurement models have mean 1.0 and -1.0 respectively. `noise_level`
specifies the standard deviation of the measurement models.
Args:
switch_prob (float): probability to jump to the other state
noise_level (float): standard deviation of measurement models. Same for
two components
Returns:
model (GaussianHMM instance): the described HMM
"""
############################################################################
# Insert your code here to:
# * Create the transition matrix, `transmat_mat` so that the odds of
# switching is `switch_prob`
# * Set the measurement model variances, to `noise_level ^ 2` for both
# states
raise NotImplementedError("`create_HMM` is incomplete")
############################################################################
n_components = 2
startprob_vec = np.asarray(startprob)
# STEP 1: Transition probabilities
transmat_mat = ... # np.array([[...], [...]])
# STEP 2: Measurement probabilities
# Mean measurements for each state
means_vec = ...
# Noise for each state
vars_vec = np.ones(2) * ...
# Initialize model
model = GaussianHMM1D(
startprob = startprob_vec,
transmat = transmat_mat,
means = means_vec,
vars = vars_vec,
n_components = n_components
)
return model
def sample(model, T):
"""Generate samples from the given HMM
Args:
model (GaussianHMM1D): the HMM with Gaussian measurement
T (int): number of time steps to sample
Returns:
M (numpy vector): the series of measurements
S (numpy vector): the series of latent states
"""
############################################################################
# Insert your code here to:
# * take row i from `model.transmat` to get the transition probabilities
# from state i to all states
raise NotImplementedError("`sample` is incomplete")
############################################################################
# Initialize S and M
S = np.zeros((T,),dtype=int)
M = np.zeros((T,))
# Calculate initial state
S[0] = np.random.choice([0,1],p=model.startprob)
# Latent state at time `t` depends on `t-1` and the corresponding transition probabilities to other states
for t in range(1,T):
# STEP 3: Get vector of probabilities for all possible `S[t]` given a particular `S[t-1]`
transition_vector = ...
# Calculate latent state at time `t`
S[t] = np.random.choice([0,1],p=transition_vector)
# Calculate measurements conditioned on the latent states
# Since measurements are independent of each other given the latent states, we could calculate them as a batch
means = model.means[S]
scales = np.sqrt(model.vars[S])
M = np.random.normal(loc=means, scale=scales, size=(T,))
return M, S
# Set random seed
np.random.seed(101)
# Set parameters of HMM
T = 100
switch_prob = 0.1
noise_level = 2.0
# Create HMM
model = create_HMM(switch_prob=switch_prob, noise_level=noise_level)
# Sample from HMM
M, S = sample(model,T)
assert M.shape==(T,)
assert S.shape==(T,)
# Print values
print(M[:5])
print(S[:5])
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_76573dcd.py)
You should see that the first five measurements are:
`[-3.09355908 1.58552915 -3.93502804 -1.98819072 -1.32506947]`
while the first five states are:
`[0 0 0 0 0]`
## Interactive Demo 1.2: Binary HMM
In the demo below, we simulate and plot a similar HMM. You can change the probability of switching states and the noise level (the standard deviation of the Gaussian distributions for measurements). You can click the empty box to also visualize the measurements.
**First**, think about and discuss these questions:
1. What will the states do if the switching probability is zero? One?
2. What will measurements look like with high noise? Low?
**Then**, play with the demo to see if you were correct or not.
```
#@title
#@markdown Execute this cell to enable the widget!
nstep = 100
@widgets.interact
def plot_samples_widget(
switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.02, value=0.1),
log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=-0.3),
flag_m=widgets.Checkbox(value=False, description='measurements', disabled=False, indent=False)
):
np.random.seed(101)
model = create_HMM(switch_prob=switch_prob,
noise_level=10.**log10_noise_level)
print(model)
observations, states = sample(model, nstep)
plot_hmm1(model, states, observations, flag_m=flag_m)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_507ce9e9.py)
```
# @title Video 3: Section 1 Exercises Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1dX4y1F7Fq", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="bDDRgAvQeFA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
**Applications**. Measurements could be:
* fish caught at different times as the school of fish moves from left to right
* membrane voltage when an ion channel changes between open and closed
* EEG frequency measurements as the brain moves between sleep states
What phenomena can you imagine modeling with these HMMs?
----
# Section 2: Predicting the future in an HMM
*Estimated timing to here from start of tutorial: 20 min*
```
# @title Video 4: Forgetting in a changing world
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1o64y1s7M7", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XOec560m61o", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
### Interactive Demo 2.1: Forgetting in a changing world
Even if we know the world state for sure, the world changes. We become less and less certain as time goes by since our last measurement. In this exercise, we'll see how a Hidden Markov Model gradually "forgets" the current state when predicting the future without measurements.
Assume we know that the initial state is -1, $s_0=-1$, so $p(s_0)=[1,0]$. We will plot $p(s_t)$ versus time.
1. Examine helper function `simulate_prediction_only` and understand how the predicted distribution changes over time.
2. Using our provided code, plot this distribution over time, and manipulate the process dynamics via the slider controlling the switching probability.
Do you forget more quickly with low or high switching probability? Why? How does the curve look when `switch_prob` $>0.5$? Why?
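For reference, with the symmetric transition matrix used here one can derive a closed form for this forgetting curve (a short derivation sketch, where $q$ denotes the switching probability and $p_t$ the probability of still being in the initial state at time $t$):
\begin{align*}
p_t &= (1-q)\,p_{t-1} + q\,(1-p_{t-1}) = q + (1-2q)\,p_{t-1}\\
\Rightarrow \quad p_t - \tfrac{1}{2} &= (1-2q)\left(p_{t-1} - \tfrac{1}{2}\right)
\quad\Rightarrow\quad
p_t = \tfrac{1}{2} + \left(p_0 - \tfrac{1}{2}\right)(1-2q)^t
\end{align*}
Starting from certainty ($p_0=1$), the prediction decays geometrically toward $1/2$, and it does so faster for larger $q$; for $q>0.5$ the factor $(1-2q)$ is negative, so the curve oscillates while it converges.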
```
# @markdown Execute this cell to enable helper function `simulate_prediction_only`
def simulate_prediction_only(model, nstep):
"""
Simulate the diffusion of HMM with no observations
Args:
model (GaussianHMM1D instance): the HMM instance
nstep (int): total number of time steps to simulate(include initial time)
Returns:
predictive_probs (list of numpy vector): the list of marginal probabilities
"""
entropy_list = []
predictive_probs = []
prob = model.startprob
for i in range(nstep):
# Log probabilities
predictive_probs.append(prob)
# One step forward
prob = prob @ model.transmat
return predictive_probs
# @markdown Execute this cell to enable the widget!
np.random.seed(101)
T = 100
noise_level = 0.5
@widgets.interact(switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.1))
def plot(switch_prob=switch_prob):
model = create_HMM(switch_prob=switch_prob, noise_level=noise_level)
predictive_probs = simulate_prediction_only(model, T)
plot_marginal_seq(predictive_probs, switch_prob)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_8357dee2.py)
```
# @title Video 5: Section 2 Exercise Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1DM4y1K7tK", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GRnlvxZ_ozk", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
# Section 3: Forward inference in an HMM
*Estimated timing to here from start of tutorial: 35 min*
```
# @title Video 6: Inference in an HMM
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV17f4y1571y", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="fErhvxE9SHs", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
### Coding Exercise 3.1: Forward inference of HMM
Forward inference is a recursive algorithm: assume we already have yesterday's posterior from time $t-1$, $p(s_{t-1}|m_{1:t-1})$. When the new data $m_{t}$ comes in, the algorithm performs the following steps:
* **Predict**: transform yesterday's posterior over $s_{t-1}$ into today's prior over $s_t$ using the transition matrix $D$:
$$\text{today's prior}=p(s_t|m_{1:t-1})= p(s_{t-1}|m_{1:t-1}) D$$
* **Update**: Incorporate the measurement $m_t$ to calculate the posterior $p(s_t|m_{1:t})$
$$\text{posterior} \propto \text{prior}\cdot \text{likelihood}=p(m_t|s_t)\,p(s_t|m_{1:t-1})$$
In this exercise, you will:
* STEP 1: Complete the code in function `markov_forward` to calculate the predictive marginal distribution at next time step
* STEP 2: Complete the code in function `one_step_update` to combine predictive probabilities and data likelihood into a new posterior
* Hint: We have provided a function to calculate the likelihood of $m_t$ under the two possible states: `compute_likelihood(model,M_t)`.
* STEP 3: Using code we provide, plot the posterior and compare with the true values
The complete forward inference is implemented in `simulate_forward_inference` which just calls `one_step_update` recursively.
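To make the predict-update recursion concrete before you implement it, here is a minimal stand-alone sketch of one forward step for a generic two-state model (the transition matrix, prior, and likelihood values are made-up illustrative numbers, not the exercise solution):
```
import numpy as np

# Illustrative two-state example (hypothetical numbers)
D = np.array([[0.9, 0.1],             # transition matrix: D[i, j] = p(s_t = j | s_{t-1} = i)
              [0.1, 0.9]])
posterior_tm1 = np.array([0.3, 0.7])  # yesterday's posterior p(s_{t-1} | m_{1:t-1})
likelihood_t = np.array([0.05, 0.60]) # p(m_t | s_t) for each state, from some emission model

# Predict: today's prior p(s_t | m_{1:t-1}) = posterior_{t-1} @ D
prior_t = posterior_tm1 @ D

# Update: posterior is proportional to prior * likelihood, then normalize
posterior_t = prior_t * likelihood_t
posterior_t /= posterior_t.sum()
print(prior_t, posterior_t)
```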
```
# @markdown Execute to enable helper functions `compute_likelihood` and `simulate_forward_inference`
def compute_likelihood(model, M):
"""
Calculate likelihood of seeing data `M` for all measurement models
Args:
model (GaussianHMM1D): HMM
M (float or numpy vector)
Returns:
L (numpy vector or matrix): the likelihood
"""
rv0 = stats.norm(model.means[0], np.sqrt(model.vars[0]))
rv1 = stats.norm(model.means[1], np.sqrt(model.vars[1]))
L = np.stack([rv0.pdf(M), rv1.pdf(M)],axis=0)
if L.size==2:
L = L.flatten()
return L
def simulate_forward_inference(model, T, data=None):
"""
Given HMM `model`, calculate posterior marginal predictions of x_t for T-1 time steps ahead based on
  evidence `data`. If `data` is not given, generate a sequence of measurements from the first component.
Args:
model (GaussianHMM instance): the HMM
T (int): length of returned array
Returns:
predictive_state1: predictive probabilities in first state w.r.t no evidence
posterior_state1: posterior probabilities in first state w.r.t evidence
"""
  # First re-calculate the predictive probabilities without evidence
# predictive_probs = simulate_prediction_only(model, T)
predictive_probs = np.zeros((T,2))
likelihoods = np.zeros((T,2))
posterior_probs = np.zeros((T, 2))
  # Generate a measurement trajectory conditioned on the latent state always being the first component
if data is not None:
M = data
else:
M = np.random.normal(model.means[0], np.sqrt(model.vars[0]), (T,))
# Calculate marginal for each latent state x_t
predictive_probs[0,:] = model.startprob
likelihoods[0,:] = compute_likelihood(model, M[[0]])
posterior = predictive_probs[0,:] * likelihoods[0,:]
posterior /= np.sum(posterior)
posterior_probs[0,:] = posterior
for t in range(1, T):
prediction, likelihood, posterior = one_step_update(model, posterior_probs[t-1], M[[t]])
# normalize and add to the list
posterior /= np.sum(posterior)
predictive_probs[t,:] = prediction
likelihoods[t,:] = likelihood
posterior_probs[t,:] = posterior
return predictive_probs, likelihoods, posterior_probs
help(compute_likelihood)
help(simulate_forward_inference)
def markov_forward(p0, D):
"""Calculate the forward predictive distribution in a discrete Markov chain
Args:
p0 (numpy vector): a discrete probability vector
D (numpy matrix): the transition matrix, D[i,j] means the prob. to
switch FROM i TO j
Returns:
p1 (numpy vector): the predictive probabilities in next time step
"""
##############################################################################
# Insert your code here to:
# 1. Calculate the predicted probabilities at next time step using the
# probabilities at current time and the transition matrix
raise NotImplementedError("`markov_forward` is incomplete")
##############################################################################
# Calculate predictive probabilities (prior)
p1 = ...
return p1
def one_step_update(model, posterior_tm1, M_t):
"""Given a HMM model, calculate the one-time-step updates to the posterior.
Args:
model (GaussianHMM1D instance): the HMM
posterior_tm1 (numpy vector): Posterior at `t-1`
M_t (numpy array): measurement at `t`
Returns:
posterior_t (numpy array): Posterior at `t`
"""
##############################################################################
# Insert your code here to:
# 1. Call function `markov_forward` to calculate the prior for next time
# step
# 2. Calculate likelihood of seeing current data `M_t` under both states
# as a vector.
# 3. Calculate the posterior which is proportional to
# likelihood x prediction elementwise,
# 4. Don't forget to normalize
raise NotImplementedError("`one_step_update` is incomplete")
##############################################################################
# Calculate predictive probabilities (prior)
prediction = markov_forward(...)
# Get the likelihood
likelihood = compute_likelihood(...)
# Calculate posterior
posterior_t = ...
# Normalize
posterior_t /= ...
return prediction, likelihood, posterior_t
# Set random seed
np.random.seed(12)
# Set parameters
switch_prob = 0.4
noise_level = .4
t = 75
# Create and sample from model
model = create_HMM(switch_prob = switch_prob,
noise_level = noise_level,
startprob=[0.5, 0.5])
measurements, states = sample(model, nstep)
# Infer state sequence
predictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,
measurements)
states_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)
# Visualize
plot_forward_inference(
model, states, measurements, states_inferred,
predictive_probs, likelihoods, posterior_probs,t=t, flag_m = 0
)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_69ce2879.py)
*Example output:*
<img alt='Solution hint' align='left' width=1532.0 height=825.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial2_Solution_69ce2879_0.png>
## Interactive Demo 3.2: Forward inference in binary HMM
Now visualize your inference algorithm. Play with the sliders and checkboxes to help you gain intuition.
* Use the sliders `switch_prob` and `log10_noise_level` to change the switching probability and measurement noise level.
* Use the slider `t` to view prediction (prior) probabilities, likelihood, and posteriors at different times.
When does the inference make a mistake? For example, set `switch_prob=0.1`, `log10_noise_level=-0.2`, and take a look at the probabilities at time `t=2`.
```
# @markdown Execute this cell to enable the demo
nstep = 100
@widgets.interact
def plot_forward_inference_widget(
switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.05),
log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=0.1),
t=widgets.IntSlider(min=0, max=nstep-1, step=1, value=nstep//2),
#flag_m=widgets.Checkbox(value=True, description='measurement distribution', disabled=False, indent=False),
flag_d=widgets.Checkbox(value=True, description='measurements', disabled=False, indent=False),
flag_pre=widgets.Checkbox(value=True, description='todays prior', disabled=False, indent=False),
flag_like=widgets.Checkbox(value=True, description='likelihood', disabled=False, indent=False),
flag_post=widgets.Checkbox(value=True, description='posterior', disabled=False, indent=False),
):
np.random.seed(102)
# global model, measurements, states, states_inferred, predictive_probs, likelihoods, posterior_probs
model = create_HMM(switch_prob=switch_prob,
noise_level=10.**log10_noise_level,
startprob=[0.5, 0.5])
measurements, states = sample(model, nstep)
# Infer state sequence
predictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,
measurements)
states_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)
fig = plot_forward_inference(
model, states, measurements, states_inferred,
predictive_probs, likelihoods, posterior_probs,t=t,
flag_m=0,
flag_d=flag_d,flag_pre=flag_pre,flag_like=flag_like,flag_post=flag_post
)
plt.show(fig)
# @title Video 7: Section 3 Exercise Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1EM4y1T7cB", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="CNrjxNedqV0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
---
# Summary
*Estimated timing of tutorial: 1 hour, 5 minutes*
In this tutorial, you
* Simulated the dynamics of the hidden state in a Hidden Markov model and visualized the measured data (Section 1)
* Explored how uncertainty in a future hidden state changes based on the probabilities of switching between states (Section 2)
* Estimated hidden states from the measurements using forward inference, connected this to Bayesian ideas, and explored the effects of noise and transition matrix probabilities on this process (Section 3)
| github_jupyter |
# A tutorial for the whitebox Python package
This notebook demonstrates the usage of the **whitebox** Python package for geospatial analysis, which is built on a stand-alone executable command-line program called [WhiteboxTools](https://github.com/jblindsay/whitebox-tools).
* Authors: Dr. John Lindsay (https://jblindsay.github.io/ghrg/index.html)
* Contributors: Dr. Qiusheng Wu (https://wetlands.io)
* GitHub repo: https://github.com/giswqs/whitebox-python
* WhiteboxTools: https://github.com/jblindsay/whitebox-tools
* User Manual: https://jblindsay.github.io/wbt_book
* PyPI: https://pypi.org/project/whitebox/
* Documentation: https://whitebox.readthedocs.io
* Binder: https://gishub.org/whitebox-cloud
* Free software: [MIT license](https://opensource.org/licenses/MIT)
This tutorial can be accessed in three ways:
* HTML version: https://gishub.org/whitebox-html
* Viewable Notebook: https://gishub.org/whitebox-notebook
* Interactive Notebook: https://gishub.org/whitebox-cloud
**Launch this tutorial as an interactive Jupyter Notebook on the cloud - [MyBinder.org](https://gishub.org/whitebox-cloud).**

## Table of Content
* [Installation](#Installation)
* [About whitebox](#About-whitebox)
* [Getting data](#Getting-data)
* [Using whitebox](#Using-whitebox)
* [Displaying results](#Displaying-results)
* [whitebox GUI](#whitebox-GUI)
* [Citing whitebox](#Citing-whitebox)
* [Credits](#Credits)
* [Contact](#Contact)
## Installation
**whitebox** supports a variety of platforms, including Microsoft Windows, macOS, and Linux operating systems. Note that you will need to have **Python 3.x** installed. Python 2.x is not supported. The **whitebox** Python package can be installed using the following command:
`pip install whitebox`
If you have installed **whitebox** Python package before and want to upgrade to the latest version, you can use the following command:
`pip install whitebox -U`
If you encounter any installation issues, please check [Troubleshooting](https://github.com/giswqs/whitebox#troubleshooting) on the **whitebox** GitHub page and [Report Bugs](https://github.com/giswqs/whitebox#reporting-bugs).
## About whitebox
**import whitebox and call WhiteboxTools()**
```
import whitebox
wbt = whitebox.WhiteboxTools()
```
**Prints the whitebox-tools help...a listing of available commands**
```
print(wbt.help())
```
**Prints the whitebox-tools license**
```
print(wbt.license())
```
**Prints the whitebox-tools version**
```
print("Version information: {}".format(wbt.version()))
```
**Print the help for a specific tool.**
```
print(wbt.tool_help("ElevPercentile"))
```
**Tool names in the whitebox Python package can be called either using the snake_case or CamelCase convention (e.g. lidar_info or LidarInfo). The example below uses snake_case.**
```
import os, pkg_resources
# identify the sample data directory of the package
data_dir = os.path.dirname(pkg_resources.resource_filename("whitebox", 'testdata/'))
# set whitebox working directory
wbt.set_working_dir(data_dir)
wbt.verbose = False
# call whiteboxtools
wbt.feature_preserving_smoothing("DEM.tif", "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
```
**You can search tools using keywords. For example, the script below searches and lists tools with 'lidar' or 'LAS' in tool name or description.**
```
lidar_tools = wbt.list_tools(['lidar', 'LAS'])
for index, tool in enumerate(lidar_tools):
print("{} {}: {} ...".format(str(index+1).zfill(3), tool, lidar_tools[tool][:45]))
```
**List all available tools in whitebox-tools**. Currently, **whitebox** contains 372 tools. More tools will be added as they become available.
```
all_tools = wbt.list_tools()
for index, tool in enumerate(all_tools):
print("{} {}: {} ...".format(str(index+1).zfill(3), tool, all_tools[tool][:45]))
```
## Getting data
This section demonstrates two ways to get data into Binder so that you can test **whitebox** on the cloud using your own data.
* [Getting data from direct URLs](#Getting-data-from-direct-URLs)
* [Getting data from Google Drive](#Getting-data-from-Google-Drive)
### Getting data from direct URLs
If you have data hosted on your own HTTP server or GitHub, you should be able to get direct URLs. With a direct URL, users can automatically download the data when the URL is clicked. For example https://github.com/giswqs/whitebox/raw/master/examples/testdata.zip
Import the following Python libraries and start getting data from direct URLs.
```
import os
import zipfile
import tarfile
import shutil
import urllib.request
```
Create a folder named *whitebox* under the user home folder and set it as the working directory.
```
work_dir = os.path.join(os.path.expanduser("~"), 'whitebox')
if not os.path.exists(work_dir):
os.mkdir(work_dir)
os.chdir(work_dir)
print("Working directory: {}".format(work_dir))
```
Replace the following URL with your own direct URL hosting your data.
```
url = "https://github.com/giswqs/whitebox/raw/master/examples/testdata.zip"
```
Download data the from the above URL and unzip the file if needed.
```
# download the file
zip_name = os.path.basename(url)
zip_path = os.path.join(work_dir, zip_name)
print('Downloading {} ...'.format(zip_name))
urllib.request.urlretrieve(url, zip_path)
print('Downloading done.')
# if it is a zip file
if '.zip' in zip_name:
print("Decompressing {} ...".format(zip_name))
with zipfile.ZipFile(zip_name, "r") as zip_ref:
zip_ref.extractall(work_dir)
print('Decompressing done.')
# if it is a tar file
if '.tar' in zip_name:
print("Decompressing {} ...".format(zip_name))
with tarfile.open(zip_name, "r") as tar_ref:
tar_ref.extractall(work_dir)
print('Decompressing done.')
print('Data directory: {}'.format(os.path.splitext(zip_path)[0]))
```
You have successfully downloaded data to Binder. Therefore, you can skip to [Using whitebox](#Using-whitebox) and start testing whitebox with your own data.
### Getting data from Google Drive
Alternatively, you can upload data to [Google Drive](https://www.google.com/drive/) and then [share files publicly from Google Drive](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en). Once the file is shared publicly, you should be able to get a shareable URL. For example, https://drive.google.com/file/d/1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV.
To download files from Google Drive to Binder, you can use the Python package called [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader), which can be installed using the following command:
`pip install googledrivedownloader requests`
**Replace the following URL with your own shareable URL from Google Drive.**
```
gfile_url = 'https://drive.google.com/file/d/1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV'
```
**Extract the file id from the above URL.**
```
file_id = gfile_url.split('/')[5] #'1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV'
print('Google Drive file id: {}'.format(file_id))
```
**Download the shared file from Google Drive.**
```
from google_drive_downloader import GoogleDriveDownloader as gdd
dest_path = './testdata.zip' # choose a name for the downloaded file
gdd.download_file_from_google_drive(file_id, dest_path, unzip=True)
```
You have successfully downloaded data from Google Drive to Binder. You can now continue to [Using whitebox](#Using-whitebox) and start testing whitebox with your own data.
## Using whitebox
Here you can specify where your data are located. In this example, we will use [DEM.tif](https://github.com/giswqs/whitebox/blob/master/examples/testdata/DEM.tif), which has been downloaded to the testdata folder.
**List data under the data folder.**
```
data_dir = './testdata/'
print(os.listdir(data_dir))
```
In this simple example, we smooth [DEM.tif](https://github.com/giswqs/whitebox/blob/master/examples/testdata/DEM.tif) using a [feature preserving denoising](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/terrain_analysis/feature_preserving_denoise.rs) algorithm. Then, we fill depressions in the DEM using a [depression breaching](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/hydro_analysis/breach_depressions.rs) algorithm. Finally, we calculate [flow accumulation](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/hydro_analysis/dinf_flow_accum.rs) based on the depressionless DEM.
```
import whitebox
wbt = whitebox.WhiteboxTools()
# set whitebox working directory
wbt.set_working_dir(data_dir)
wbt.verbose = False
# call whiteboxtool
wbt.feature_preserving_smoothing("DEM.tif", "smoothed.tif", filter=9)
wbt.breach_depressions("smoothed.tif", "breached.tif")
wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif")
```
## Displaying results
This section demonstrates how to display images on Jupyter Notebook. Three Python packages are used here, including [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), and [tifffile](https://pypi.org/project/tifffile/). These three packages can be installed using the following command:
`pip install matplotlib imageio tifffile`
**Import the libraries.**
```
# comment out the `%matplotlib inline` line below if you run this tutorial in an IDE other than Jupyter Notebook
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
```
**Display one single image.**
```
raster = imageio.imread(os.path.join(data_dir, 'DEM.tif'))
plt.imshow(raster)
plt.show()
```
**Read images as numpy arrays.**
```
original = imageio.imread(os.path.join(data_dir, 'DEM.tif'))
smoothed = imageio.imread(os.path.join(data_dir, 'smoothed.tif'))
breached = imageio.imread(os.path.join(data_dir, 'breached.tif'))
flow_accum = imageio.imread(os.path.join(data_dir, 'flow_accum.tif'))
```
**Display multiple images in one plot.**
```
fig=plt.figure(figsize=(16,11))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Original DEM')
plt.imshow(original)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Breached DEM')
plt.imshow(breached)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Flow Accumulation')
plt.imshow(flow_accum)
plt.show()
```
## whitebox GUI
WhiteboxTools also provides a Graphical User Interface (GUI) - **WhiteboxTools Runner**, which can be invoked using the following Python script. *__Note that the GUI might not work in Jupyter notebooks deployed on the cloud (e.g., MyBinder.org), but it should work on Jupyter notebooks on local computers.__*
```python
import whitebox
whitebox.Runner()
```

## Citing whitebox
If you use the **whitebox** Python package for your research and publications, please consider citing the following papers to give Prof. [John Lindsay](http://www.uoguelph.ca/~hydrogeo/index.html) credits for his tremendous efforts in developing [Whitebox GAT](https://github.com/jblindsay/whitebox-geospatial-analysis-tools) and [WhiteboxTools](https://github.com/jblindsay/whitebox-tools). Without his work, this **whitebox** Python package would not exist!
* Lindsay, J. B. (2016). Whitebox GAT: A case study in geomorphometric analysis. Computers & Geosciences, 95, 75-84. http://dx.doi.org/10.1016/j.cageo.2016.07.003
## Credits
This interactive notebook is made possible by [MyBinder.org](https://mybinder.org/). Big thanks to [MyBinder.org](https://mybinder.org/) for developing the amazing binder platform, which is extremely valuable for reproducible research!
This tutorial made use of a number of open-source Python packages, including [Cookiecutter](https://github.com/audreyr/cookiecutter), [numpy](http://www.numpy.org/), [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), [tifffile](https://pypi.org/project/tifffile/), and [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader). Thanks to all developers of these wonderful Python packages!
## Contact
If you have any questions regarding this tutorial or the **whitebox** Python package, you can contact me (Dr. Qiusheng Wu) at [email protected] or https://wetlands.io/#contact
| github_jupyter |
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Dictionaries in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about dictionaries in the Python programming language. By the end of this lab, you'll know the basic dictionary operations in Python, including what a dictionary is and the operations you can perform on it.</p>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/NotebooksPython101">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#dic">Dictionaries</a>
<ul>
<li><a href="#content">What are Dictionaries?</a></li>
<li><a href="#key">Keys</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Dictionaries</a>
</li>
</ul>
<p>
Estimated time needed: <strong>20 min</strong>
</p>
</div>
<hr>
<h2 id="dic">Dictionaries</h2>
<h3 id="content">What are Dictionaries?</h3>
A dictionary consists of keys and values. It is helpful to compare a dictionary to a list: instead of being indexed by numerical positions like a list, a dictionary is indexed by keys, and these keys are used to access the values within the dictionary.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsList.png" width="650" />
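To make the comparison concrete, here is a small side-by-side sketch (the values are arbitrary examples) of accessing an element by position in a list versus by key in a dictionary:
```
# A list is indexed by numerical position
albums_list = ["Thriller", "Back in Black", "The Bodyguard"]
print(albums_list[0])            # "Thriller"

# A dictionary is indexed by keys
albums_dict = {"Thriller": "1982", "Back in Black": "1980", "The Bodyguard": "1992"}
print(albums_dict["Thriller"])   # "1982"
```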
An example of a Dictionary <code>Dict</code>:
```
# Create the dictionary
Dict = {"key1": 1, "key2": "2", "key3": [3, 3, 3], "key4": (4, 4, 4), ('key5'): 5, (0, 1): 6}
Dict
```
The keys can be strings:
```
# Access to the value by the key
Dict["key1"]
```
Keys can also be any immutable object such as a tuple:
```
# Access to the value by the key
Dict[(0, 1)]
```
Each key is separated from its value by a colon "<code>:</code>". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "<code>{}</code>".
```
# Create a sample dictionary
release_year_dict = {"Thriller": "1982", "Back in Black": "1980", \
"The Dark Side of the Moon": "1973", "The Bodyguard": "1992", \
"Bat Out of Hell": "1977", "Their Greatest Hits (1971-1975)": "1976", \
"Saturday Night Fever": "1977", "Rumours": "1977"}
release_year_dict
```
In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be one single value, however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type.
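As a quick illustration of this restriction on keys, using a mutable object such as a list as a key raises an error (a small sketch):
```
# An immutable tuple works as a key
ok_dict = {(1, 2): "a tuple key"}

# A mutable list does not: it raises TypeError: unhashable type: 'list'
try:
    bad_dict = {[1, 2]: "a list key"}
except TypeError as e:
    print("TypeError:", e)
```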
It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsStructure.png" width="650" />
<h3 id="key">Keys</h3>
You can retrieve the values based on the names:
```
# Get value by keys
release_year_dict['Thriller']
```
This corresponds to:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyOne.png" width="500" />
Similarly for <b>The Bodyguard</b>
```
# Get value by key
release_year_dict['The Bodyguard']
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyTwo.png" width="500" />
Now let's retrieve the keys of the dictionary using the method <code>keys()</code>:
```
# Get all the keys in dictionary
release_year_dict.keys()
```
You can retrieve the values using the method <code>values()</code>:
```
# Get all the values in dictionary
release_year_dict.values()
```
We can add an entry:
```
# Append value with key into dictionary
release_year_dict['Graduation'] = '2007'
release_year_dict
```
We can delete an entry:
```
# Delete entries by key
del(release_year_dict['Thriller'])
del(release_year_dict['Graduation'])
release_year_dict
```
We can verify if an element is in the dictionary:
```
# Verify the key is in the dictionary
'The Bodyguard' in release_year_dict
```
<hr>
<h2 id="quiz">Quiz on Dictionaries</h2>
<b>You will need this dictionary for the next two questions:</b>
```
# Question sample dictionary
soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic
```
a) In the dictionary <code>soundtrack_dict</code> what are the keys ?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.keys() # The Keys "The Bodyguard" and "Saturday Night Fever"
-->
b) In the dictionary <code>soundtrack_dict</code> what are the values ?
```
# Write your code below and press Shift+Enter to execute
soundtrack_dic.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
soundtrack_dic.values() # The values are "1992" and "1977"
-->
<hr>
<b>You will need this dictionary for the following questions:</b>
The albums <b>Back in Black</b>, <b>The Bodyguard</b> and <b>Thriller</b> have the following music recording sales in millions: 50, 50 and 65, respectively.
a) Create a dictionary <code>album_sales_dict</code> where the keys are the album name and the sales in millions are the values.
```
# Write your code below and press Shift+Enter to execute
album_sales_dict = {'Back in Black':50,'The Bodyguard':50,'Thriller' : 65}
album_sales_dict
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict = {"The Bodyguard":50, "Back in Black":50, "Thriller":65}
-->
b) Use the dictionary to find the total sales of <b>Thriller</b>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict['Thriller']
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict["Thriller"]
-->
c) Find the names of the albums from the dictionary using the method <code>keys</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.keys()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.keys()
-->
d) Find the names of the recording sales from the dictionary using the method <code>values</code>:
```
# Write your code below and press Shift+Enter to execute
album_sales_dict.values()
```
Double-click __here__ for the solution.
<!-- Your answer is below:
album_sales_dict.values()
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/NotebooksPython101bottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
### gQuant Tutorial
First import all the necessary modules.
```
import sys; sys.path.insert(0, '..')
import os
import warnings
import ipywidgets as widgets
from gquant.dataframe_flow import TaskGraph
warnings.simplefilter("ignore")
```
In this tutorial, we are going to use gQuant to do a simple quant job. The task is fully described in a YAML file:
```
!head -n 31 ../task_example/simple_trade.yaml
```
The YAML file describes the computation task as a graph, which we can visualize:
```
task_graph = TaskGraph.load_taskgraph('../task_example/simple_trade.yaml')
task_graph.draw(show='ipynb')
```
We define a method to organize the output images
```
def plot_figures(o, symbol):
# format the figures
figure_width = '1200px'
figure_height = '400px'
bar_figure = o[2]
sharpe_number = o[0]
cum_return = o[1]
signals = o[3]
bar_figure.layout.height = figure_height
bar_figure.layout.width = figure_width
cum_return.layout.height = figure_height
cum_return.layout.width = figure_width
cum_return.title = 'P & L %.3f' % (sharpe_number)
bar_figure.marks[0].labels = [symbol]
cum_return.marks[0].labels = [symbol]
signals.layout.height = figure_height
signals.layout.width = figure_width
bar_figure.axes = [bar_figure.axes[0]]
cum_return.axes = [cum_return.axes[0]]
output = widgets.VBox([bar_figure, cum_return, signals])
return output
```
We load the symbol name to symbol id mapping file:
```
node_stockSymbol = {"id": "node_stockSymbol",
"type": "StockNameLoader",
"conf": {"path": "./data/security_master.csv.gz"},
"inputs": []}
name_graph = TaskGraph([node_stockSymbol])
list_stocks = name_graph.run(outputs=['node_stockSymbol'])[0].to_pandas().set_index('asset_name').to_dict()['asset']
```
Evaluate the output nodes and plot the results:
```
symbol = 'REXX'
action = "load" if os.path.isfile('./.cache/load_csv_data.hdf5') else "save"
o = task_graph.run(
outputs=['node_sharpeRatio', 'node_cumlativeReturn',
'node_barplot', 'node_lineplot', 'load_csv_data'],
replace={'load_csv_data': {action: True},
'node_barplot': {'conf': {"points": 300}},
'node_assetFilter':
{'conf': {'asset': list_stocks[symbol]}}})
cached_input = o[4]
plot_figures(o, symbol)
```
Change the strategy parameters
```
o = task_graph.run(
outputs=['node_sharpeRatio', 'node_cumlativeReturn',
'node_barplot', 'node_lineplot'],
replace={'load_csv_data': {"load": cached_input},
'node_barplot': {'conf': {"points": 200}},
'node_ma_strategy': {'conf': {'fast': 1, 'slow': 10}},
'node_assetFilter': {'conf': {'asset': list_stocks[symbol]}}})
figure_combo = plot_figures(o, symbol)
figure_combo
add_stock_selector = widgets.Dropdown(options=list_stocks.keys(),
value=None, description="Add stock")
para_selector = widgets.IntRangeSlider(value=[10, 30],
min=3,
max=60,
step=1,
description="MA:",
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True)
def para_selection(*stocks):
with out:
symbol = add_stock_selector.value
para1 = para_selector.value[0]
para2 = para_selector.value[1]
o = task_graph.run(
outputs=['node_sharpeRatio', 'node_cumlativeReturn',
'node_barplot', 'node_lineplot'],
replace={'load_csv_data': {"load": cached_input},
'node_barplot': {'conf': {"points": 200}},
'node_ma_strategy': {'conf': {'fast': para1, 'slow': para2}},
'node_assetFilter': {'conf': {'asset': list_stocks[symbol]}}})
figure_combo = plot_figures(o, symbol)
if (len(w.children) < 2):
w.children = (w.children[0], figure_combo,)
else:
w.children[1].children[1].marks = figure_combo.children[1].marks
w.children[1].children[2].marks = figure_combo.children[2].marks
w.children[1].children[1].title = 'P & L %.3f' % (o[0])
out = widgets.Output(layout={'border': '1px solid black'})
add_stock_selector.observe(para_selection, 'value')
para_selector.observe(para_selection, 'value')
selectors = widgets.HBox([add_stock_selector, para_selector])
w = widgets.VBox([selectors])
w
```
# Notebook for loading metadata
This notebook takes a MinIO directory with an open-data structure and creates the Hive Metastore definition for each of the tables.
## Libraries
```
from minio import Minio
import pandas as pd
from io import StringIO
from io import BytesIO
import json
from pyhive import hive
```
## Defining the MinIO connection
```
client = Minio(
"minio:9000",
access_key="minio",
secret_key="minio123",
secure=False
)
```
Path to the correspondence dictionary:
```
#corr_file = None
corr_file = 'correspondence.json'
#corr_file = 'correspondence.csv'
```
## Funciones de ayuda
```
# Gets the correspondence dictionary from a file
def get_corrr_dic(corr_file = None):
corr_dic = {"tables":{},"columns":{}}
if corr_file is None:
        # If None is passed to the function, return a blank dictionary
pass
elif corr_file.endswith(".json"):
with open(corr_file, 'r') as myfile:
corr_dic = json.loads(myfile.read())
elif corr_file.endswith(".csv"):
df = pd.read_csv(corr_file)
df.apply(fill_dic,1,new_dic=corr_dic)
else:
raise Exception("Format not found, only .json and .csv")
return corr_dic
# Fills a correspondence dictionary with the information from a single row
def fill_dic(row,new_dic):
if row["type"] == "tables":
new_dic["tables"][row["original_name"]]=row["final_name"]
if row["type"] == "columns":
try:
new_dic["columns"][row["table"]][row["original_name"]]=row["final_name"]
except KeyError:
new_dic["columns"][row["table"]] = {}
new_dic["columns"][row["table"]][row["original_name"]]=row["final_name"]
# Creates the table definition for the given directory using the data dictionary
def create_hive_table(client, bucket, directory_object_name):
table_name = get_table_name(directory_object_name)
col_def = get_col_def(client, bucket, directory_object_name)
data_location = directory_object_name+"conjunto_de_datos/"
    # TODO: check whether the table already exists
table_def = """ CREATE EXTERNAL TABLE {} ({})
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\\n'
LOCATION 's3a://{}/{}'
TBLPROPERTIES ('skip.header.line.count'='1')
""".format(table_name,col_def,bucket,data_location)
return table_def
# Function that sets the table name; in this preliminary version it is the directory name, but it could be refined into something more concise
def get_table_name(directory_object_name):
old_name = directory_object_name.split("/")[-2]
try:
new_name = corr_dic["tables"][old_name]
except KeyError:
new_name = old_name
return new_name
def get_col_name(variable,table_name):
try:
new_name = corr_dic["columns"][table_name][variable]
except KeyError:
new_name = variable
return new_name
# Function that builds the column definitions: variable name and type
def get_col_def (client, bucket, directory_object_name):
data_dictionary = get_data_dictionary(client, bucket, directory_object_name)
    # TODO: check whether this is the actual column order of the table
table_name = get_table_name(directory_object_name)
names = [get_col_name(x,table_name) for x in data_dictionary["Variable"]]
types = [get_type(x) for x in data_dictionary["Tipo"]]
return ", ".join(["{} {}".format(x,y) for x,y in zip(names,types)])
# Gets the data dictionary from the open-data format to define the variables
def get_data_dictionary(client, bucket, directory_object_name):
dic_location = [obj.object_name for obj in client.list_objects(bucket, directory_object_name+"diccionario_de_datos/")]
try:
response = client.get_object(bucket, dic_location[0])
s = str(response.read(),'latin-1')
finally:
response.close()
response.release_conn()
df = pd.read_csv(StringIO(s),names=["id","Variable","Descripcion","Tipo","valor","etiqueta_rango"], index_col=False)
valid_entries = ~pd.to_numeric(df["id"],errors='coerce',downcast='integer').isna()
return df[valid_entries]
# Converts the column type from the open-data style to the SQL style
def get_type(entrada):
    # TODO: investigate the Hive types and see which one fits best
    # TODO: add lengths
lookup = {"C":"STRING","N":"DECIMAL"}
for key in lookup:
if entrada.startswith(key):
return lookup[key]
    raise Exception('Variable of type "{}" not found'.format(entrada))
# Walks all the sub-directories to create a table for each one
def create_dataset_tables(client,bucket,data_set):
definitions = [create_hive_table(client, bucket, obj.object_name) for obj in client.list_objects(bucket, data_set) if obj.is_dir]
return definitions
# TODO: execute directly against the Hive Metastore and check whether the definition succeeded
```
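For reference, judging from how `get_corrr_dic` and `fill_dic` consume it, a correspondence file would have roughly the following shape, shown here as the equivalent Python dictionary (the table and column names are purely illustrative):
```
# Illustrative structure only; the real table and column names come from your data
correspondence_example = {
    "tables": {
        "original_table_name": "final_table_name"
    },
    "columns": {
        "final_table_name": {
            "original_column_name": "final_column_name"
        }
    }
}
```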
## Execution example
In this example, the data loaded in the CargaObjetos.ipynb notebook is used to create the table definitions.
**Nota:** In the downloaded data there is a problem with the file *conjunto_de_datos_enigh_2018_ns_csv\conjunto_de_datos_poblacion_enigh_2018_ns\diccionario_de_datos\diccionario_datos_poblacion_enigh_2018_ns.csv* on line *81*: quotation marks must be added so that the CSV is parsed correctly.
This can be done automatically in the DescargaDatos.ipynb notebook.
```
corr_dic = get_corrr_dic(corr_file)
sqls = create_dataset_tables(client,"hive","warehouse/conjunto_de_datos_enigh_2018_ns_csv/")
sqls += create_dataset_tables(client,"hive","warehouse/conjunto_de_datos_enigh2016_nueva_serie_csv/")
sqls += create_dataset_tables(client,"hive","warehouse/enigh_ncv_2014_csv/")
print(sqls[0])
def get_connection(host='hive-server',port=10000,auth='NOSASL',database="default"):
conn = hive.Connection(host=host,port=port,auth=auth,database=database)
return conn
def procc_SQL_list(list_sqls):
conn = get_connection()
cursor = conn.cursor()
for sql in list_sqls:
try:
cursor.execute(sql)
except:
            print("Error creating table "+sql.split(" ")[11])
conn.close()
def drop_list_tables(list_sqls):
conn = get_connection()
cursor = conn.cursor()
for sql in list_sqls:
try:
cursor.execute("DROP TABLE "+sql.split(" ")[3])
except:
            print("Error dropping table "+sql.split(" ")[3])
conn.close()
def list_tables():
conn = get_connection()
try:
df = pd.read_sql("SHOW TABLES", conn)
except:
        print("Error listing tables")
conn.close()
return df
procc_SQL_list(sqls)
list_tables()
```
```
import lifelines
import pymc as pm
import pyBMA
import matplotlib.pyplot as plt
import numpy as np
from math import log
from datetime import datetime
import pandas as pd
%matplotlib inline
```
The first step in any data analysis is acquiring and munging the data
An example data set can be found at:
https://jakecoltman.gitlab.io/website/post/pydata/
Download the file output.txt and transform it into a format like below where the event column should be 0 if there's only one entry for an id, and 1 if there are two entries:
End date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
id,time_to_convert,age,male,event,search,brand
```
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
df
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
```
Problems:
2 - Try to fit your data from section 1
3 - Use the results to plot the distribution of the median
--------
4 - Try adjusting the number of samples, the burn parameter and the amount of thinning to get good answers
5 - Try adjusting the prior and see how it affects the estimate
--------
6 - Try to fit a different distribution to the data
7 - Compare answers
Bonus - test the hypothesis that the true median is greater than a certain amount
For question 2, note that the median of a Weibull is:
$$\beta (\log 2)^{1/\alpha}$$
```
#Solution to question 3:
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
#Solution to question 4:
### Increasing the burn parameter allows us to discard results before convergence
### Thinning the results removes autocorrelation
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
## Solution to bonus
## Super easy to do in the Bayesian framework, all we need to do is look at what % of samples
## meet our criteria
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 15.6
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
#Cox model
```
If we want to look at covariates, we need a new approach. We'll use Cox proportional hazards. More information here.
```
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
```
Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different set of features
4 - For your results in part 3, calculate how much more likely a death event is for one than for the other over a given period of time
```
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
# Solution to prediction
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "45 year old male")
plt.plot(survival_2,label = "23 year old male")
plt.legend(loc = "lower left")
#Difference in survival
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
```
Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Compare these results to the lifelines results from earlier
3 - Try running with different priors
```
##Solution to 1
from pyBMA import CoxPHFitter
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
print(bmaCox.summary)
#Low probability for everything favours parsimonious models
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
print(bmaCox.summary)
#High prior probability for everything favours including more covariates
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.9]*4)
print(bmaCox.summary)
#Different priors per covariate encode different prior beliefs about their inclusion
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
```
```
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
train=pd.read_csv('../input/nlp-getting-started/train.csv')
test=pd.read_csv('../input/nlp-getting-started/test.csv')
sample=pd.read_csv('../input/nlp-getting-started/sample_submission.csv')
train.head()
train.info()
train.describe()
train.shape
train.head()
target = train['target']
sns.countplot(target)
train.drop(['target'], inplace =True,axis =1)
def concat_df(train, test):
return pd.concat([train, test], sort=True).reset_index(drop=True)
df_all = concat_df(train, test)
print(train.shape)
print(test.shape)
print(df_all.shape)
df_all.head()
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
sentences = train['text']
train_size = int(7613*0.8)
train_sentences = sentences[:train_size]
train_labels = target[:train_size]
test_sentences = sentences[train_size:]
test_labels = target[train_size:]
vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type='post'
oov_tok = "<OOV>"
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(train_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(train_sentences)
padded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(test_sentences)
testing_padded = pad_sequences(testing_sequences,maxlen=max_length)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(14, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 10
train_labels = np.asarray(train_labels)
test_labels = np.asarray(test_labels)
history = model.fit(padded, train_labels, epochs=num_epochs, validation_data=(testing_padded, test_labels))
def plot(history,string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot(history, "accuracy")
plot(history, 'loss')
tokenizer_1 = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer_1.fit_on_texts(train['text'])
word_index = tokenizer_1.word_index
sequences = tokenizer_1.texts_to_sequences(train['text'])
padded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)
true_test_sentences = test['text']
testing_sequences = tokenizer_1.texts_to_sequences(true_test_sentences)
testing_padded = pad_sequences(testing_sequences,maxlen=max_length)
model_2 = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model_2.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model_2.summary()
target = np.asarray(target)
num_epochs = 20
history = model_2.fit(padded, target, epochs=num_epochs, verbose=2)
output = model_2.predict(testing_padded)
predicted = pd.DataFrame(output, columns=['target'])
final_output = []
for val in predicted.target:
if val > 0.5:
final_output.append(1)
else:
final_output.append(0)
sample['target'] = final_output
sample.to_csv("submission_1.csv", index=False)
sample.head()
```
```
import os
import random
import shutil
from shutil import copyfile
import csv
root_dir = "ISAFE MAIN DATABASE FOR PUBLIC/"
data = "Database/"
global_emotion_dir = "emotions_5/"
# global_emotion_dir = "emotions/"
subject_list = os.path.join(root_dir, data)
x = os.listdir(subject_list)
csv_file = "ISAFE MAIN DATABASE FOR PUBLIC/Annotations/self-annotation.csv"
labels_dictionary = {}
with open(csv_file) as rf:
rows = csv.reader(rf, delimiter=",")
for row in rows:
labels_dictionary[row[0]]=row[1]
def parse_labels(directory, cut_images):
li = os.listdir(directory)
string_directory = str(directory)
label_key = string_directory[-6:]
if not "S" in label_key:
label_key = "S"+label_key
for item in li:
path = os.path.join(directory,item)
if os.path.isdir(path):
parse_labels(path, cut_images)
elif item.endswith(".jpg"):
            if cut_images:
                # Skip the first 15 frames of each clip (file names ending in _0.jpg through _14.jpg)
                if any(item.endswith("_{}.jpg".format(n)) for n in range(15)):
                    continue
randint = random.random()
            normalized_key = label_key.replace("\\", "/")
            emotion = labels_dictionary[normalized_key]
identifier = label_key.replace("\\", "_")
pic_id = identifier+item
# randomizes the images in real time
if randint < 0.8:
train_test_validate = "train"
elif randint >= 0.8 and randint < 0.9:
train_test_validate = "validation"
else:
train_test_validate = "test"
if emotion == "1":
emotion_ = "joy"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "2":
emotion_ = "sadness"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "3":
# 3 = surprise
emotion_ = "surprise_fear"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "4":
# 4 = disgust
emotion_ = "anger_disgust"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "5":
# 5=fear
emotion_ = "surprise_fear"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "6":
#6=anger
emotion_ = "anger_disgust"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
# elif emotion == "7":
                # uncertain, I do not have a classification for this
# emotion_ = "joy"
# copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
elif emotion == "8":
emotion_ = "neutral"
copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)
else:
continue
def copy_files(pic, pic_id, orignal_dir, emotion_, global_emotion_dir, ttv):
file_ = os.path.join(orignal_dir, pic)
ttv_dir = os.path.join(global_emotion_dir, ttv)
emotion_dir = os.path.join(ttv_dir, emotion_)
dest_ = os.path.join(emotion_dir, pic_id)
if os.path.getsize(file_) != 0:
copyfile(file_, dest_)
for root, dirs, files in os.walk("emotions_copy_test_dir"):
for x in files:
os.remove(os.path.join(root, x))
parse_labels(subject_list, True)
```
<a href="https://colab.research.google.com/github/aljeshishe/FrameworkBenchmarks/blob/master/How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
pip install kaggle -q
import json
token = {'username':'aljeshishe','key':'32deca82aa1c29fbaeadcce2bf470af4'}
with open('kaggle.json', 'w') as file:
json.dump(token, file)
!mkdir ~/.kaggle
!mv kaggle.json ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c dogs-vs-cats
!unzip test1.zip
!unzip train.zip
import tensorflow as tf
import os
# use gpu/cpu/tpu
# see details in https://colab.research.google.com/drive/1cpuwjKTJbMjlvZ7opyrWzMXF_NYnjkiE#scrollTo=y3gk7nSvTUFZ
gpus = tf.config.experimental.list_physical_devices('GPU')
COLAB_TPU_ADDR = os.environ.get('COLAB_TPU_ADDR')
if COLAB_TPU_ADDR:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + COLAB_TPU_ADDR)
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
print('Running on TPU ')
elif len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])
print('Running on multiple GPUs ', [gpu.name for gpu in gpus])
elif len(gpus) == 1:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on single GPU ', gpus[0].name)
else:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on CPU')
print("Number of accelerators: ", strategy.num_replicas_in_sync)
!nvidia-smi
from google.colab import drive
drive.mount('/content/drive')
def notebook_name():
import re
import ipykernel
import requests
from notebook.notebookapp import list_running_servers
# kernel_id = re.search('kernel-(.*).json', ipykernel.connect.get_connection_file()).group(1)
for ss in list_running_servers():
response = requests.get(f'{ss["url"]}api/sessions',params={'token': ss.get('token', '')})
return response.json()[0]['name']
project, _, _ = notebook_name().rpartition('.')
import re
project = re.sub('[^-a-zA-Z0-9_]+', '_', project)
working_dir = f'/content/drive/My Drive/Colab Notebooks/{project}'
print(f'Current project: {project}')
print(f'Places at: {working_dir}')
import pathlib
pathlib.Path(working_dir).mkdir(parents=True, exist_ok=True)
!pip install wandb -q
!WANDB_API_KEY=723983b2d42ccd7c5510bbeb0549aa73f1242844
!export WANDB_API_KEY
import wandb
wandb.init(project=project, dir=working_dir)
config = wandb.config
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
import os
train_dir = 'train/'
test_dir = 'test1/'
filenames = os.listdir(train_dir)
categories = []
for filename in filenames:
category = filename.split('.')[0]
if category == 'dog':
categories.append("1")
else:
categories.append("0")
from keras.models import Sequential
from keras import layers
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation,GlobalMaxPooling2D
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.applications import VGG16
from keras.applications import InceptionResNetV2
from keras.models import Model
from keras.callbacks import EarlyStopping
import wandb
from wandb.keras import WandbCallback
def run(samples):
    config.batch_size = 64  # input batch size for training (default: 64)
    config.epochs = 20  # number of epochs to train (default: 10)
    config.lr = 1e-4  # learning rate (default: 0.01)
    config.momentum = 0.9  # SGD momentum (default: 0.5)
    config.steps_per_epoch = 20000 // 64  # should be total_train // config.batch_size
df = pd.DataFrame({
'filename': filenames,
'category': categories
})
df.head()
image_size = 224
input_shape = (image_size, image_size, 3)
epochs = config.epochs
batch_size = 16
pre_trained_model = InceptionResNetV2(input_shape=input_shape, include_top=False, weights="imagenet")
pre_trained_model.summary()
for layer in pre_trained_model.layers[:15]:
layer.trainable = False
for layer in pre_trained_model.layers[15:]:
layer.trainable = True
last_layer = pre_trained_model.get_layer('conv_7b_ac')
last_output = last_layer.output
x = GlobalMaxPooling2D()(last_output)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
model.compile(loss='binary_crossentropy',
optimizer=optimizers.SGD(lr=config.lr, momentum=config.momentum),
metrics=['accuracy'])
train_df, validate_df = train_test_split(df, test_size=0.1)
train_df = train_df.reset_index()
validate_df = validate_df.reset_index()
validate_df = validate_df.sample(n=2000).reset_index() # use for fast testing code purpose
train_df = train_df.sample(n=samples).reset_index() # use for fast testing code purpose
total_train = train_df.shape[0]
total_validate = validate_df.shape[0]
train_datagen = ImageDataGenerator(
rotation_range=15,
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest',
width_shift_range=0.1,
height_shift_range=0.1
)
train_generator = train_datagen.flow_from_dataframe(
train_df,
train_dir,
x_col='filename',
y_col='category',
class_mode='binary',
target_size=(image_size, image_size),
batch_size=batch_size
)
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_dataframe(
validate_df,
train_dir,
x_col='filename',
y_col='category',
class_mode='binary',
target_size=(image_size, image_size),
batch_size=batch_size
)
# fine-tune the model
history = model.fit_generator(
train_generator,
epochs=config.epochs,
validation_data=validation_generator,
validation_steps=total_validate//config.batch_size,
steps_per_epoch=config.steps_per_epoch,
verbose=2,
callbacks=[WandbCallback(save_model=True,
verbose=1)])
#EarlyStopping(monitor='val_loss'),
return history
counts = [10, 50, 100]
results = []
results_count = []
for count in counts:
h = run(count)
results.append(h.history)
results_count.append(count)
val_acc = [(i, max(result['val_accuracy'])) for i, result in zip(results_count, results)]
acc = [(i, max(result['accuracy'])) for i, result in zip(results_count, results)]
val_loss = [(i, min(result['val_loss'])) for i, result in zip(results_count, results)]
loss = [(i, min(result['loss'])) for i, result in zip(results_count, results)]
import matplotlib.pyplot as plt
plt.plot(*zip(*val_acc), '-o', label='val_acc')
plt.plot(*zip(*acc), '-o', label='acc')
plt.plot(*zip(*val_loss), '-o', label='val_loss')
plt.plot(*zip(*loss), '-o', label='loss')
plt.legend()
val_acc
```
**500 samples is ok for binary classification**
```
import numpy as np
import pandas as pd
import datetime
from pandas.tseries.frequencies import to_offset
import niftyutils
from niftyutils import load_nifty_data
import matplotlib.pyplot as plt
start_date = datetime.datetime(2005,8,1)
end_date = datetime.datetime(2020,9,25)
nifty_data = load_nifty_data(start_date,end_date)
```
## Daily Return Distribution (For 15 years)
```
daily_returns = (nifty_data['Close']/nifty_data['Close'].shift(1) - 1)*100
daily_returns = daily_returns.dropna()
daily_returns.describe()
plt.figure(figsize=[8,7])
plt.style.use("bmh")
plt.hist(daily_returns, density = True, bins=20, color='#2ab0ff',alpha=0.55)
plt.xlabel('% return', fontsize=15)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.tick_params(left = False, bottom = False)
plt.title('NIFTY daily % returns ({} samples)'.format(len(daily_returns)),fontsize=15)
plt.grid(False)
plt.show()
custom_bins = [daily_returns.min(),-2.5,-2,-1.5,-1,-0.75,0.75,1.0,1.5,2.0,2.5,daily_returns.max()]
categorized_daily_returns = pd.cut(daily_returns, bins=custom_bins)
categorized_daily_returns.value_counts(normalize=True,sort=False)
custom_bins_compact = [daily_returns.min(),-3,-1.5,-1.0,1.0,1.5,3.0,daily_returns.max()]
categorized_daily_returns = pd.cut(daily_returns, bins=custom_bins_compact)
categorized_daily_returns.value_counts(normalize=True,sort=False)
```
## Weekly Return Distribution (For 15 years)
```
weekly_nifty_data = nifty_data.resample('W').agg(niftyutils.OHLC_CONVERSION_DICT)
weekly_nifty_data.index = weekly_nifty_data.index - to_offset('6D')
weekly_returns = (weekly_nifty_data['Close']/weekly_nifty_data['Close'].shift(1) - 1)*100
weekly_returns = weekly_returns.dropna().rename('returns')
weekly_returns.describe()
plt.figure(figsize=[8,7])
plt.style.use("bmh")
plt.hist(weekly_returns, density = True, bins=20, color='#2ab0ff',alpha=0.55)
plt.xlabel('% return', fontsize=15)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.tick_params(left = False, bottom = False)
plt.title('NIFTY weekly % returns ({} samples)'.format(len(weekly_returns)),fontsize=15)
plt.grid(False)
plt.show()
custom_bins_compact = [weekly_returns.min(),-5,-2.5,2.5,5,weekly_returns.max()]
categorized_weekly_returns = pd.cut(weekly_returns, bins=custom_bins_compact)
categorized_weekly_returns.value_counts(normalize=True,sort=False)
custom_bins_labels = ['-ve Extreme','-ve','normal','+ve','+ve Extreme']
return_categories = pd.cut(weekly_returns, bins=custom_bins_compact,labels=custom_bins_labels).rename('category')
weekly_returns_categorized = pd.concat([weekly_returns, return_categories], axis=1)
```
| github_jupyter |
```
%matplotlib inline
import gym
import itertools
import matplotlib
import numpy as np
import sys
import sklearn.pipeline
import sklearn.preprocessing
if "../" not in sys.path:
sys.path.append("../")
from lib import plotting
from sklearn.linear_model import SGDRegressor
from sklearn.kernel_approximation import RBFSampler
matplotlib.style.use('ggplot')
env = gym.envs.make("MountainCar-v0")
# Feature Preprocessing: Normalize to zero mean and unit variance
# We use a few samples from the observation space to do this
observation_examples = np.array([env.observation_space.sample() for x in range(10000)])
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(observation_examples)
# Used to convert a state to a featurized representation.
# We use RBF kernels with different variances to cover different parts of the space
featurizer = sklearn.pipeline.FeatureUnion([
("rbf1", RBFSampler(gamma=5.0, n_components=100)),
("rbf2", RBFSampler(gamma=2.0, n_components=100)),
("rbf3", RBFSampler(gamma=1.0, n_components=100)),
("rbf4", RBFSampler(gamma=0.5, n_components=100))
])
featurizer.fit(scaler.transform(observation_examples))
class Estimator():
"""
Value Function approximator.
"""
def __init__(self):
# We create a separate model for each action in the environment's
# action space. Alternatively we could somehow encode the action
# into the features, but this way it's easier to code up.
self.models = []
for _ in range(env.action_space.n):
model = SGDRegressor(learning_rate="constant")
# We need to call partial_fit once to initialize the model
# or we get a NotFittedError when trying to make a prediction
# This is quite hacky.
model.partial_fit([self.featurize_state(env.reset())], [0])
self.models.append(model)
def featurize_state(self, state):
"""
Returns the featurized representation for a state.
"""
scaled = scaler.transform([state])
featurized = featurizer.transform(scaled)
return featurized[0]
def predict(self, s, a=None):
"""
Makes value function predictions.
Args:
s: state to make a prediction for
a: (Optional) action to make a prediction for
Returns
If an action a is given this returns a single number as the prediction.
            If no action is given this returns a vector of predictions for all actions
in the environment where pred[i] is the prediction for action i.
"""
features = self.featurize_state(s)
        if a is None:
return np.array([m.predict([features])[0] for m in self.models])
else:
return self.models[a].predict([features])[0]
def update(self, s, a, y):
"""
Updates the estimator parameters for a given state and action towards
the target y.
"""
features = self.featurize_state(s)
self.models[a].partial_fit([features], [y])
def make_epsilon_greedy_policy(estimator, epsilon, nA):
"""
Creates an epsilon-greedy policy based on a given Q-function approximator and epsilon.
Args:
estimator: An estimator that returns q values for a given state
        epsilon: The probability to select a random action. Float between 0 and 1.
nA: Number of actions in the environment.
Returns:
A function that takes the observation as an argument and returns
the probabilities for each action in the form of a numpy array of length nA.
"""
def policy_fn(observation):
A = np.ones(nA, dtype=float) * epsilon / nA
q_values = estimator.predict(observation)
best_action = np.argmax(q_values)
A[best_action] += (1.0 - epsilon)
return A
return policy_fn
def q_learning(env, estimator, num_episodes, discount_factor=1.0, epsilon=0.1, epsilon_decay=1.0):
"""
    Q-Learning algorithm for off-policy TD control using Function Approximation.
Finds the optimal greedy policy while following an epsilon-greedy policy.
Args:
env: OpenAI environment.
estimator: Action-Value function estimator
num_episodes: Number of episodes to run for.
        discount_factor: Gamma discount factor.
        epsilon: Chance to sample a random action. Float between 0 and 1.
epsilon_decay: Each episode, epsilon is decayed by this factor
Returns:
An EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.
"""
# Keeps track of useful statistics
stats = plotting.EpisodeStats(
episode_lengths=np.zeros(num_episodes),
episode_rewards=np.zeros(num_episodes))
for i_episode in range(num_episodes):
# The policy we're following
policy = make_epsilon_greedy_policy(
estimator, epsilon * epsilon_decay**i_episode, env.action_space.n)
# Print out which episode we're on, useful for debugging.
# Also print reward for last episode
last_reward = stats.episode_rewards[i_episode - 1]
sys.stdout.flush()
# Reset the environment and pick the first action
state = env.reset()
# Only used for SARSA, not Q-Learning
next_action = None
# One step in the environment
for t in itertools.count():
# Choose an action to take
# If we're using SARSA we already decided in the previous step
if next_action is None:
action_probs = policy(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
else:
action = next_action
# Take a step
next_state, reward, done, _ = env.step(action)
# Update statistics
stats.episode_rewards[i_episode] += reward
stats.episode_lengths[i_episode] = t
# TD Update
q_values_next = estimator.predict(next_state)
# Use this code for Q-Learning
# Q-Value TD Target
td_target = reward + discount_factor * np.max(q_values_next)
# Use this code for SARSA TD Target for on policy-training:
# next_action_probs = policy(next_state)
# next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs)
# td_target = reward + discount_factor * q_values_next[next_action]
# Update the function approximator using our target
estimator.update(state, action, td_target)
print("\rStep {} @ Episode {}/{} ({})".format(t, i_episode + 1, num_episodes, last_reward), end="")
if done:
break
state = next_state
return stats
estimator = Estimator()
# Note: For the Mountain Car we don't actually need an epsilon > 0.0
# because our initial estimate for all states is too "optimistic" which leads
# to the exploration of all states.
stats = q_learning(env, estimator, 100, epsilon=0.0)
plotting.plot_cost_to_go_mountain_car(env, estimator)
plotting.plot_episode_stats(stats, smoothing_window=25)
```
# Writing Reusable Code using Functions in Python

### Part 4 of "Data Analysis with Python: Zero to Pandas"
This tutorial covers the following topics:
- Creating and using functions in Python
- Local variables, return values, and optional arguments
- Reusing functions and using Python library functions
- Exception handling using `try`-`except` blocks
- Documenting functions using docstrings
## Creating and using functions
A function is a reusable set of instructions that takes one or more inputs, performs some operations, and often returns an output. Python contains many in-built functions like `print`, `len`, etc., and provides the ability to define new ones.
```
today = "Saturday"
print("Today is", today)
```
You can define a new function using the `def` keyword.
```
def say_hello():
print('Hello there!')
print('How are you?')
```
Note the round brackets or parentheses `()` and colon `:` after the function's name. Both are essential parts of the syntax. The function's *body* contains an indented block of statements.
The statements inside a function's body are not executed when the function is defined. To execute the statements, we need to *call* or *invoke* the function.
```
say_hello()
```
### Function arguments
Functions can accept zero or more values as *inputs* (also known as *arguments* or *parameters*). Arguments help us write flexible functions that can perform the same operations on different values. Further, functions can return a result that can be stored in a variable or used in other expressions.
Here's a function that filters out the even numbers from a list and returns a new list using the `return` keyword.
```
def say_hello(name):
print('Hello {}'.format(name))
say_hello("John")
say_hello("Jane")
def filter_even(number_list):
result_list = []
for number in number_list:
if number % 2 == 0:
result_list.append(number)
return result_list
```
Can you understand what the function does by looking at the code? If not, try executing each line of the function's body separately within a code cell with an actual list of numbers in place of `number_list`.
```
even_list = filter_even([1, 2, 3, 4, 5, 6, 7])
even_list
filter_even([1, 3, 5, 7])
```
## Writing great functions in Python
As a programmer, you will spend most of your time writing and using functions. Python offers many features to make your functions powerful and flexible. Let's explore some of these by solving a problem:
> Radha is planning to buy a house that costs `$1,260,000`. She is considering two options to finance her purchase:
>
> * Option 1: Make an immediate down payment of `$300,000`, and take an 8-year loan with an interest rate of 10% (compounded monthly) for the remaining amount.
> * Option 2: Take a 10-year loan with an interest rate of 8% (compounded monthly) for the entire amount.
>
> Both these loans have to be paid back in equal monthly installments (EMIs). Which loan has a lower EMI among the two?
Since we need to compare the EMIs for two loan options, defining a function to calculate the EMI for a loan would be a great idea. The inputs to the function would be cost of the house, the down payment, duration of the loan, rate of interest etc. We'll build this function step by step.
First, let's write a simple function that calculates the EMI on the entire cost of the house, assuming that the loan must be paid back in one year, and there is no interest or down payment.
```
def loan_emi(amount):
emi = amount / 12
print('The EMI is ${}'.format(emi))
loan_emi(12_60_000)
```
### Local variables and scope
Let's add a second argument to account for the duration of the loan in months.
```
def loan_emi(amount, duration):
emi = amount / duration
print('The EMI is ${}'.format(emi))
```
Note that the variable `emi` defined inside the function is not accessible outside. The same is true for the parameters `amount` and `duration`. These are all *local variables* that lie within the *scope* of the function.
> **Scope**: Scope refers to the region within the code where a particular variable is visible. Every function (or class definition) defines a scope within Python. Variables defined in this scope are called *local variables*. Variables that are available everywhere are called *global variables*. Scope rules allow you to use the same variable names in different functions without sharing values from one to the other.
```
emi
amount
duration
```
These statements raise a `NameError` because `emi`, `amount`, and `duration` are all local variables: they exist only inside the `loan_emi` function and are not accessible outside it.
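Here is a small illustrative example of the difference (the variable and function names are only for demonstration):
```
# 'greeting' is a global variable, so it is visible inside the function.
# 'message' is a local variable, so it is not visible outside the function.
greeting = "Hello"

def make_message(name):
    message = greeting + ", " + name
    return message

make_message("Radha")
```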
We can now compare an 8-year loan vs. a 10-year loan (assuming no down payment or interest).
```
loan_emi(12_60_000, 8*12)
loan_emi(12_60_000, 10*12)
```
### Return values
As you might expect, the EMI for the 8-year loan is higher compared to the 10-year loan. Right now, we're printing out the result. It would be better to return it and store the results in variables for easier comparison. We can do this using the `return` statement
```
def loan_emi(amount, duration):
emi = amount / duration
return emi
emi1 = loan_emi(12_60_000, 8*12)
emi2 = loan_emi(12_60_000, 10*12)
emi1
emi2
emi1 - emi2
```
### Optional arguments
Next, let's add another argument to account for the immediate down payment. We'll make this an *optional argument* with a default value of 0.
```
def loan_emi(amount, duration, down_payment=0):
loan_amount = amount - down_payment
    emi = loan_amount / duration
return emi
emi1 = loan_emi(12_60_000, 8*12, 3e5)
emi1
emi2 = loan_emi(12_60_000, 10*12)
emi2
```
Next, let's add the interest calculation into the function. Here's the formula used to calculate the EMI for a loan:
<img src="https://i.imgur.com/iKujHGK.png" style="width:240px">
where:
* `P` is the loan amount (principal)
* `n` is the no. of months
* `r` is the rate of interest per month
The derivation of this formula is beyond the scope of this tutorial. See this video for an explanation: https://youtu.be/Coxza9ugW4E .
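Written out, the formula in the image above (and implemented in the code below) is:

$$EMI = \frac{P \cdot r \cdot (1+r)^n}{(1+r)^n - 1}$$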
```
def loan_emi(amount, duration, rate, down_payment=0):
loan_amount = amount - down_payment
emi = loan_amount * rate * ((1 + rate)**duration) / (((1 + rate)**duration)-1)
return emi
```
Note that while defining the function, required arguments like `amount`, `duration` and `rate` must appear before optional arguments like `down_payment`.
Let's calculate the EMI for Option 1
```
loan_emi(12_60_000, 8*12, 0.1/12, 3e5)
```
While calculating the EMI for Option 2, we need not include the `down_payment` argument.
```
loan_emi(12_60_000,10*12, 0.08/12)
```
### Named arguments
Invoking a function with many arguments can often get confusing and is prone to human errors. Python provides the option of invoking functions with *named* arguments for better clarity. You can also split function invocation into multiple lines.
```
emi1 = loan_emi(
amount = 12_60_000,
duration=8*12,
rate=0.1/12,
down_payment=3e5
)
emi1
emi2 = loan_emi(amount=1260000, duration=10*12, rate=0.08/12)
emi2
def round_up(x):
    return round(x)
round_up(emi2)
```
### Modules and library functions
We can already see that the EMI for Option 1 is lower than the EMI for Option 2. However, it would be nice to round up the amount to full dollars, rather than showing digits after the decimal. To achieve this, we might want to write a function that can take a number and round it up to the next integer (e.g., 1.2 is rounded up to 2). That would be a great exercise to try out!
However, since rounding numbers is a fairly common operation, Python provides a function for it (along with thousands of other functions) as part of the [Python Standard Library](https://docs.python.org/3/library/). Functions are organized into *modules* that need to be imported to use the functions they contain.
> **Modules**: Modules are files containing Python code (variables, functions, classes, etc.). They provide a way of organizing the code for large Python projects into files and folders. The key benefit of using modules is _namespaces_: you must import the module to use its functions within a Python script or notebook. Namespaces provide encapsulation and avoid naming conflicts between your code and a module or across modules.
We can use the `ceil` function (short for *ceiling*) from the `math` module to round up numbers. Let's import the module and use it to round up the number `1.2`.
```
import math
help(math.ceil)
math.ceil(1.2)
```
Let's now use the `math.ceil` function within the `home_loan_emi` function to round up the EMI amount.
> Using functions to build other functions is a great way to reuse code and implement complex business logic while still keeping the code small, understandable, and manageable. Ideally, a function should do one thing and one thing only. If you find yourself writing a function that does too many things, consider splitting it into multiple smaller, independent functions. As a rule of thumb, try to limit your functions to 10 lines of code or less. Good programmers always write short, simple, and readable functions.
```
def loan_emi(amount, duration, rate, down_payment=0):
loan_amount = amount - down_payment
emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)
emi = math.ceil(emi)
return emi
emi1 = loan_emi(
amount=1260000,
duration=8*12,
rate=0.1/12,
down_payment=3e5
)
emi1
emi2 = loan_emi(amount=1260000, duration=10*12, rate=0.08/12)
emi2
```
Let's compare the EMIs and display a message for the option with the lower EMI.
```
if emi1 < emi2:
print('Option 1 has the lower EMI: ${}'.format(emi1))
else:
print('Option 2 has the lower EMI: ${}'.format(emi2))
```
### Reusing and improving functions
Now we know for sure that "Option 1" has the lower EMI among the two options. But what's even better is that we now have a handy function `loan_emi` that we can use to solve many other similar problems with just a few lines of code. Let's try it with a couple more questions.
> **Q**: Shaun is currently paying back a home loan for a house he bought a few years ago. The cost of the house was `$800,000`. Shaun made a down payment of `25%` of the price. He financed the remaining amount using a 6-year loan with an interest rate of `7%` per annum (compounded monthly). Shaun is now buying a car worth `$60,000`, which he is planning to finance using a 1-year loan with an interest rate of `12%` per annum. Both loans are paid back in EMIs. What is the total monthly payment Shaun makes towards loan repayment?
This question is now straightforward to solve, using the `loan_emi` function we've already defined.
```
cost_of_the_house = 800000
home_loan_duration = 6*12 #months
home_loan_rate = 0.07/12 #monthly
home_down_payment = .25 * 800000
emi_house = loan_emi(amount=cost_of_the_house,
duration=home_loan_duration,
rate=home_loan_rate,
down_payment=home_down_payment)
emi_house
cost_of_car = 60000
car_loan_duration = 1*12 #months
car_loan_rate = 0.12/12 #monthly
emi_car = loan_emi(amount=cost_of_car,
duration=car_loan_duration,
rate=car_loan_rate)
emi_car
print("Shaun makes a total monthly payment of ${} towards loan repayments.".format(emi_house+emi_car))
```
### Exceptions and `try`-`except`
> Q: If you borrow `$100,000` using a 10-year loan with an interest rate of 9% per annum, what is the total amount you end up paying as interest?
One way to solve this problem is to compare the EMIs for two loans: one with the given rate of interest and another with a 0% rate of interest. The total interest paid is then simply the sum of monthly differences over the duration of the loan.
```
emi_with_interest = loan_emi(amount=100000, duration=10*12, rate=0.09/12)
emi_with_interest
emi_without_interest = loan_emi(amount=100000, duration=10*12, rate=0./12)
emi_without_interest
```
Something seems to have gone wrong! If you look at the error message above carefully, Python tells us precisely what is wrong. Python *throws* a `ZeroDivisionError` with a message indicating that we're trying to divide a number by zero. `ZeroDivisonError` is an *exception* that stops further execution of the program.
> **Exception**: Even if a statement or expression is syntactically correct, it may cause an error when the Python interpreter tries to execute it. Errors detected during execution are called exceptions. Exceptions typically stop further execution of the program unless handled within the program using `try`-`except` statements.
Python provides many built-in exceptions *thrown* when built-in operators, functions, or methods are used incorrectly: https://docs.python.org/3/library/exceptions.html#built-in-exceptions. You can also define your custom exception by extending the `Exception` class (more on that later).
You can use the `try` and `except` statements to *handle* an exception. Here's an example:
```
try:
print("Now computing the result..")
result = 5 / 0
print("Computation was completed successfully")
except ZeroDivisionError:
print("Failed to compute result because you were trying to divide by zero")
result = None
print(result)
```
When an exception occurs inside a `try` block, the block's remaining statements are skipped. The `except` block is executed if the type of exception thrown matches that of the exception being handled. After executing the `except` block, the program execution returns to the normal flow.
You can also handle more than one type of exception using multiple `except` statements. Learn more about exceptions here: https://www.w3schools.com/python/python_try_except.asp .
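For instance, a minimal sketch with two `except` blocks (the specific operations here are only illustrative):
```
try:
    number = int("twenty")   # raises ValueError before the division is reached
    result = 100 / number
except ValueError:
    print("Could not convert the input to a number")
    result = None
except ZeroDivisionError:
    print("Cannot divide by zero")
    result = None
print(result)
```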
Let's enhance the `loan_emi` function to use `try`-`except` to handle the scenario where the interest rate is 0%. It's common practice to make changes/enhancements to functions over time as new scenarios and use cases come up. It makes functions more robust & versatile.
```
def loan_emi(amount, duration, rate, down_payment=0):
loan_amount = amount - down_payment
try:
emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)
except ZeroDivisionError:
emi = loan_amount / duration
emi = math.ceil(emi)
return emi
```
We can use the updated `loan_emi` function to solve our problem.
> **Q**: If you borrow `$100,000` using a 10-year loan with an interest rate of 9% per annum, what is the total amount you end up paying as interest?
```
emi_with_interest = loan_emi(amount=100000, duration=10*12, rate=0.09/12)
emi_with_interest
emi_without_interest = loan_emi(amount=100000, duration=10*12, rate=0)
emi_without_interest
total_interest = (emi_with_interest - emi_without_interest) * 10*12
print("The total interest paid is ${}.".format(total_interest))
```
### Documenting functions using Docstrings
We can add some documentation within our function using a *docstring*. A docstring is simply a string that appears as the first statement within the function body, and is used by the `help` function. A good docstring describes what the function does, and provides some explanation about the arguments.
```
def loan_emi(amount, duration, rate, down_payment=0):
"""Calculates the equal montly installment (EMI) for a loan.
Arguments:
amount - Total amount to be spent (loan + down payment)
duration - Duration of the loan (in months)
rate - Rate of interest (monthly)
        down_payment (optional) - Optional initial payment (deducted from amount)
"""
loan_amount = amount - down_payment
try:
emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)
except ZeroDivisionError:
emi = loan_amount / duration
emi = math.ceil(emi)
return emi
```
In the docstring above, we've provided some additional information that the `duration` and `rate` are measured in months. You might even consider naming the arguments `duration_months` and `rate_monthly`, to avoid any confusion whatsoever. Can you think of some other ways to improve the function?
```
help(loan_emi)
```
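As one possible answer to the question above, here's a sketch (not part of the original lesson) that also validates the inputs and raises a `ValueError` with a helpful message when they don't make sense:
```
def loan_emi(amount, duration, rate, down_payment=0):
    """Calculates the equal monthly installment (EMI) for a loan.
    
    Arguments:
        amount - Total amount to be spent (loan + down payment)
        duration - Duration of the loan (in months)
        rate - Rate of interest (monthly)
        down_payment (optional) - Optional initial payment (deducted from amount)
    """
    # Input validation (assumes `math` was imported earlier in the notebook)
    if duration <= 0:
        raise ValueError("duration must be a positive number of months")
    if down_payment > amount:
        raise ValueError("down_payment cannot exceed the total amount")
    loan_amount = amount - down_payment
    try:
        emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)
    except ZeroDivisionError:
        emi = loan_amount / duration
    return math.ceil(emi)
```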
## Exercise - Data Analysis for Vacation Planning
You're planning a vacation, and you need to decide which city you want to visit. You have shortlisted four cities and identified the return flight cost, daily hotel cost, and weekly car rental cost. While renting a car, you need to pay for entire weeks, even if you return the car sooner.
| City | Return Flight (`$`) | Hotel per day (`$`) | Weekly Car Rental (`$`) |
|------|--------------------------|------------------|------------------------|
| Paris| 200 | 20 | 200 |
| London| 250 | 30 | 120 |
| Dubai| 370 | 15 | 80 |
| Mumbai| 450 | 10 | 70 |
Answer the following questions using the data above:
1. If you're planning a 1-week long trip, which city should you visit to spend the least amount of money?
2. How does the answer to the previous question change if you change the trip's duration to four days, ten days or two weeks?
3. If your total budget for the trip is `$1000`, which city should you visit to maximize the duration of your trip? Which city should you visit if you want to minimize the duration?
4. How does the answer to the previous question change if your budget is `$600`, `$2000`, or `$1500`?
*Hint: To answer these questions, it will help to define a function `cost_of_trip` with relevant inputs like flight cost, hotel rate, car rental rate, and duration of the trip. You may find the `math.ceil` function useful for calculating the total cost of car rental.*
```
# Use these cells to answer the question - build the function step-by-step
paris_dict = dict(city="Paris", cost_return_flight=200, cost_hotel_per_night=20, cost_car_rental_weekly=200)
london_dict = dict(city="London", cost_return_flight=250, cost_hotel_per_night=30, cost_car_rental_weekly=120)
dubai_dict = dict(city="Dubai", cost_return_flight=370, cost_hotel_per_night=15, cost_car_rental_weekly=80)
mumbai_dict = dict(city="Mumbai", cost_return_flight=450, cost_hotel_per_night=10, cost_car_rental_weekly=70)
def total_trip_cost_duration(city_dict, trip_duration = 7, trip_budget = 0):
cost_return_flight = city_dict["cost_return_flight"]
cost_hotel_per_night = city_dict["cost_hotel_per_night"]
cost_car_rental_weekly = city_dict["cost_car_rental_weekly"]
total_trip_cost = 0
total_hotel_cost = 0
total_car_rental = 0
total_trip_duration_days = 0
total_trip_duration_weeks = 0
remaining_budget = 0
total_trip_cost += cost_return_flight
if(not trip_budget):
total_hotel_cost = cost_hotel_per_night * trip_duration
total_trip_duration_weeks = (trip_duration // 7) if (trip_duration % 7 == 0) else (trip_duration // 7 + 1)
total_car_rental = cost_car_rental_weekly * total_trip_duration_weeks
total_trip_cost += (total_hotel_cost + total_car_rental)
total_trip_duration_days = trip_duration
else:
remaining_budget = trip_budget - total_trip_cost
total_cost_per_week = (cost_car_rental_weekly + (cost_hotel_per_night * 7))
total_trip_duration_weeks = (remaining_budget // total_cost_per_week)
total_trip_cost += (total_cost_per_week * total_trip_duration_weeks)
remaining_budget = trip_budget - total_trip_cost
total_trip_duration_days = total_trip_duration_weeks * 7
if (remaining_budget >= (cost_car_rental_weekly + cost_hotel_per_night)):
total_trip_duration_weeks += 1
total_trip_cost += cost_car_rental_weekly
remaining_budget = trip_budget - total_trip_cost
remaining_days = remaining_budget // cost_hotel_per_night
total_trip_cost += (cost_hotel_per_night * remaining_days)
total_trip_duration_days += remaining_days
return (total_trip_cost, total_trip_duration_days, total_trip_duration_weeks)
paris_total_trip_cost = total_trip_cost_duration(city_dict = paris_dict, trip_budget = 1150)
london_total_trip_cost = total_trip_cost_duration(city_dict = london_dict, trip_duration = 14)
dubai_total_trip_cost = total_trip_cost_duration(city_dict = dubai_dict, trip_duration = 16)
mumbai_total_trip_cost = total_trip_cost_duration(city_dict = mumbai_dict, trip_budget = 1500)
print(paris_total_trip_cost)
print(london_total_trip_cost)
print(dubai_total_trip_cost)
print(mumbai_total_trip_cost)
```
## Questions for Revision
Try answering the following questions to test your understanding of the topics covered in this notebook:
1. What is a function?
2. What are the benefits of using functions?
3. What are some built-in functions in Python?
4. How do you define a function in Python? Give an example.
5. What is the body of a function?
6. When are the statements in the body of a function executed?
7. What is meant by calling or invoking a function? Give an example.
8. What are function arguments? How are they useful?
9. How do you store the result of a function in a variable?
10. What is the purpose of the `return` keyword in Python?
11. Can you return multiple values from a function?
12. Can a `return` statement be used inside an `if` block or a `for` loop?
13. Can the `return` keyword be used outside a function?
14. What is scope in a programming language?
15. How do you define a variable inside a function?
16. What are local & global variables?
17. Can you access the variables defined inside a function outside its body? Why or why not?
18. What do you mean by the statement "a function defines a scope within Python"?
19. Do for and while loops define a scope, like functions?
20. Do if-else blocks define a scope, like functions?
21. What are optional function arguments & default values? Give an example.
22. Why should the required arguments appear before the optional arguments in a function definition?
23. How do you invoke a function with named arguments? Illustrate with an example.
24. Can you split a function invocation into multiple lines?
25. Write a function that takes a number and rounds it up to the nearest integer.
26. What are modules in Python?
27. What is a Python library?
28. What is the Python Standard Library?
29. Where can you learn about the modules and functions available in the Python standard library?
30. How do you install a third-party library?
31. What is a module namespace? How is it useful?
32. What problems would you run into if Python modules did not provide namespaces?
33. How do you import a module?
34. How do you use a function from an imported module? Illustrate with an example.
35. Can you invoke a function inside the body of another function? Give an example.
36. What is the single responsibility principle, and how does it apply while writing functions?
37. What are some characteristics of well-written functions?
38. Can you use if statements or while loops within a function? Illustrate with an example.
39. What are exceptions in Python? When do they occur?
40. How are exceptions different from syntax errors?
41. What are the different types of in-built exceptions in Python? Where can you learn about them?
42. How do you prevent the termination of a program due to an exception?
43. What is the purpose of the `try`-`except` statements in Python?
44. What is the syntax of the `try`-`except` statements? Give an example.
45. What happens if an exception occurs inside a `try` block?
46. How do you handle two different types of exceptions using `except`? Can you have multiple `except` blocks under a single `try` block?
47. How do you create an `except` block to handle any type of exception?
48. Illustrate the usage of `try`-`except` inside a function with an example.
49. What is a docstring? Why is it useful?
50. How do you display the docstring for a function?
51. What are `*args` and `**kwargs`? How are they useful? Give an example.
52. Can you define functions inside functions?
53. What is function closure in Python? How is it useful? Give an example.
54. What is recursion? Illustrate with an example.
55. Can functions accept other functions as arguments? Illustrate with an example.
56. Can functions return other functions as results? Illustrate with an example.
57. What are decorators? How are they useful?
58. Implement a function decorator which prints the arguments and result of wrapped functions.
59. What are some in-built decorators in Python?
60. What are some popular Python libraries?
| github_jupyter |
# Modeling and Simulation in Python
Chapter 3
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
```
## More than one State object
Here's the code from the previous chapter, with two changes:
1. I've added DocStrings that explain what each function does, and what parameters it takes.
2. I've added a parameter named `state` to the functions so they work with whatever `State` object we give them, instead of always using `bikeshare`. That makes it possible to work with more than one `State` object.
```
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
state.wellesley -= 1
state.olin += 1
def decorate_bikeshare():
"""Add a title and label the axes."""
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
```
And here's `run_simulation`, which is a solution to the exercise at the end of the previous notebook.
```
def run_simulation(state, p1, p2, num_steps):
"""Simulate the given number of time steps.
state: State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
num_steps: number of time steps
"""
results = TimeSeries()
for i in range(num_steps):
step(state, p1, p2)
results[i] = state.olin
plot(results, label='Olin')
```
Now we can create more than one `State` object:
```
bikeshare1 = State(olin=10, wellesley=2)
bikeshare2 = State(olin=2, wellesley=10)
```
Whenever we call a function, we indicate which `State` object to work with:
```
bike_to_olin(bikeshare1)
bike_to_wellesley(bikeshare2)
```
And you can confirm that the different objects are getting updated independently:
```
bikeshare1
bikeshare2
```
## Negative bikes
In the code we have so far, the number of bikes at one of the locations can go negative, and the number of bikes at the other location can exceed the actual number of bikes in the system.
If you run this simulation a few times, it happens often.
```
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
```
We can fix this problem using the `return` statement to exit the function early if an update would cause negative bikes.
```
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
if state.olin == 0:
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
if state.wellesley == 0:
return
state.wellesley -= 1
state.olin += 1
```
Now if you run the simulation again, it should behave as expected.
```
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
```
## Comparison operators
The `if` statements in the previous section used the comparison operator `==`. The other comparison operators are listed in the book.
It is easy to confuse the comparison operator `==` with the assignment operator `=`.
Remember that `=` creates a variable or gives an existing variable a new value.
```
x = 5
```
Whereas `==` compares two values and returns `True` if they are equal.
```
x == 5
```
You can use `==` in an `if` statement.
```
if x == 5:
print('yes, x is 5')
```
But if you use `=` in an `if` statement, you get an error.
```
# If you remove the # from the if statement and run it, you'll get
# SyntaxError: invalid syntax
#if x = 5:
# print('yes, x is 5')
```
**Exercise:** Add an `else` clause to the `if` statement above, and print an appropriate message.
Replace the `==` operator with one or two of the other comparison operators, and confirm they do what you expect.
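For reference, one possible solution might look like this sketch (using `<` and `!=` as the alternative operators):
```
# Solution sketch for the exercise above
if x == 5:
    print('yes, x is 5')
else:
    print('no, x is not 5')

# Trying a couple of the other comparison operators
if x < 10:
    print('x is less than 10')

if x != 3:
    print('x is not 3')
```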
## Metrics
Now that we have a working simulation, we'll use it to evaluate alternative designs and see how good or bad they are. The metric we'll use is the number of customers who arrive and find no bikes available, which might indicate a design problem.
First we'll make a new `State` object that creates and initializes additional state variables to keep track of the metrics.
```
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
```
Next we need versions of `bike_to_wellesley` and `bike_to_olin` that update the metrics.
```
def bike_to_wellesley(state):
"""Move one bike from Olin to Wellesley.
state: bikeshare State object
"""
if state.olin == 0:
state.olin_empty += 1
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
"""Move one bike from Wellesley to Olin.
state: bikeshare State object
"""
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
```
Now when we run a simulation, it keeps track of unhappy customers.
```
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
```
After the simulation, we can print the number of unhappy customers at each location.
```
bikeshare.olin_empty
bikeshare.wellesley_empty
```
## Exercises
**Exercise:** As another metric, we might be interested in the time until the first customer arrives and doesn't find a bike. To make that work, we have to add a "clock" to keep track of how many time steps have elapsed:
1. Create a new `State` object with an additional state variable, `clock`, initialized to 0.
2. Write a modified version of `step` that adds one to the clock each time it is invoked.
Test your code by running the simulation and check the value of `clock` at the end.
```
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0)
# Solution
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
state.clock += 1
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
```
**Exercise:** Continuing the previous exercise, let's record the time when the first customer arrives and doesn't find a bike.
1. Create a new `State` object with an additional state variable, `t_first_empty`, initialized to -1 as a special value to indicate that it has not been set.
2. Write a modified version of `step` that checks whether `olin_empty` and `wellesley_empty` are 0. If not, it should set `t_first_empty` to `clock` (but only if `t_first_empty` has not already been set).
Test your code by running the simulation and printing the values of `olin_empty`, `wellesley_empty`, and `t_first_empty` at the end.
```
# Solution
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0, t_first_empty=-1)
# Solution
def step(state, p1, p2):
"""Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
"""
state.clock += 1
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
if state.t_first_empty != -1:
return
if state.olin_empty + state.wellesley_empty > 0:
state.t_first_empty = state.clock
# Solution
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
# Solution
bikeshare
```
| github_jupyter |
# Tutorial: PyTorch
```
__author__ = "Ignacio Cases"
__version__ = "CS224u, Stanford, Spring 2021"
```
## Contents
1. [Motivation](#Motivation)
1. [Importing PyTorch](#Importing-PyTorch)
1. [Tensors](#Tensors)
1. [Tensor creation](#Tensor-creation)
1. [Operations on tensors](#Operations-on-tensors)
1. [GPU computation](#GPU-computation)
1. [Neural network foundations](#Neural-network-foundations)
1. [Automatic differentiation](#Automatic-differentiation)
1. [Modules](#Modules)
1. [Sequential](#Sequential)
1. [Criteria and loss functions](#Criteria-and-loss-functions)
1. [Optimization](#Optimization)
1. [Training a simple model](#Training-a-simple-model)
1. [Reproducibility](#Reproducibility)
1. [References](#References)
## Motivation
PyTorch is a Python package designed to carry out scientific computation. We use PyTorch in a range of different environments: local model development, large-scale deployments on big clusters, and even _inference_ in embedded, low-power systems. While similar in many aspects to NumPy, PyTorch enables us to perform fast and efficient training of deep learning and reinforcement learning models not only on the CPU but also on a GPU or other ASICs (Application Specific Integrated Circuits) for AI, such as Tensor Processing Units (TPU).
## Importing PyTorch
This tutorial assumes a working installation of PyTorch using your `nlu` environment, but the content applies to any regular installation of PyTorch. If you don't have a working installation of PyTorch, please follow the instructions in [the setup notebook](setup.ipynb).
To get started working with PyTorch we simply begin by importing the torch module:
```
import torch
```
**Side note**: why not `import pytorch`? The name of the package is `torch` for historical reasons: `torch` is the original name of the ancestor of the PyTorch library that got started back in 2002 as a C library with Lua scripting. It was only much later that the original `torch` was ported to Python. The PyTorch project decided to prefix the Py to make clear that this library refers to the Python version, as it was confusing back then to know which `torch` one was referring to. All the internal references to the library use just `torch`. It's possible that PyTorch will be renamed at some point, as the original `torch` is no longer maintained and there is no longer confusion.
We can see the version installed and determine whether or not we have a GPU-enabled PyTorch install by issuing
```
print("PyTorch version {}".format(torch.__version__))
print("GPU-enabled installation? {}".format(torch.cuda.is_available()))
```
PyTorch has good [documentation](https://pytorch.org/docs/stable/index.html) but it can take some time to familiarize oneself with the structure of the package; it's worth the effort to do so!
We will also make use of other imports:
```
import numpy as np
```
## Tensors
Tensors are collections of numbers represented as arrays, and they are the basic building blocks in PyTorch.
You are probably already familiar with several types of tensors:
- A scalar, a single number, is a zero-th order tensor.
- A column vector $v$ of dimensionality $d_c \times 1$ is a tensor of order 1.
- A row vector $x$ of dimensionality $1 \times d_r$ is a tensor of order 1.
- A matrix $A$ of dimensionality $d_r \times d_c$ is a tensor of order 2.
- A cube $T$ of dimensionality $d_r \times d_c \times d_d$ is a tensor of order 3.
Tensors are the fundamental blocks that carry information in our mathematical models, and they are composed using several operations to create mathematical graphs in which information can flow (propagate) forward (functional application) and backwards (using the chain rule).
We have seen multidimensional arrays in NumPy. These NumPy objects are also a representation of tensors.
**Side note**: what is a tensor __really__? Tensors are important mathematical objects with applications in multiple domains in mathematics and physics. The term "tensor" comes from the usage of these mathematical objects to describe the stretching of a volume of matter under *tension*. They are central objects of study in a subfield of mathematics known as differential geometry, which deals with the geometry of continuous vector spaces. As a very high-level summary (and as first approximation), tensors are defined as multi-linear "machines" that have a number of slots (their order, a.k.a. rank), taking a number of "column" vectors and "row" vectors *to produce a scalar*. For example, a tensor $\mathbf{A}$ (represented by a matrix with rows and columns that you could write on a sheet of paper) can be thought of having two slots. So when $\mathbf{A}$ acts upon a column vector $\mathbf{v}$ and a row vector $\mathbf{x}$, it returns a scalar:
$$\mathbf{A}(\mathbf{x}, \mathbf{v}) = s$$
If $\mathbf{A}$ acts only on the column vector, for example, the result will be another column tensor $\mathbf{u}$ of one order less than the order of $\mathbf{A}$. Thus, applying $\mathbf{v}$ is similar to "removing" its slot:
$$\mathbf{u} = \mathbf{A}(\mathbf{v})$$
The resulting $\mathbf{u}$ can later interact with another row vector to produce a scalar or be used in any other way.
This can be a very powerful way of thinking about tensors, as their slots can guide you when writing code, especially given that PyTorch has a _functional_ approach to modules in which this view is very much highlighted. As we will see below, these simple equations above have a completely straightforward representation in the code. In the end, most of what our models will do is to process the input using this type of functional application so that we end up having a tensor output and a scalar value that measures how good our output is with respect to the real output value in the dataset.
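As a small preview of what this looks like in code (a sketch using operations introduced later in this tutorial), the two equations above map almost directly onto PyTorch calls:
```
# torch is imported above; the shapes below are chosen just for illustration
A = torch.randn(3, 3)   # an order-2 tensor: two "slots"
v = torch.randn(3, 1)   # a column vector
x = torch.randn(1, 3)   # a row vector

u = torch.matmul(A, v)                   # u = A(v): filling one slot leaves an order-1 tensor
s = torch.matmul(x, torch.matmul(A, v))  # s = A(x, v): filling both slots produces a scalar
print(u.size(), s.item())
```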
### Tensor creation
Let's get started with tensors in PyTorch. The framework supports eight different types ([Lapan 2018](#References)):
- 3 float types (16-bit, 32-bit, 64-bit): `torch.FloatTensor` is the class name for the commonly used 32-bit tensor.
- 5 integer types (signed 8-bit, unsigned 8-bit, 16-bit, 32-bit, 64-bit): common tensors of these types are the 8-bit unsigned tensor `torch.ByteTensor` and the 64-bit `torch.LongTensor`.
There are three fundamental ways to create tensors in PyTorch ([Lapan 2018](#References)):
- Call a tensor constructor of a given type, which will create a non-initialized tensor. So we then need to fill this tensor later to be able to use it.
- Call a built-in method in the `torch` module that returns a tensor that is already initialized.
- Use the PyTorch–NumPy bridge.
#### Calling the constructor
Let's first create a 2 x 3 dimensional tensor of the type `float`:
```
t = torch.FloatTensor(2, 3)
print(t)
print(t.size())
```
Note that we specified the dimensions as the arguments to the constructor by passing the numbers directly – and not a list or a tuple, which would have very different outcomes as we will see below! We can always inspect the size of the tensor using the `size()` method.
The constructor method allocates space in memory for this tensor. However, the tensor is *non-initialized*. In order to initialize it, we need to call any of the tensor initialization methods of the basic tensor types. For example, the tensor we just created has a built-in method `zero_()`:
```
t.zero_()
```
The underscore after the method name is important: it means that the operation happens _in place_: the returned object is the same object but now with different content. A very handy way to construct a tensor using the constructor happens when we have available the content we want to put in the tensor in the form of a Python iterable. In this case, we just pass it as the argument to the constructor:
```
torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
```
#### Calling a method in the torch module
A very convenient way to create tensors, in addition to using the constructor method, is to use one of the multiple methods provided in the `torch` module. In particular, the `tensor` method allows us to pass a number or iterable as the argument to get the appropriately typed tensor:
```
tl = torch.tensor([1, 2, 3])
t = torch.tensor([1., 2., 3.])
print("A 64-bit integer tensor: {}, {}".format(tl, tl.type()))
print("A 32-bit float tensor: {}, {}".format(t, t.type()))
```
We can create a similar 2x3 tensor to the one above by using the `torch.zeros()` method, passing a sequence of dimensions to it:
```
t = torch.zeros(2, 3)
print(t)
```
There are many methods for creating tensors. We list some useful ones:
```
t_zeros = torch.zeros_like(t) # zeros_like returns a new tensor
t_ones = torch.ones(2, 3) # creates a tensor with 1s
t_fives = torch.empty(2, 3).fill_(5) # creates a non-initialized tensor and fills it with 5
t_random = torch.rand(2, 3) # creates a uniform random tensor
t_normal = torch.randn(2, 3) # creates a normal random tensor
print(t_zeros)
print(t_ones)
print(t_fives)
print(t_random)
print(t_normal)
```
We now see emerging two important paradigms in PyTorch. The _imperative_ approach to performing operations, using _inplace_ methods, is in marked contrast with an additional paradigm also used in PyTorch, the _functional_ approach, where the returned object is a copy of the original object. Both paradigms have their specific use cases as we will be seeing below. The rule of thumb is that _inplace_ methods are faster and don't require extra memory allocation in general, but they can be tricky to understand (keep this in mind regarding the computational graph that we will see below). _Functional_ methods make the code referentially transparent, which is a highly desired property that makes it easier to understand the underlying math, but we rely on the efficiency of the implementation:
```
# creates a new copy of the tensor that is still linked to
# the computational graph (see below)
t1 = torch.clone(t)
assert id(t) != id(t1), 'Functional methods create a new copy of the tensor'
# To create a new _independent_ copy, we do need to detach
# from the graph
t1 = torch.clone(t).detach()
```
#### Using the PyTorch–NumPy bridge
A quite useful feature of PyTorch is its almost seamless integration with NumPy, which allows us to perform operations on NumPy and interact from PyTorch with the large number of NumPy libraries as well. Converting a NumPy multi-dimensional array into a PyTorch tensor is very simple: we only need to call the `tensor` method with NumPy objects as the argument:
```
# Create a new multi-dimensional array in NumPy (np.float64, the NumPy default datatype)
a = np.array([1., 2., 3.])
# Convert the array to a torch tensor
t = torch.tensor(a)
print("NumPy array: {}, type: {}".format(a, a.dtype))
print("Torch tensor: {}, type: {}".format(t, t.dtype))
```
We can also seamlessly convert a PyTorch tensor into a NumPy array:
```
t.numpy()
```
**Side note**: why not `torch.from_numpy(a)`? The `from_numpy()` method is deprecated in favor of `tensor()`, which is a more capable method in the torch package. `from_numpy()` is only there for backwards compatibility. It can be a little bit quirky, so I recommend using the newer method in PyTorch >= 0.4.
#### Indexing
Indexing works as expected with NumPy:
```
t = torch.randn(2, 3)
t[ : , 0]
```
PyTorch also supports indexing using long tensors, for example:
```
t = torch.randn(5, 6)
print(t)
i = torch.tensor([1, 3])
j = torch.tensor([4, 5])
print(t[i]) # selects rows 1 and 3
print(t[i, j]) # selects (1, 4) and (3, 5)
```
#### Type conversion
Each tensor has a set of convenient methods to convert types. For example, if we want to convert the tensor above to a 32-bit float tensor, we use the method `.float()`:
```
t = t.float() # converts to 32-bit float
print(t)
t = t.double() # converts to 64-bit float
print(t)
t = t.byte() # converts to unsigned 8-bit integer
print(t)
```
### Operations on tensors
Now that we know how to create tensors, let's create some of the fundamental tensors and see some common operations on them:
```
# Scalar: creates a tensor with a scalar
# (zero-th order tensor, i.e. just a number)
s = torch.tensor(42)
print(s)
```
**Tip**: a very convenient way to access scalars is with `.item()`:
```
s.item()
```
Let's see higher-order tensors – remember we can always inspect the dimensionality of a tensor using the `.size()` method:
```
# Row vector
x = torch.randn(1,3)
print("Row vector\n{}\nwith size {}".format(x, x.size()))
# Column vector
v = torch.randn(3,1)
print("Column vector\n{}\nwith size {}".format(v, v.size()))
# Matrix
A = torch.randn(3, 3)
print("Matrix\n{}\nwith size {}".format(A, A.size()))
```
A common operation is matrix-vector multiplication (and in general tensor-tensor multiplication). For example, the product $\mathbf{A}\mathbf{v} + \mathbf{b}$ is as follows:
```
u = torch.matmul(A, v)
print(u)
b = torch.randn(3,1)
y = u + b # we can also do torch.add(u, b)
print(y)
```
where we retrieve the expected result (a column vector of dimensions 3x1). We can of course compose operations:
```
s = torch.matmul(x, torch.matmul(A, v))
print(s.item())
```
There are many functions implemented for every tensor, and we encourage you to study the documentation. Some of the most common ones:
```
# common tensor methods (they also have the counterpart in
# the torch package, e.g. as torch.sum(t))
t = torch.randn(2,3)
t.sum(dim=0)
t.t() # transpose
t.numel() # number of elements in tensor
t.nonzero() # indices of non-zero elements
t.view(-1, 2) # reorganizes the tensor to these dimensions
t.squeeze() # removes size 1 dimensions
t.unsqueeze(0) # inserts a dimension
# operations in the package
torch.arange(0, 10) # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
torch.eye(3, 3) # creates a 3x3 matrix with 1s in the diagonal (identity in this case)
t = torch.arange(0, 3)
torch.cat((t, t)) # tensor([0, 1, 2, 0, 1, 2])
torch.stack((t, t)) # tensor([[0, 1, 2],
# [0, 1, 2]])
```
## GPU computation
Deep Learning frameworks take advantage of the powerful computational capabilities of modern graphic processing units (GPUs). GPUs were originally designed to perform frequent operations for graphics very efficiently and fast, such as linear algebra operations, which makes them ideal for our interests. PyTorch makes it very easy to use the GPU: the common scenario is to tell the framework that we want to instantiate a tensor with a type that makes it a GPU tensor, or move a given CPU tensor to the GPU. All the tensors that we have seen above are CPU tensors, and PyTorch has the counterparts for GPU tensors in the `torch.cuda` module. Let's see how this works.
A common way to explicitly declare the tensor type as a GPU tensor is through the use of the constructor method for tensor creation inside the `torch.cuda` module:
```
try:
t_gpu = torch.cuda.FloatTensor(3, 3) # creation of a GPU tensor
t_gpu.zero_() # initialization to zero
except TypeError as err:
print(err)
```
However, a more common approach that gives us flexibility is through the use of devices. A device in PyTorch refers to either the CPU (indicated by the string "cpu") or one of the possible GPU cards in the machine (indicated by the string "cuda:$n$", where $n$ is the index of the card). Let's create a random gaussian matrix using a method from the `torch` package, and set the computational device to be the GPU by specifying the `device` to be `cuda:0`, the first GPU card in our machine (this code will fail if you don't have a GPU, but we will work around that below):
```
try:
t_gpu = torch.randn(3, 3, device="cuda:0")
except AssertionError as err:
print(err)
t_gpu = None
t_gpu
```
As you can notice, the tensor now has the explicit device set to be a CUDA device, not a CPU device. Let's now create a tensor in the CPU and move it to the GPU:
```
# we could also state explicitly the device to be the
# CPU with torch.randn(3,3,device="cpu")
t = torch.randn(3, 3)
t
```
In this case, the device is the CPU, but PyTorch does not explicitly say that given that this is the default behavior. To copy the tensor to the GPU we use the `.to()` method that every tensor implements, passing the device as an argument. This method creates a copy in the specified device or, if the tensor already resides in that device, it returns the original tensor ([Lapan 2018](#References)):
```
try:
t_gpu = t.to("cuda:0") # copies the tensor from CPU to GPU
# note that if we do now t_to_gpu.to("cuda:0") it will
# return the same tensor without doing anything else
# as this tensor already resides on the GPU
print(t_gpu)
print(t_gpu.device)
except AssertionError as err:
print(err)
```
**Tip**: When we program PyTorch models, we will have to specify the device in several places (not so many, but definitely more than once). A good practice that is consistent across the implementation and makes the code more portable is to declare a device variable early in the code by querying the framework to check whether there is a GPU available that we can use. We can do this by writing
```
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```
We can then use `device` as an argument of the `.to()` method in the rest of our code:
```
# moves t to the device (this code will **not** fail if the
# local machine has not access to a GPU)
t.to(device)
```
**Side note**: having good GPU backend support is a critical aspect of a deep learning framework. Some models depend crucially on performing computations on a GPU. Most frameworks, including PyTorch, only provide good support for GPUs manufactured by Nvidia. This is mostly due to the heavy investment this company made in CUDA (Compute Unified Device Architecture), the underlying parallel computing platform that enables this type of scientific computing (and the reason for the device label), with specific implementations targeted at deep neural networks, such as cuDNN. Other GPU manufacturers, most notably AMD, are making efforts towards enabling ML computing in their cards, but their support is still partial.
## Neural network foundations
Computing gradients is a crucial feature in deep learning, given that the training procedure of neural networks relies on optimization techniques that update the parameters of the model by using the gradient information of a scalar magnitude – the loss function. How is it possible to compute the derivatives? There are different methods, namely
- **Symbolic Differentiation**: given a symbolic expression, the software provides the derivative by performing symbolic transformations (e.g. Wolfram Alpha). The benefits are clear, but it is not always possible to compute an analytical expression.
- **Numerical Differentiation**: computes the derivatives using expressions that are suitable to be evaluated numerically, using the finite differences method to several orders of approximation. A big drawback is that these methods are slow.
- **Automatic Differentiation**: a library adds to the set of functional primitives an implementation of the derivative for each of these functions. Thus, if the library contains the function $sin(x)$, it also implements the derivative of this function, $\frac{d}{dx}sin(x) = cos(x)$. Then, given a composition of functions, the library can compute the derivative with respect a variable by successive application of the chain rule, a method that is known in deep learning as backpropagation.
### Automatic differentiation
Modern deep learning libraries are capable of performing automatic differentiation. The two main approaches to computing the graph are _static_ and _dynamic_ processing ([Lapan 2018](#References)):
- **Static graphs**: the deep learning framework converts the computational graph into a static representation that cannot be modified. This allows the library developers to do very aggressive optimizations on this static graph ahead of computation time, pruning some areas and transforming others so that the final product is highly optimized and fast. The drawback is that some models can be really hard to implement with this approach. For example, TensorFlow uses static graphs. Having static graphs is part of the reason why TensorFlow has excellent support for sequence processing, which makes it very popular in NLP.
- **Dynamic graphs**: the framework does not create a graph ahead of computation, but records the operations that are performed, which can be quite different for different inputs. When it is time to compute the gradients, it unrolls the graph and perform the computations. A major benefit of this approach is that implementing complex models can be easier in this paradigm. This flexibility comes at the expense of the major drawback of this approach: speed. Dynamic graphs cannot leverage the same level of ahead-of-time optimization as static graphs, which makes them slower. PyTorch uses dynamic graphs as the underlying paradigm for gradient computation.
Here is a simple graph to compute $y = wx + b$ (from [Rao and McMahan 2019](#References)):
<img src="fig/simple_computation_graph.png" width=500 />
PyTorch computes the graph using the Autograd system. Autograd records a graph when performing the forward pass (function application), keeping track of all the tensors defined as inputs. These are the leaves of the graph. The output tensors are the roots of the graph. By navigating this graph from root to leaves, the gradients are automatically computed using the chain rule. In summary,
- Forward pass (the successive function application) goes from leaves to root. In PyTorch we do this by applying the input to the module, i.e., calling the module like a function.
- Once the forward pass is completed, Autograd has recorded the graph and the backward pass (chain rule) can be done. We call the `backward` method on the root of the graph, as in the sketch below.
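Here is a minimal sketch of Autograd in action for the graph $y = wx + b$ shown above (the tensor values are arbitrary):
```
# Leaf tensors: we ask Autograd to track gradients for them
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)
x = torch.tensor(3.0)

y = w * x + b      # forward pass: Autograd records the graph
y.backward()       # backward pass: the chain rule runs from the root (y) to the leaves

print(w.grad)      # dy/dw = x = 3.0
print(b.grad)      # dy/db = 1.0
```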
### Modules
The base implementation for all neural network models in PyTorch is the class `Module` in the package `torch.nn`:
```
import torch.nn as nn
```
All our models subclass this base `nn.Module` class, which provides an interface to important methods used for constructing and working with our models, and which contains sensible initializations for our models. Modules can contain other modules (and usually do).
Let's see a simple, custom implementation of a multi-layer feed forward network. In the example below, our simple mathematical model is
$$\mathbf{y} = \mathbf{U}(f(\mathbf{W}(\mathbf{x})))$$
where $f$ is a non-linear function (a `ReLU`), is directly translated into a similar expression in PyTorch. To do that, we simply subclass `nn.Module`, register the two affine transformations and the non-linearity, and implement their composition within the `forward` method:
```
class MyCustomModule(nn.Module):
def __init__(self, n_inputs, n_hidden, n_output_classes):
# call super to initialize the class above in the hierarchy
super(MyCustomModule, self).__init__()
# first affine transformation
self.W = nn.Linear(n_inputs, n_hidden)
# non-linearity (here it is also a layer!)
self.f = nn.ReLU()
# final affine transformation
self.U = nn.Linear(n_hidden, n_output_classes)
def forward(self, x):
y = self.U(self.f(self.W(x)))
return y
```
Then, we can use our new module as follows:
```
# set the network's architectural parameters
n_inputs = 3
n_hidden= 4
n_output_classes = 2
# instantiate the model
model = MyCustomModule(n_inputs, n_hidden, n_output_classes)
# create a simple input tensor
# size is [1,3]: a mini-batch of one example,
# this example having dimension 3
x = torch.FloatTensor([[0.3, 0.8, -0.4]])
# compute the model output by **applying** the input to the module
y = model(x)
# inspect the output
print(y)
```
As we see, the output is a tensor with its gradient function attached – Autograd tracks it for us.
**Tip**: modules overrides the `__call__()` method, where the framework does some work. Thus, instead of directly calling the `forward()` method, we apply the input to the model instead.
### Sequential
A powerful class in the `nn` package is `Sequential`, which allows us to express the code above more succinctly:
```
class MyCustomModule(nn.Module):
def __init__(self, n_inputs, n_hidden, n_output_classes):
super(MyCustomModule, self).__init__()
self.network = nn.Sequential(
nn.Linear(n_inputs, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, n_output_classes))
def forward(self, x):
y = self.network(x)
return y
```
As you can imagine, this can be handy when we have a large number of layers for which the actual names are not that meaningful. It also improves readability:
```
class MyCustomModule(nn.Module):
def __init__(self, n_inputs, n_hidden, n_output_classes):
super(MyCustomModule, self).__init__()
self.p_keep = 0.7
self.network = nn.Sequential(
nn.Linear(n_inputs, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, 2*n_hidden),
nn.ReLU(),
nn.Linear(2*n_hidden, n_output_classes),
# dropout argument is probability of dropping
nn.Dropout(1 - self.p_keep),
# applies softmax in the data dimension
nn.Softmax(dim=1)
)
def forward(self, x):
y = self.network(x)
return y
```
**Side note**: Another important package in `torch.nn` is `Functional`, typically imported as `F`. Functional contains many useful functions, from non-linear activations to convolutional, dropout, and even distance functions. Many of these functions have counterpart implementations as layers in the `nn` package so that they can be easily used in pipelines like the one above implemented using `nn.Sequential`.
```
import torch.nn.functional as F
y = F.relu(torch.FloatTensor([[-5, -1, 0, 5]]))
y
```
### Criteria and loss functions
PyTorch has implementations for the most common criteria in the `torch.nn` package. You may notice that, as with many of the other functions, there are two implementations of loss functions: the reference functions in `torch.nn.functional` and practical class in `torch.nn`, which are the ones we typically use. Probably the two most common ones are ([Lapan 2018](#References)):
- `nn.MSELoss` (mean squared error): squared $L_2$ norm used for regression.
- `nn.CrossEntropyLoss`: criterion used for classification as the result of combining `nn.LogSoftmax()` and `nn.NLLLoss()` (negative log likelihood), operating on the input scores directly. When possible, we recommend using this class instead of using a softmax layer plus a log conversion and `nn.NLLLoss`, given that the `LogSoftmax` implementation guards against common numerical errors, resulting in fewer numerical instabilities.
Once our model produces a prediction, we pass it to the criteria to obtain a measure of the loss:
```
# the true label (in this case, 2) from our dataset wrapped
# as a tensor of minibatch size of 1
y_gold = torch.tensor([1])
# our simple classification criterion for this simple example
criterion = nn.CrossEntropyLoss()
# forward pass of our model (remember, using apply instead of forward)
y = model(x)
# apply the criterion to get the loss corresponding to the pair (x, y)
# with respect to the real y (y_gold)
loss = criterion(y, y_gold)
# the loss contains a gradient function that we can use to compute
# the gradient dL/dw (gradient with respect to the parameters
# for a given fixed input)
print(loss)
```
### Optimization
Once we have computed the loss for a training example or minibatch of examples, we update the parameters of the model guided by the information contained in the gradient. The role of updating the parameters belongs to the optimizer, and PyTorch has a number of implementations available right away – and if you don't find your preferred optimizer as part of the library, chances are that you will find an existing implementation. Also, coding your own optimizer is indeed quite easy in PyTorch.
**Side Note** The following is a summary of the most common optimizers. It is intended to serve as a reference (I use this table myself quite a lot). In practice, most people pick an optimizer that has been proven to behave well on a given domain, but optimizers are also a very active area of research on numerical analysis, so it is a good idea to pay some attention to this subfield. We recommend using second-order dynamics with an adaptive time step:
- First-order dynamics
- Search direction only: `optim.SGD`
- Adaptive: `optim.RMSprop`, `optim.Adagrad`, `optim.Adadelta`
- Second-order dynamics
- Search direction only: Momentum `optim.SGD(momentum=0.9)`, Nesterov, `optim.SGD(nesterov=True)`
- Adaptive: `optim.Adam`, `optim.Adamax` (Adam with $L_\infty$)
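Regardless of which optimizer you pick, the usage pattern is the same. Here's a minimal sketch reusing `model`, `criterion`, `x`, and `y_gold` from the cells above:
```
# Create an optimizer and hand it the parameters it should update
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

y_hat = model(x)                  # forward pass
loss = criterion(y_hat, y_gold)   # how far off are we?
optimizer.zero_grad()             # clear gradients left over from the previous step
loss.backward()                   # backpropagation: compute gradients of the loss
optimizer.step()                  # update the parameters using those gradients
```
This is the same pattern used in the full training loop below.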
### Training a simple model
In order to illustrate the different concepts and techniques above, let's put them together in a very simple example: our objective will be to fit a very simple non-linear function, a sine wave:
$$y = a \sin(x + \phi)$$
where $a, \phi$ are the given amplitude and phase of the sine function. Our objective is to learn to adjust this function using a feed forward network, this is:
$$ \hat{y} = f(x)$$
such that the error between $y$ and $\hat{y}$ is minimal according to our criterion. A natural criterion is to minimize the squared distance between the actual value of the sine wave and the value predicted by our function approximator, measured using the $L_2$ norm.
**Side Note**: Although this example is easy, simple variations of this setting can pose a big challenge, and are used currently to illustrate difficult problems in learning, especially in a very active subfield known as meta-learning.
Let's import all the modules that we are going to need:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import numpy as np
import matplotlib.pyplot as plt
import math
```
Early on the code, we define the device that we want to use:
```
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
```
Let's fix $a=1$, $\phi=0$ and generate training data in the interval $x \in [0,2\pi)$ using NumPy:
```
M = 1200
# sample from the x axis M points
x = np.random.rand(M) * 2*math.pi
# add noise
eta = np.random.rand(M) * 0.01
# compute the function
y = np.sin(x) + eta
# plot
_ = plt.scatter(x,y)
# use the NumPy-PyTorch bridge
x_train = torch.tensor(x[0:1000]).float().view(-1, 1).to(device)
y_train = torch.tensor(y[0:1000]).float().view(-1, 1).to(device)
x_test = torch.tensor(x[1000:]).float().view(-1, 1).to(device)
y_test = torch.tensor(y[1000:]).float().view(-1, 1).to(device)
class SineDataset(data.Dataset):
def __init__(self, x, y):
super(SineDataset, self).__init__()
assert x.shape[0] == y.shape[0]
self.x = x
self.y = y
def __len__(self):
return self.y.shape[0]
def __getitem__(self, index):
return self.x[index], self.y[index]
sine_dataset = SineDataset(x_train, y_train)
sine_dataset_test = SineDataset(x_test, y_test)
sine_loader = torch.utils.data.DataLoader(
sine_dataset, batch_size=32, shuffle=True)
sine_loader_test = torch.utils.data.DataLoader(
sine_dataset_test, batch_size=32)
class SineModel(nn.Module):
def __init__(self):
super(SineModel, self).__init__()
self.network = nn.Sequential(
nn.Linear(1, 5),
nn.ReLU(),
nn.Linear(5, 5),
nn.ReLU(),
nn.Linear(5, 5),
nn.ReLU(),
nn.Linear(5, 1))
def forward(self, x):
return self.network(x)
# declare the model
model = SineModel().to(device)
# define the criterion
criterion = nn.MSELoss()
# select the optimizer and pass to it the parameters of the model it will optimize
optimizer = torch.optim.Adam(model.parameters(), lr = 0.01)
epochs = 1000
# training loop
for epoch in range(epochs):
for i, (x_i, y_i) in enumerate(sine_loader):
y_hat_i = model(x_i) # forward pass
loss = criterion(y_hat_i, y_i) # compute the loss and perform the backward pass
optimizer.zero_grad() # cleans the gradients
loss.backward() # computes the gradients
optimizer.step() # update the parameters
if epoch % 20:
plt.scatter(x_i.data.cpu().numpy(), y_hat_i.data.cpu().numpy())
# testing
with torch.no_grad():
model.eval()
total_loss = 0.
for k, (x_k, y_k) in enumerate(sine_loader_test):
y_hat_k = model(x_k)
loss_test = criterion(y_hat_k, y_k)
total_loss += float(loss_test)
print(total_loss)
```
## Reproducibility
```
def enforce_reproducibility(seed=42):
# Sets seed manually for both CPU and CUDA
torch.manual_seed(seed)
# For atomic operations there is currently
# no simple way to enforce determinism, as
# the order of parallel operations is not known.
#
# CUDNN
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# System based
np.random.seed(seed)
enforce_reproducibility()
```
The function `utils.fix_random_seeds()` extends the above to the random seeds for NumPy and the Python `random` library.
## References
Lapan, Maxim (2018) *Deep Reinforcement Learning Hands-On*. Birmingham: Packt Publishing
Rao, Delip and Brian McMahan (2019) *Natural Language Processing with PyTorch*. Sebastopol, CA: O'Reilly Media
| github_jupyter |
# How to create Popups
## Simple popups
You can define your popup at feature creation, but you can also overwrite it afterwards:
```
import folium
m = folium.Map([45, 0], zoom_start=4)
folium.Marker([45, -30], popup="inline implicit popup").add_to(m)
folium.CircleMarker(
location=[45, -10],
radius=25,
fill=True,
popup=folium.Popup("inline explicit Popup"),
).add_to(m)
ls = folium.PolyLine(
locations=[[43, 7], [43, 13], [47, 13], [47, 7], [43, 7]], color="red"
)
ls.add_child(folium.Popup("outline Popup on Polyline"))
ls.add_to(m)
gj = folium.GeoJson(
data={"type": "Polygon", "coordinates": [[[27, 43], [33, 43], [33, 47], [27, 47]]]}
)
gj.add_child(folium.Popup("outline Popup on GeoJSON"))
gj.add_to(m)
m
m = folium.Map([45, 0], zoom_start=2)
folium.Marker(
location=[45, -10],
popup=folium.Popup("Let's try quotes", parse_html=True, max_width=100),
).add_to(m)
folium.Marker(
location=[45, -30],
popup=folium.Popup(u"Ça c'est chouette", parse_html=True, max_width="100%"),
).add_to(m)
m
```
## Vega Popup
You may know that it's possible to create awesome Vega charts with (or without) `vincent`. If you're willing to put one inside a popup, it's possible thanks to `folium.Vega`.
```
import json
import numpy as np
import vincent
scatter_points = {
"x": np.random.uniform(size=(100,)),
"y": np.random.uniform(size=(100,)),
}
# Let's create the vincent chart.
scatter_chart = vincent.Scatter(scatter_points, iter_idx="x", width=600, height=300)
# Let's convert it to JSON.
scatter_json = scatter_chart.to_json()
# Let's convert it to dict.
scatter_dict = json.loads(scatter_json)
m = folium.Map([43, -100], zoom_start=4)
popup = folium.Popup()
folium.Vega(scatter_chart, height=350, width=650).add_to(popup)
folium.Marker([30, -120], popup=popup).add_to(m)
# Let's create a Vega popup based on scatter_json.
popup = folium.Popup(max_width=0)
folium.Vega(scatter_json, height=350, width=650).add_to(popup)
folium.Marker([30, -100], popup=popup).add_to(m)
# Let's create a Vega popup based on scatter_dict.
popup = folium.Popup(max_width=650)
folium.Vega(scatter_dict, height=350, width=650).add_to(popup)
folium.Marker([30, -80], popup=popup).add_to(m)
m
```
## Fancy HTML popup
```
import branca
m = folium.Map([43, -100], zoom_start=4)
html = """
<h1> This is a big popup</h1><br>
With a few lines of code...
<p>
<code>
from numpy import *<br>
exp(-2*pi)
</code>
</p>
"""
folium.Marker([30, -100], popup=html).add_to(m)
m
```
You can also put any HTML code inside a Popup, thanks to the `IFrame` object.
```
m = folium.Map([43, -100], zoom_start=4)
html = """
<h1> This popup is an Iframe</h1><br>
With a few lines of code...
<p>
<code>
from numpy import *<br>
exp(-2*pi)
</code>
</p>
"""
iframe = branca.element.IFrame(html=html, width=500, height=300)
popup = folium.Popup(iframe, max_width=500)
folium.Marker([30, -100], popup=popup).add_to(m)
m
import pandas as pd
df = pd.DataFrame(
data=[["apple", "oranges"], ["other", "stuff"]], columns=["cats", "dogs"]
)
m = folium.Map([43, -100], zoom_start=4)
html = df.to_html(
classes="table table-striped table-hover table-condensed table-responsive"
)
popup = folium.Popup(html)
folium.Marker([30, -100], popup=popup).add_to(m)
m
```
Note that you can put another `Figure` into an `IFrame`; this should let you do strange things...
```
# Let's create a Figure, with a map inside.
f = branca.element.Figure()
folium.Map([-25, 150], zoom_start=3).add_to(f)
# Let's put the figure into an IFrame.
iframe = branca.element.IFrame(width=500, height=300)
f.add_to(iframe)
# Let's put the IFrame in a Popup
popup = folium.Popup(iframe, max_width=2650)
# Let's create another map.
m = folium.Map([43, -100], zoom_start=4)
# Let's put the Popup on a marker, in the second map.
folium.Marker([30, -100], popup=popup).add_to(m)
# We get a map in a Popup. Not really useful, but powerful.
m
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Eight schools
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Eight_Schools"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The eight schools problem ([Rubin 1981](https://www.jstor.org/stable/1164617)) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem ([Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/), [Stan](https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started)) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.
The implementation below is an adaptation of an Edward 1.0 [tutorial](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb).
# Imports
```
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import warnings
tf.enable_v2_behavior()
plt.style.use("ggplot")
warnings.filterwarnings('ignore')
```
# The Data
From Bayesian Data Analysis, section 5.5 (Gelman et al. 2013):
> *A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple choice test administered by the Educational Testing Service and used to help colleges make admissions decisions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.*
For each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. As there was no prior belief that any of the schools were more or less similar or that any of the coaching programs would be more effective, we can consider the treatment effects as [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables).
```
num_schools = 8 # number of schools
treatment_effects = np.array(
[28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32) # treatment effects
treatment_stddevs = np.array(
[15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32) # treatment SE
fig, ax = plt.subplots()
plt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)
plt.title("8 Schools treatment effects")
plt.xlabel("School")
plt.ylabel("Treatment effect")
fig.set_size_inches(10, 8)
plt.show()
```
# Model
To capture the data, we use a hierarchical normal model. It follows the generative process,
$$
\begin{align*}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i \sim \text{Normal}\left(\text{loc}{=}\mu,\ \text{scale}{=}\tau \right) \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align*}
$$
where $\mu$ represents the prior average treatment effect and $\tau$ controls how much variance there is between schools. The $y_i$ and $\sigma_i$ are observed. As $\tau \rightarrow \infty$, the model approaches the no-pooling model, i.e., each school's treatment effect estimate is allowed to be more independent. As $\tau \rightarrow 0$, the model approaches the complete-pooling model, i.e., all of the school treatment effects are closer to the group average $\mu$. To restrict the standard deviation to be positive, we draw $\tau$ from a lognormal distribution (which is equivalent to drawing $\log(\tau)$ from a normal distribution).
Following [Diagnosing Biased Inference with Divergences](http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html), we transform the model above into an equivalent non-centered model:
$$
\begin{align*}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i' \sim \text{Normal}\left(\text{loc}{=}0,\ \text{scale}{=}1 \right) \\
& \theta_i = \mu + \tau \theta_i' \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align*}
$$
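As a quick numerical sanity check of this reparameterization (plain NumPy, separate from the TFP model below; the values of $\mu$ and $\tau$ here are arbitrary illustrative choices), we can confirm that $\mu + \tau\,\theta'$ with $\theta' \sim \text{Normal}(0, 1)$ has the same distribution as $\theta \sim \text{Normal}(\mu, \tau)$:
```
import numpy as np

rng = np.random.RandomState(0)
mu, tau = 4.0, 3.0                # arbitrary illustrative values
n = 200000

theta_centered = rng.normal(loc=mu, scale=tau, size=n)    # theta ~ Normal(mu, tau)
theta_prime = rng.normal(loc=0.0, scale=1.0, size=n)      # theta' ~ Normal(0, 1)
theta_noncentered = mu + tau * theta_prime                # theta = mu + tau * theta'

print(theta_centered.mean(), theta_centered.std())        # both close to (4, 3)
print(theta_noncentered.mean(), theta_noncentered.std())
```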
We reify this model as a [JointDistributionSequential](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/JointDistributionSequential) instance:
```
model = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=10., name="avg_effect"), # `mu` above
tfd.Normal(loc=5., scale=1., name="avg_stddev"), # `log(tau)` above
tfd.Independent(tfd.Normal(loc=tf.zeros(num_schools),
scale=tf.ones(num_schools),
name="school_effects_standard"), # `theta_prime`
reinterpreted_batch_ndims=1),
lambda school_effects_standard, avg_stddev, avg_effect: (
tfd.Independent(tfd.Normal(loc=(avg_effect[..., tf.newaxis] +
tf.exp(avg_stddev[..., tf.newaxis]) *
school_effects_standard), # `theta` above
scale=treatment_stddevs),
name="treatment_effects", # `y` above
reinterpreted_batch_ndims=1))
])
def target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):
"""Unnormalized target density as a function of states."""
return model.log_prob((
avg_effect, avg_stddev, school_effects_standard, treatment_effects))
```
# Bayesian Inference
Given data, we perform Hamiltonian Monte Carlo (HMC) to calculate the posterior distribution over the model's parameters.
```
num_results = 5000
num_burnin_steps = 3000
# Improve performance by tracing the sampler using `tf.function`
# and compiling it using XLA.
@tf.function(autograph=False, experimental_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
tf.zeros([], name='init_avg_effect'),
tf.zeros([], name='init_avg_stddev'),
tf.ones([num_schools], name='init_school_effects_standard'),
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.4,
num_leapfrog_steps=3))
states, kernel_results = do_sampling()
avg_effect, avg_stddev, school_effects_standard = states
school_effects_samples = (
avg_effect[:, np.newaxis] +
np.exp(avg_stddev)[:, np.newaxis] * school_effects_standard)
num_accepted = np.sum(kernel_results.is_accepted)
print('Acceptance rate: {}'.format(num_accepted / num_results))
fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')
fig.set_size_inches(12, 10)
for i in range(num_schools):
axes[i][0].plot(school_effects_samples[:,i].numpy())
axes[i][0].title.set_text("School {} treatment effect chain".format(i))
sns.kdeplot(school_effects_samples[:,i].numpy(), ax=axes[i][1], shade=True)
axes[i][1].title.set_text("School {} treatment effect distribution".format(i))
axes[num_schools - 1][0].set_xlabel("Iteration")
axes[num_schools - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
print("E[avg_effect] = {}".format(np.mean(avg_effect)))
print("E[avg_stddev] = {}".format(np.mean(avg_stddev)))
print("E[school_effects_standard] =")
print(np.mean(school_effects_standard[:, ]))
print("E[school_effects] =")
print(np.mean(school_effects_samples[:, ], axis=0))
# Compute the 95% interval for school_effects
school_effects_low = np.array([
np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)
])
school_effects_med = np.array([
np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)
])
school_effects_hi = np.array([
np.percentile(school_effects_samples[:, i], 97.5)
for i in range(num_schools)
])
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)
ax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)
ax.scatter(
np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)
plt.plot([-0.2, 7.4], [np.mean(avg_effect),
np.mean(avg_effect)], 'k', linestyle='--')
ax.errorbar(
np.array(range(8)),
school_effects_med,
yerr=[
school_effects_med - school_effects_low,
school_effects_hi - school_effects_med
],
fmt='none')
ax.legend(('avg_effect', 'HMC', 'Observed effect'), fontsize=14)
plt.xlabel('School')
plt.ylabel('Treatment effect')
plt.title('HMC estimated school treatment effects vs. observed data')
fig.set_size_inches(10, 8)
plt.show()
```
We can observe the shrinkage toward the group `avg_effect` above.
```
print("Inferred posterior mean: {0:.2f}".format(
np.mean(school_effects_samples[:,])))
print("Inferred posterior mean se: {0:.2f}".format(
np.std(school_effects_samples[:,])))
```
# Criticism
To get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$:
$$ p(y^*|y) \propto \int_\theta p(y^* | \theta)p(\theta |y)d\theta$$
we override the values of the random variables in the model to set them to the mean of the posterior distribution, and sample from that model to generate new data $y^*$.
```
sample_shape = [5000]
_, _, _, predictive_treatment_effects = model.sample(
value=(tf.broadcast_to(np.mean(avg_effect, 0), sample_shape),
tf.broadcast_to(np.mean(avg_stddev, 0), sample_shape),
tf.broadcast_to(np.mean(school_effects_standard, 0),
sample_shape + [num_schools]),
None))
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(predictive_treatment_effects[:, 2*i].numpy(),
ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect posterior predictive".format(2*i))
sns.kdeplot(predictive_treatment_effects[:, 2*i + 1].numpy(),
ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect posterior predictive".format(2*i + 1))
plt.show()
# The mean predicted treatment effects for each of the eight schools.
prediction = np.mean(predictive_treatment_effects, axis=0)
```
We can look at the residuals between the treatment effects data and the predictions of the model posterior. These correspond with the plot above which shows the shrinkage of the estimated effects toward the population average.
```
treatment_effects - prediction
```
Because we have a distribution of predictions for each school, we can consider the distribution of residuals as well.
```
residuals = treatment_effects - predictive_treatment_effects
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(residuals[:, 2*i].numpy(), ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect residuals".format(2*i))
sns.kdeplot(residuals[:, 2*i + 1].numpy(), ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect residuals".format(2*i + 1))
plt.show()
```
# Acknowledgements
This tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb)). We thank all contributors to writing and revising that version.
# References
1. Donald B. Rubin. Estimation in parallel randomized experiments. Journal of Educational Statistics, 6(4):377-401, 1981.
2. Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC, 2013.
# Introduction to Python
In this lesson we will learn the basics of the Python programming language (version 3). We won't learn everything about Python but enough to do some basic machine learning.
<img src="figures/python.png" width=350>
# Variables
Variables are objects in Python that can hold values such as numbers or text. Let's look at how to create some variables.
```
# Numerical example
x = 99
print (x)
x = 27 # Added numerical value of 27
print (x)
x=55 # Added numerical value of 55
print (x)
"""changed numerical values of x"""
# Text example
x = "learning to code is fun" # Changed text to "learning to code is fun" and "tomorrow"
print (x)
x="tomorrow"
print(x)
"""changed sentences and value of x. Modified spacing to see if it altered the output"""
# Variables can be used with each other
a = 2 # Changed values of a, b, and c
b = 298
c = a + b
print (c)
a = 3
b = 4
c = 27
d = 22
e = a + b + c + d
print (e)
"""Changed values of a, b, and c. Created additional values for new variables."""
```
Variables can come in lots of different types. Even within numerical variables, you can have integers (int), floats (float), etc. All text based variables are of type string (str). We can see what type a variable is by printing its type.
```
# int variable
x = 2
print (x)
print (type(x))
x = 1
print (x)
print (type(x))
# float variable
x = 7.7
print (x)
print (type(x))
x = 2.25
print (x)
print (type(x))
# text variable
x = "hello Sheri"
print (x)
print (type(x))
x = "Thunderstorms"
print (x)
print (type(x))
# boolean variable
x = False
print (x)
print (type(x))
x = True
print (x)
print (type(x))
"""Created new values for the variable x"""
```
It's good practice to know what types your variables are. When you want to use numerical operations on them, they need to be compatible.
```
# int variables
a = 6
b = 2
print (a + b)
# string variables
a = "6"
b = "2"
print (a + b)
a = "4"
b = "3"
c = "5"
print (a + b + c)
a = 4
b = 3
c = 5
print (a + b + c)
"""Changed existing value of int and string variables. Created new variables"""
```
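As a small illustration of the compatibility point above (the variable names are just examples), mixing an `int` and a `str` raises a `TypeError`, but converting explicitly with `int()` or `str()` works:
```
a = 6        # int
b = "2"      # str
# print (a + b)    # this line would raise a TypeError: int and str are not compatible
print (a + int(b)) # convert the string to an int first -> 8
print (str(a) + b) # or convert the int to a string -> "62"
```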
# Lists
Lists are objects in Python that can hold an ordered sequence of numbers **and** text.
```
# Creating a list
list_x = [2, "hello", 1]
print (list_x)
list_a = [1, "sheri lamb", 4]
print (list_a)
"""Created a new list a"""
# Adding to a list
list_x.append(7)
print (list_x)
list_a.append("tomorrow")
print (list_a)
"""Added 'tomorrow' to my list"""
# Accessing items at specific location in a list
print ("list_x[0]: ", list_x[0])
print ("list_x[1]: ", list_x[1])
print ("list_x[2]: ", list_x[2])
print ("list_x[-1]: ", list_x[-1]) # the last item
print ("list_x[-2]: ", list_x[-2]) # the second to last item
print ("list_x[5]:", list_x[5])
"""accessed item #5"""
# Slicing
print ("list_x[:]: ", list_x[:])
print ("list_x[2:]: ", list_x[2:])
print ("list_x[1:3]: ", list_x[1:3])
print ("list_x[:-1]: ", list_x[:-1])
print ("list_x[5:]: ", list_x[5:])
"""added #5 to # slicing"""
# Length of a list
len(list_x)
len(list_x)
len(list_a)
"""calculated the length of list_a"""
# Replacing items in a list
list_x[1] = "hi"
print (list_x)
list_a[1] = "yes"
print (list_a)
"""replaced item 1 with yes"""
# Combining lists
list_y = [2.4, "world"]
list_z = list_x + list_y
print (list_z)
list_h = [1, 2,"fire"]
list_i = [4, 7, "Stella"]
list_4 = list_h + list_i
print (list_4)
"""Created 2 new lists and combined them to create a third (list_4)"""
```
# Tuples
Tuples are also objects in Python that can hold data but you cannot replace their values (for this reason, tuples are called immutable, whereas lists are known as mutable).
```
# Creating a tuple
tuple_x = (3.0, "hello")
print (tuple_x)
tuple_y = (5.0, "Star")
print (tuple_y)
"""Created tuple y"""
# Adding values to a tuple
tuple_x = tuple_x + (5.6,)
print (tuple_x)
tuple_z = tuple_y + (2.4,)
print (tuple_z)
"""added 2.4 to tuple_z"""
# Trying to change a tuples value (you can't, this should produce an error.)
tuple_x[1] = "world"
tuple_z[1] = "sunrise"
"""attempted to change the value of tuple_z"""
```
# Dictionaries
Dictionaries are Python objects that hold key-value pairs. In the example dictionary below, the keys are the "name" and "eye_color" variables. They each have a value associated with them. A dictionary cannot have two of the same keys.
```
# Creating a dictionary
dog = {"name": "dog",
"eye_color": "brown"}
print (dog)
print (dog["name"])
print (dog["eye_color"])
MAC = {"brand": "MAC", "color": "red"}
print (MAC)
print (MAC["brand"])
print (MAC["color"])
"""Created a dictionary for MAC lipstick"""
# Changing the value for a key
dog["eye_color"] = "green"
print (dog)
MAC["color"] = "pink"
print (MAC)
"""Changed the lipstick color from red to pink"""
# Adding new key-value pairs
dog["age"] = 5
print (dog)
MAC["age"] = 1
print (MAC)
"""Added an aditional value (age)"""
# Length of a dictionary
print (len(dog))
print (len(MAC))
"""Calculated length of MAC dictionary"""
```
# If statements
You can use `if` statements to conditionally do something.
```
# If statement
x = 4
if x < 1:
score = "low"
elif x <= 4:
score = "medium"
else:
score = "high"
print (score)
x = 5
if x < 2:
score = "low"
elif x <= 5:
score = "medium"
else:
score = "high"
print (score)
x = 10
print (score)
x = 1
print (score)
"""Added additional if statements (x = 5)"""
# If statment with a boolean
x = True
if x:
print ("it worked")
y = False
if y:
print ("it did not work")
z = True
if z:
print ("it almost worked")
"""Created true / false boolean statements"""
```
# Loops
In Python, you can use a `for` loop to iterate over the elements of a sequence such as a list or tuple, or use a `while` loop to do something repeatedly as long as a condition holds.
```
# For loop
x = 2 # x variable will start at 2 instead of 1
for i in range(5): # goes from i=0 to i=4 range is 5 instead of 3
x += 1 # same as x = x + 1
print ("i={0}, x={1}".format(i, x)) # printing with multiple variables(
# Loop through items in a list
x = 2 # changed x variable to 2, now x will start at 3 instead of 2
for i in [0, 1, 2, 3, 4]: # added two additional numbers to the list
x += 1 # same as x = x +1
print ("i={0}, x={1}".format(i, x))
# While loop
x = 10 # Changed variable from 3 to 10
while x > 3: # Changed the condition to 3
x -= 1 # same as x = x - 1
print (x)
```
# Functions
Functions are a way to modularize reusable pieces of code.
```
# Create a function
def Shamel(x): # Redefined function's name
x += 5 # Changed value of x
return x
# Use the function
score = 1
score = Shamel(x=score)
print (score)
# Function with multiple inputs
def join_name (first_name, middle_name, last_name): # Re-defined function
joined_name = first_name + " " + middle_name + " " + last_name # Added middle name
return joined_name
# Use the function
first_name = "Sheri" # Change display information
middle_name = "Nicole"
last_name = "Lamb"
joined_name = join_name(first_name=first_name, middle_name=middle_name, last_name=last_name)
print (joined_name)
```
# Classes
Classes are a fundamental piece of object oriented programming in Python.
```
# Creating the class
class Cars(object): # Changed class to Cars
# Initialize the class
def __init__(self, brand, color, name): # Changed "species" to "brand"
self.brand = brand
self.color = color
self.name = name
# For printing
def __str__(self):
return "{0} {1} named {2}.".format(self.color, self.brand, self.name)
# Example function
def change_name(self, new_name):
self.name = new_name
# Creating an instance of a class
my_car = Cars(brand="Jeep", color="Spitfire Orange", name="Rover",) # Changed instances of car class
print (my_car)
print (my_car.name)
# Using a class's function
my_car.change_name(new_name="Sunshine") # Changes cars name
print (my_car)
print (my_car.name)
```
# Additional resources
This was a very quick look at Python and we'll be learning more in future lessons. If you want to learn more right now before diving into machine learning, check out this free course: [Free Python Course](https://www.codecademy.com/learn/learn-python)
# CHAPTER 14 - Probabilistic Reasoning over Time
### George Tzanetakis, University of Victoria
## WORKPLAN
The section number is based on the 4th edition of the AIMA textbook and is the suggested
reading for this week. Each list entry provides just the additional sections. For example, the Expected reading includes the sections listed under Basic as well as the sections listed under Expected. Some additional readings are suggested for Advanced.
1. Basic: Sections **14.1**, **14.3**, and **Summary**
2. Expected: Same as Basic plus 14.2
3. Advanced: The whole chapter, including the bibliographical and historical notes
## Time and Uncertainty
Agents operate over time. They need to maintain a **belief state** (a set of variables (or random variables) indexed by time) that represents which states of the world are currently possible. From the **belief** state and a transition model, the agent can predict how the world might evolve in the next time step. From the percepts observed and a **sensor** model, the agent can update the **belief state**.
* CSP: belief states are represented by variables with domains
* Logic: logical formulae describe which belief states are possible
* Probabilities: probability distributions describe which belief states are likely
* **Transition model:** describes the probability distribution of the variables at time $t$ given the state of the world at past times
* **Sensor model:** the probability of each percept at time $t$, given the current state of the world
* Dynamic Bayesian Networks
* Hidden Markov Models
* Kalman Filters
### States and Observations
In **discrete-time** models, the world is viewed as a series of **time slices**.
Each time slice contains a set of **random variables**, some observable and some not.
*Example scenario:* you are the security guard stationed at a secret underground installation.
You want to know whether it is raining today, but your only access to the outside world
occurs each morning when you see the director coming in with, or without an umbrella.
For each day $t$, the evidence set $E_t$ contains a single evidence variable $Umbrella_{t}$ or $U_t$.
The state set $S_t$ contains a single state variable $Rain_{t}$ or $R_t$.
<img src="images/rain_umbrella_hmm.png" width="75%"/>
### Transition and Sensor Models
**TRANSITION MODEL**
* General form: $P(X_t | X_{0:t-1})$
**Markov Assumption** (Andrei Markov, 1856-1922): the current state depends only on a fixed number of previous states
* First-order markov process: $P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})$
Time-homogeneous process: the conditional transition probabilities are the same for all time steps $t$.
A Markov chain is a sequence of random variables
$X_1, X_2, X_3, . . .$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:
* $P(X_{n+1} = x|X_{1} = x_1,X_2 = x_2,...,X_n = x_n) = P(X_{n+1} = x|X_n = x_n)$
<img src="images/markov.png" width="30%"/>
The possible values of $X_i$ form a countable set $S$ called the state space of the chain. A **Markov Chain** can be specified by a transition matrix with the probabilities of going from a particular state to another state at every time step.
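For example, a two-state weather chain (Sunny/Cloudy, like the one sampled later in this notebook) can be written as a 2×2 transition matrix; its $n$-th matrix power gives the $n$-step transition probabilities. A small sketch with illustrative numbers:
```
import numpy as np

# rows = current state, columns = next state; each row sums to 1
# states: 0 = Sunny, 1 = Cloudy
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

print(P.sum(axis=1))                   # [1. 1.] -> a valid transition matrix
print(np.linalg.matrix_power(P, 2))    # 2-step transition probabilities
print(np.linalg.matrix_power(P, 50))   # rows converge to the stationary distribution [0.4, 0.6]
```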
## Sensor model/observations
There are many application areas, for example speech recognition, in which we are interested in modeling probability distributions over sequences of observations. We will denote the observation at time $t$ by the variable $Y_t$. The variable can be a symbol from a discrete alphabet or a continuous variable, and we assume that the observations are sampled at discrete equally-spaced time intervals so $t$ can be an integer-valued time index.
## Inference in Temporal Models
* **Filtering:** we want to compute the posterior distribution over the current state, given all evidence to date. $P(X_t|e_{1:t})$. An almost identical calculation provides the likelihood of the evidence sequence $P(e_{1:T})$.
* **Prediction:** we want to compute the posterior distribution over a future state, given all evidence to date: $P(X_{t+k}|e_{1:t})$ for some $k > 0$.
* **Smoothing or hindsight:** computing the posterior distribution over a past state, given all evidence up to the present: $P(X_{t-k}|e_{1:t})$ for some $k < t$. It provides a better estimate of the state than what was available at the time, because it incorporates more evidence.
* **Most likely explanation:** Given a sequence of observations, we might wish to find the sequence of states that is most likely to have generated these observations. That is we wish to compute:
$argmax_{x_{1:t}} P(x_{1:t}|e_{1:t})$. This is the typical inference task in Speech Recognition using Hidden Markov Models.
### Sidenote: Speech Recognition
In phonology and linguistics, a phoneme is a unit of sound that can distinguish one word from another in a particular language. For example the english words **book** and **took** differ in one phoneme (the b vs t sound)
and contain the same two remaining phonemes, the **oo** sound and the **k** sound. There is a clear correspondence between the written alphabet symbols of a word and the corresponding phonemes, but in English there is a lot of confusing variation. For example, the written symbols **oo** correspond to a different phoneme in the word **door**. In languages like Spanish or Greek there is a stronger direct correspondence between the written symbols and phonemes, making it possible to "read" a Greek text without making phoneme errors even if you don't know the underlying words, something much harder to do in English.
The task of speech recognition is to take as input an audio recording a human talking and convert that recording to written words. It is possible to convert written words to sequences of phonemes and vice versa using a phonetic dictionary. For example check: http://www.speech.cs.cmu.edu/cgi-bin/cmudict
There are different symbolic representations for phonemes. For example, the International Phonetic Alphabet is an alphabetic system of phonetic notation based primarily on the Latin script that tries to cover the sounds of all languages around the world. Interesting sidenote: all babies are born with the ability to recognize and also reproduce all phonemes, but as they age in a particular linguistic environment their ability gets restricted/pruned to the phonemes of the particular languages they are exposed to.
So once we have the phonetic dictionary our task becomes to convert an audio recording of a human voice to a sequence of phonemes that can then be converted to written words using a phonetic dictionary.
Without going into details we form different phonemes by appropriately shaping our mouths and tongue and using our vocal folds to produce pitched and unpitched phonemes/sounds (vowels and consonants). It is possible to compute features such as **Mel-Frequency Cepstral Coefficients (MFCC)** using Digital Signal Processing techniques that characterizes these configurations over short intervals of time (typically 20-40 milliseconds).
So now, the task of automatic speech recognition becomes given a time sequence of feature vectors (computed from the audio recording) find the most likely sequence of phonemes that produced that sequence of feature vectors.
Phonemes and especially vowels can have different durations so a particular word can be represented as a sequence of states corresponding to phonemes with repetitions. For example for the word **book** we might have the following sequence: $b,b,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,k,k$ with informal state notation corresponding to the phonemes. Further complicating our task is the fact that depending on speakers and inflection there are many possible ways to render a particular phoneme. So we can also think of each phoneme as a distribution of feature vectors.
So let's look at some possible approaches to solve this problem in order of increasing complexity but
also improved accuracy:
1. We can train a classifier that, given a feature vector, predicts the corresponding phoneme. However this approach does not take into account that different phonemes have different probabilities (for example the phoneme corresponding to the written symbol $z$ is less likely than the phoneme corresponding to the vowel $a$ as in the word apple), that different phonemes have different typical durations (for example vowels tend to be longer than consonants), and that certain transitions between phonemes are very unlikely if not impossible (for example $z$ followed by $b$) whereas others are much more common (for example $r$ followed by $a$ as in the word apple).
2. We can model the probabilities of different phonemes and their transitions as a first-order Markov chain where the state is the phoneme, and the observation output of each state can then be modeled as a continuous probability distribution over the **MFCC** feature space. That way duration and transition information is taken into account when performing automatic speech recognition.
Automatic Speech Recognition Systems based on Hidden Markov Models (HMMs) dominated the field for about 20 years until they were superseded by deep learning models in the last decade or so. They are still widely used especially in situations with restricted computational resources where deep learning systems are not practical.
## Hidden Markov Models
Properties:
* The observation at time $t$ is generated by some random process whose state $S_t$ is hidden from the observer.
* The hidden states form a **Markov Chain** i.e given the value of $S_{t−1}$, the current state $S_t$ is independent of all states prior to $t − 1$. The outputs also satisfy a Markov property which is that given state $S_t$, the observation $Y_t$ is independent of all previous states and observations.
* The hidden state variable $S_t$ is discrete
We can write the joint distribution of a sequence of states and observations by using the Markov assumptions to factorize:
* $P(S_{1:T},Y_{1:T}) = P(S_1)P(Y_1|S_1) \prod_{t=2}^{T}P(S_t|S_{t-1})P(Y_t|S_t)$
where the notation $X_{1:T}$ indicates the sequence $X_1,X_2,...,X_T$.
We can view the Hidden Markov Model graphically as a Bayesian network by unrolling over time - think of the HMM as a template for generating a Bayesian Network and the corresponding CPTs over time. In fact, it is possible
to perform the temporal inference tasks using exact or approximate inference of the corresponding Bayesian network but for **HMMs** there are significantly more efficient algorithms.
<img src="images/hmm2bayesnet.png" width="50%"/>
### Specifying an HMM
So all we need to do to specify an HMM are the following components:
* A probability distribution over the intial state $P(S_1)$
* The $K$ by $K$ state transition matrix $P(S_t|S_{t-1})$, where $K$ is the number of states
* The $K$ by $L$ emission matrix $P(Y_t|S_t)$ if $Y_t$ is discrete and has $L$ values, or the parameters $\theta_t$ of some form of continuous probability density function if $Y_t$ is continuous.
### Learning the transition and sensor models
In addition to these tasks, we need methods for learning the transition and sensor models from observations. The basic idea is that inference provides an estimate of what transitions actually occurred and what states generated the observations. These estimates can then be used to update the models and the process can be repeated. This is an instance of the expectation-maximization (EM) algorithm. We will talk about learning probabilistic models in Chapter 20 Learning Probabilistic Models.
### Sketch of filtering and prediction (Forward)
We perform recursive estimation. First the current state distribution is projected forward from $t$ to $t + 1$. Then it is updated using the new evidence $e_{t+1}$. We will not cover the details but it can be done by recursive application of Bayes rule and the Markov property of evidence and the sum/product rules.
We can think of the filtered estimate $P(X_t|e_{1:t})$ as a “message” that is propagated forward along the sequence, modified by each transition, and updated by each new observation.
### Sketch of smoothing (Backward)
There are two parts to computing the distribution over past states given evidence up to the present. The first is the evidence up to $k$, and then the evidence from $k + 1$ to $t$. The forward message can be computed by filtering from $1$ to $k$. Using conditional independence and the sum and product rules we can form a backward message that runs backwards from $t$. It is possible to combine both steps in one pass to smooth the entire sequence. This is, not surprisingly, called the **Forward-Backward** algorithm.
### Finding the most likely sequence
View each sequence of states as a path through a graph whose nodes are the possible states at each time step. The task is to find the most likely path through this graph, where the likelihood of any path is the product of the transition probabilities along the path and the probabilities of the given observations at each state. Because of the **Markov** property there is a recursive relationship between the most likely paths to each state $x_{t+1}$ and the most likely paths to each state $x_t$. By running forward along the sequence and computing a message at each time step, we will have the probability of the most likely sequence reaching each of the final states. Then we simply select the most likely one. This is called the **Viterbi** algorithm.
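A minimal NumPy sketch of this recursion for the umbrella world (using the transition and sensor values from the filtering example at the end of this notebook; the function name and the 0/1 encoding of the observations are just illustrative choices):
```
import numpy as np

# states: 0 = Rain, 1 = NoRain; observations: 0 = umbrella, 1 = no umbrella
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])          # T[i, j] = P(X_t = j | X_{t-1} = i)
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # E[i, e] = P(e | X_t = i)
prior = np.array([0.5, 0.5])

def viterbi(evidence):
    """Most likely state sequence for a list of observations (0/1)."""
    m = prior * E[:, evidence[0]]            # message for t = 1
    back = []
    for e in evidence[1:]:
        trans = m[:, None] * T               # trans[i, j] = m_i * P(j | i)
        back.append(trans.argmax(axis=0))    # best predecessor for each state j
        m = trans.max(axis=0) * E[:, e]
    path = [int(m.argmax())]                 # most likely final state
    for b in reversed(back):                 # follow the back-pointers
        path.append(int(b[path[-1]]))
    return list(reversed(path))

print(viterbi([0, 0, 1, 0, 0]))   # 0 = Rain, 1 = NoRain
```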
### Markov Chains and Hidden Markov Models Example
We start with random variables and a simple independent, identically distributed model for weather. Then we look into how to form a Markov Chain to transition between states and finally we sample a Hidden Markov Model to show how the samples are generated based on the Markov Chain of the hidden states. The results are visualized as strips of colored rectangles. Experiments with the transition probabilities and the emission probabilities can lead to better understanding of how Hidden Markov Models work in terms of generating data.
```
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import stats
import numpy as np
from hmmlearn import hmm
class Random_Variable:
def __init__(self, name, values, probability_distribution):
self.name = name
self.values = values
self.probability_distribution = probability_distribution
if all(type(item) is np.int64 for item in values):
self.type = 'numeric'
self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution))
elif all(type(item) is str for item in values):
self.type = 'symbolic'
self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution))
self.symbolic_values = values
else:
self.type = 'undefined'
def sample(self,size):
if (self.type =='numeric'):
return self.rv.rvs(size=size)
elif (self.type == 'symbolic'):
numeric_samples = self.rv.rvs(size=size)
mapped_samples = [self.values[x] for x in numeric_samples]
return mapped_samples
def probs(self):
return self.probability_distribution
def vals(self):
print(self.type)
return self.values
```
### Generating random weather samples with a IID model with no time dependencies
Let's first create some random samples of a symbolic random variable corresponding to the weather with two values Sunny (S) and cloudy (C) and generate random weather for 365 days. The assumption in this model is that the weather of each day is indepedent of the previous days and drawn from the same probability distribution.
```
values = ['S', 'C']
probabilities = [0.9, 0.1]
weather = Random_Variable('weather', values, probabilities)
samples = weather.sample(365)
print(",".join(samples))
```
Now let lets visualize these samples using yellow for sunny and grey for cloudy
```
state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'
def plot_weather_samples(samples, state2color, title):
colors = [state2color[x] for x in samples]
x = np.arange(0, len(colors))
y = np.ones(len(colors))
plt.figure(figsize=(10,1))
plt.bar(x, y, color=colors, width=1)
plt.title(title)
plot_weather_samples(samples, state2color, 'iid')
```
### Markov Chain
Now, instead of independently sampling the weather random variable, let's form a Markov chain. The Markov chain will start at a particular state and then will either stay in the same state or transition to a different state based on a transition probability matrix. To accomplish that, we create a random variable for each row of the transition matrix, corresponding to the probabilities of the transitions emanating from the state associated with that row. Then we can use the Markov chain to generate sequences of samples and contrast these sequences with the iid weather model. By adjusting the transition probabilities you can control, in a probabilistic way, the lengths of "stretches" of the same state.
```
def markov_chain(transmat, state, state_names, samples):
(rows, cols) = transmat.shape
rvs = []
values = list(np.arange(0,rows))
# create random variables for each row of transition matrix
for r in range(rows):
rv = Random_Variable("row" + str(r), values, transmat[r])
rvs.append(rv)
# start from initial state and then sample the appropriate
# random variable based on the state following the transitions
states = []
for n in range(samples):
state = rvs[state].sample(1)[0]
states.append(state_names[state])
return states
# transition matrices for the Markov Chain
transmat1 = np.array([[0.7, 0.3],
[0.2, 0.8]])
transmat2 = np.array([[0.9, 0.1],
[0.1, 0.9]])
transmat3 = np.array([[0.5, 0.5],
[0.5, 0.5]])
state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'
# plot the iid model too
samples = weather.sample(365)
plot_weather_samples(samples, state2color, 'iid')
samples1 = markov_chain(transmat1,0,['S','C'], 365)
plot_weather_samples(samples1, state2color, 'markov chain 1')
samples2 = markov_chain(transmat2,0,['S','C'],365)
plot_weather_samples(samples2, state2color, 'markov chain 2')
samples3 = markov_chain(transmat3,0,['S','C'], 365)
plot_weather_samples(samples3, state2color, 'markov_chain 3')
```
### Generating samples using a Hidden Markov Model
Let's now look at how a Hidden Markov Model works by having a Markov chain generate a sequence of states, with each state having a different emission probability. When sunny we will output yellow or red with higher probabilities, and when cloudy blue or grey. First we will write the code directly and then we will use the hmmlearn package.
```
state2color = {}
state2color['S'] = 'yellow'
state2color['C'] = 'grey'
# generate random samples for a year
samples = weather.sample(365)
states = markov_chain(transmat1,0,['S','C'], 365)
plot_weather_samples(states, state2color, "markov chain 1")
# create two random variables one of the sunny state and one for the cloudy
sunny_colors = Random_Variable('sunny_colors', ['y', 'r', 'b', 'g'],
[0.6, 0.3, 0.1, 0.0])
cloudy_colors = Random_Variable('cloudy_colors', ['y', 'r', 'b', 'g'],
[0.0, 0.1, 0.4, 0.5])
def emit_obs(state, sunny_colors, cloudy_colors):
if (state == 'S'):
obs = sunny_colors.sample(1)[0]
else:
obs = cloudy_colors.sample(1)[0]
return obs
# iterate over the sequence of states and emit color based on the emission probabilities
obs = [emit_obs(s, sunny_colors, cloudy_colors) for s in states]
obs2color = {}
obs2color['y'] = 'yellow'
obs2color['r'] = 'red'
obs2color['b'] = 'blue'
obs2color['g'] = 'grey'
plot_weather_samples(obs, obs2color, "Observed sky color")
# let's zoom in a month
plot_weather_samples(states[0:30], state2color, 'states for a month')
plot_weather_samples(obs[0:30], obs2color, 'observations for a month')
```
### Multinomial HMM
Let's do the same generation process using the multinomial HMM model supported by the *hmmlearn* Python package.
```
transmat = np.array([[0.7, 0.3],
[0.2, 0.8]])
start_prob = np.array([1.0, 0.0])
# yellow and red have high probs for sunny
# blue and grey have high probs for cloudy
emission_probs = np.array([[0.6, 0.3, 0.1, 0.0],
[0.0, 0.1, 0.4, 0.5]])
model = hmm.MultinomialHMM(n_components=2)
model.startprob_ = start_prob
model.transmat_ = transmat
model.emissionprob_ = emission_probs
# sample the model - X is the observed values
# and Z is the "hidden" states
X, Z = model.sample(365)
# we have to re-define state2color and obj2color as the hmm-learn
# package just outputs numbers for the states
state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'states')
samples = [item for sublist in X for item in sublist]
obj2color = {}
obj2color[0] = 'yellow'
obj2color[1] = 'red'
obj2color[2] = 'blue'
obj2color[3] = 'grey'
plot_weather_samples(samples, obj2color, 'observations')
```
### Estimating the parameters of an HMM
Let's sample the generative HMM and get a sequence of 10000 observations. Now we can learn, in an unsupervised way, the parameters of a two-component multinomial HMM just using these observations. Then we can compare the learned parameters with the original parameters of the model used to generate the observations. Notice that the order of the components is different between the original and estimated models. Notice that hmmlearn does NOT directly support supervised training where you have both the labels and observations. It is possible to initialize an HMM model with some of the parameters and learn the others. For example you can initialize the transition matrix and learn the emission probabilities. That way you could implement supervised learning for a multinomial HMM. In many practical applications the hidden labels are not available and that's the hard case that is actually implemented in hmmlearn.
The following two cells take a few minutes to compute on a typical laptop.
```
# generate the samples
X, Z = model.sample(10000)
# learn a new model
estimated_model = hmm.MultinomialHMM(n_components=2, n_iter=10000).fit(X)
```
Let's compare the estimated model parameters with the original model.
```
print("Transition matrix")
print("Estimated model:")
print(estimated_model.transmat_)
print("Original model:")
print(model.transmat_)
print("Emission probabilities")
print("Estimated model")
print(estimated_model.emissionprob_)
print("Original model")
print(model.emissionprob_)
```
### Predicting a sequence of states given a sequence of observations
We can also use the trained HMM model to predict a sequence of hidden states given a sequence of observations. This is the task of maximum likelihood sequence estimation. For example in Speech Recognition it would correspond to estimating a sequence of phonemes (hidden states) from a sequence of observations (acoustic vectors).
This cell also takes a few minutes to compute. Note that whether the predicted or flipped predicted states correspond to the original depends on which state is selected as state 0 and state 1. So sometimes when you run the notebook the predicted states will be the right color, and sometimes the flipped states will be the right ones.
```
Z2 = estimated_model.predict(X)
state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'Original states')
plot_weather_samples(Z2, state2color, 'Predicted states')
# note the reversal of colors for the states as the order of components is not the same.
# we can easily fix this by change the state2color
state2color = {}
state2color[1] = 'yellow'
state2color[0] = 'grey'
plot_weather_samples(Z2, state2color, 'Flipped Predicted states')
```
The estimated model can be sampled just like the original model
```
X, Z = estimated_model.sample(365)
state2color = {}
state2color[0] = 'yellow'
state2color[1] = 'grey'
plot_weather_samples(Z, state2color, 'states generated by estimated model ')
samples = [item for sublist in X for item in sublist]
obs2color = {}
obs2color[0] = 'yellow'
obs2color[1] = 'red'
obs2color[2] = 'blue'
obs2color[3] = 'grey'
plot_weather_samples(samples, obs2color, 'observations generated by estimated model')
```
### An example of filtering
<img src="images/rain_umbrella_hmm.png" width="75%"/>
* Day 0: no observations $P(R_0) = <0.5, 0.5>$
* Day 1: let's say umbrella appears, $U_{1} = true$.
* The prediction step from $t=0$ to $t=1$ is
$P(R_1) = \sum_{r_0} P(R_1 | r_0) P(r_0) = \langle 0.7, 0.3 \rangle \times 0.5 + \langle 0.3, 0.7 \rangle \times 0.5 = \langle 0.5, 0.5\rangle $
* The update step simply multiplies the probability of the evidence for $t=1$ and normalizes:
$P(R_1|u_1) = \alpha P(u_{1} | R_{1}) P(R_1) = \alpha \langle 0.9, 0.2 \rangle \times \langle 0.5, 0.5 \rangle = \alpha \langle 0.45, 0.1 \rangle \approx \langle 0.818, 0.182 \rangle$
* Day 2: let's say umbrella appears, $U_{2} = true$.
    * Prediction step from $t=1$ to $t=2$: $P(R_2 | u_1) = \sum_{r_1} P(R_2 | r_1) P(r_1 | u_1) = \langle 0.7, 0.3 \rangle \times 0.818 + \langle 0.3, 0.7 \rangle \times 0.182 \approx \langle 0.627, 0.373 \rangle$
    * Updating with the evidence for $t=2$ gives: $P(R_2 | u_1, u_2) = \alpha P(u_2 | R_2)P(R_2|u_1) = \alpha \langle 0.9, 0.2 \rangle \times \langle 0.627, 0.373 \rangle = \alpha \langle 0.565, 0.075 \rangle \approx \langle 0.883, 0.117 \rangle$
Intuitively, the probability of rain increases from day 1 to day 2 because the rain persists.
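The same two steps can be reproduced in a few lines of NumPy as a check of the numbers above (the array names are arbitrary; the probabilities are the ones from the figure):
```
import numpy as np

T = np.array([[0.7, 0.3],            # P(R_t | R_{t-1} = true)  = 0.7
              [0.3, 0.7]])           # P(R_t | R_{t-1} = false) = 0.3
P_umbrella = np.array([0.9, 0.2])    # P(u_t | R_t = true), P(u_t | R_t = false)

belief = np.array([0.5, 0.5])        # P(R_0)
for day in (1, 2):                   # umbrella observed on both days
    predicted = T.T @ belief                 # prediction step
    updated = P_umbrella * predicted         # weight by the evidence
    belief = updated / updated.sum()         # normalize (the alpha above)
    print("Day", day, ":", belief.round(3))
# Day 1 : [0.818 0.182]
# Day 2 : [0.883 0.117]
```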
# <span style="color:green"> Numerical Simulation Laboratory (NSL) </span>
## <span style="color:blue"> Numerical exercises 10</span>
### Exercise 10.1
By adapting your Genetic Algorithm code, developed during the Numerical Exercise 9, write a C++ code to solve the TSP with a **Simulated Annealing** (SA) algorithm. Apply your code to the optimization of a path among
- 30 cities randomly placed on a circumference
Show your results via:
- a picture of the length of the best path as a function of the iteration of your algorithm
- a picture of the best path
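For reference, the cell below is a minimal Python sketch of the simulated-annealing loop itself; the swap move, geometric cooling schedule, and parameter values are illustrative assumptions, while the results analysed in this notebook come from the compiled C++ program `genetic.exe`.
```
import numpy as np

def path_length(cities, order):
    """Total length of the closed tour that visits `cities` in `order`."""
    p = cities[order]
    return np.sum(np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1))

def anneal(cities, T0=2.0, Tmin=1e-3, cooling=0.999, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(cities))
    L = path_length(cities, order)
    T = T0
    while T > Tmin:
        i, j = rng.integers(0, len(cities), size=2)
        new = order.copy()
        new[i], new[j] = new[j], new[i]                 # propose: swap two cities
        dL = path_length(cities, new) - L
        if dL < 0 or rng.random() < np.exp(-dL / T):    # Metropolis acceptance rule
            order, L = new, L + dL
        T *= cooling                                    # geometric cooling schedule
    return order, L

# 30 cities on the unit circumference; the optimal tour length is close to 2*pi
angles = np.random.default_rng(1).uniform(0, 2 * np.pi, 30)
cities = np.c_[np.cos(angles), np.sin(angles)]
print(anneal(cities)[1])   # achieved tour length (approaches ~2*pi with a slower schedule)
```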
```
import numpy as np
import matplotlib.pyplot as plt
from os import system
from matplotlib.pyplot import figure
from time import time
start = time()
```
# CIRCUMFERENCE
```
form = 0
system('sh clean.sh')
system('./genetic.exe '+str(form))
results = np.loadtxt("results.txt", skiprows=1)
temps = results[:,0]
paths = results[:,1]
figure(figsize=(10,7), dpi=70)
plt.plot(temps, paths)
plt.axhline(2*np.pi, color='red', label='2π')
plt.xlabel("Temperature")
plt.ylabel("Path lengths")
plt.legend(loc='best')
plt.grid(True)
plt.show()
print('-- Best path reached:', paths[-1])
positions = np.loadtxt("best_conf.txt", skiprows=1)
x = positions[:,0]
y = positions[:,1]
figure(figsize=(10,10), dpi=70)
plt.plot(x, y, marker="o")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
# SQUARE
```
form = 1
system('sh clean.sh')
system('./genetic.exe '+str(form))
results = np.loadtxt("results.txt", skiprows=1)
temps = results[:,0]
paths = results[:,1]
figure(figsize=(10,7), dpi=70)
plt.plot(temps, paths)
plt.xlabel("Temperature")
plt.ylabel("Path lengths")
plt.grid(True)
plt.show()
print('-- Best path reached:', paths[-1])
positions = np.loadtxt("best_conf.txt", skiprows=1)
x = positions[:,0]
y = positions[:,1]
figure(figsize=(10,10), dpi=70)
plt.plot(x, y, marker="o")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
end = time()
print("-- Time for computation: ", int((end-start)*100.)/100., 'sec')
```
### Exercise 10.2
Parallelize with MPI libraries your Simulated Annealing code in order to solve the TSP by performing a *Random Search* with **parallel SA searches of the optimal path**:
each node should perform an independent SA search and only in the end you will compare the results of each node.
Apply your code to the *usual* TSP problems above.
_I run this only on the square, because the circumference is too easy and every node gets the same result._
_This part of the exercise has been run separately using MPI. I didn't dare to attempt to use ```os.system``` with it._
```
best_conf = np.zeros((4, 31, 2))
results = np.zeros((4, 1000, 2))
for rank in range(0,4):
init = np.loadtxt("Parallel/best_conf"+str(rank)+".txt", skiprows=1)
init_2 = np.loadtxt("Parallel/results"+str(rank)+".txt", skiprows=1)
best_conf[rank, :, :] = init[:,:]
results[rank,:,:] = init_2[:,:]
for rank in range(0,4):
figure(figsize=(10,5), dpi=100)
plt.subplot(1,2,1)
plt.plot(best_conf[rank,:,0], best_conf[rank, :, 1], marker="o")
plt.title("Path rank: "+str(rank))
plt.xlabel("x")
plt.ylabel("y")
plt.subplot(1,2,2)
plt.plot(results[rank,:,0], results[rank,:,1])
plt.title("Path length to temperature, rank: "+str(rank))
plt.xlabel("Temperature")
plt.ylabel("Path length")
plt.show()
print("BEST: ", results[rank,-1,1], "------------ \n \n")
```
# OpEn Rust Examples: with General Gradient Function
In this example, we are going to use a function that can obtain the gradient of any given function numerically. This sort of function was used in the relaxed_ik Rust version. Now, we are going to apply this approach to [the previous example that we implemented before](https://github.com/inmo-jang/optimisation_tutorial/blob/master/tools_examples/OpEn/examples_rust/OpEn_Rust_examples_obs_avoidance_simplified.ipynb).
## Problem Formulation (Remind from the previous example)
Minimise $$f(\mathbf{u}) = u_2$$
<div style="text-align: right"> (P1) </div>
subject to
$$ \psi_{O}(\mathbf{x}) = [1 - (\mathbf{u} - \mathbf{c})^{\top}(\mathbf{u} - \mathbf{c})]_{+} = 0$$
<div style="text-align: right"> (P1C) </div>
$$ u_2 = p_1 \cdot (u_1 - p_2)^2 + p_3$$
<div style="text-align: right"> (P2C) </div>
## OpEn Implementation
First, we need to import "optimization_engine". Also, we use "nalgebra" for linear algebra calculation.
- In your local PC, it should be also declared in "Cargo.toml".
- Instead, in this jupyter notebook, we need to have "extern crate" as follows.
```
extern crate optimization_engine;
use optimization_engine::{
alm::*,
constraints::*, panoc::*, *
};
extern crate nalgebra;
use nalgebra::base::{*};
// use nalgebra::base::{Matrix4, Matrix4x2, Matrix4x1};
// use std::cmp;
```
#### Problem Master Class
You should note that `AlmFactory` requires `f` and `df` with the signatures `fn f(u: &[f64], cost: &mut f64) -> Result<(), SolverError>` and `fn df(u: &[f64], grad: &mut [f64]) -> Result<(), SolverError>`, respectively. This means it is convenient to have a master class that can simply return the `f` or `df` values. Such an architecture is used in the [`relaxed_ik` Rust version](https://github.com/uwgraphics/relaxed_ik/blob/dev/src/RelaxedIK_Rust/src/bin/lib/groove/objective_master.rs), which is a good example worth having a look at. In this example as well, we are going to implement a problem master class as follows.
```
pub struct ProblemMaster{
p_obs: Matrix2x1<f64>, // Obstacle Position
p: Vec<f64> // parameters (slice)
}
impl ProblemMaster{
pub fn init(_p: Vec<f64>, _p_obs: Matrix2x1<f64>) -> Self {
let p = _p;
let p_obs = _p_obs;
Self{p, p_obs}
}
// Cost function
pub fn f_call(&self, u: &[f64]) -> f64{
let cost = u[1];
cost
}
pub fn f(&self, u: &[f64], cost: &mut f64){
*cost = self.f_call(u);
}
// Gradient of the cost function
pub fn df(&self, u: &[f64], grad: &mut [f64]){
let mut f_0 = self.f_call(u);
for i in 0..u.len() {
let mut u_h = u.to_vec();
u_h[i] += 0.000001;
let f_h = self.f_call(u_h.as_slice());
grad[i] = (-f_0 + f_h) / 0.000001;
}
}
// F1 Constraint
pub fn f1_call(&self, u: &[f64])-> Vec<f64> {
let mut f1u = vec![0.0; u.len()];
f1u[0] = (1.0 - (u[0]-self.p_obs[(0,0)]).powi(2) - (u[1]-self.p_obs[(1,0)]).powi(2) ).max(0.0);
f1u[1] = self.p[0]*(u[0] - self.p[1]).powi(2) + self.p[2] - u[1];
return f1u;
}
pub fn f1(&self, u: &[f64], f1u: &mut [f64]){
let mut f1u_vec = self.f1_call(u);
for i in 0..f1u_vec.len(){
f1u[i] = f1u_vec[i];
}
}
// Jacobian of F1
pub fn jf1_call(&self, u: &[f64])-> Matrix2<f64> {
let mut jf1 = Matrix2::new(0.0, 0.0,
0.0, 0.0);
let mut f1_0 = self.f1_call(u);
for i in 0..f1_0.len(){
for j in 0..u.len() {
let mut u_h = u.to_vec();
u_h[j] += 0.000001;
let f_h = self.f1_call(u_h.as_slice());
jf1[(i,j)] = (-f1_0[i] + f_h[i]) / 0.000001;
}
}
return jf1;
}
// Jacobian Product (JF_1^{\top}*d)
pub fn f1_jacobian_product(&self, u: &[f64], d: &[f64], res: &mut [f64]){
let test = self.f1_call(u);
let mut jf1_matrix = self.jf1_call(u);
if test[0] < 0.0{ // Outside the obstacle
jf1_matrix[(0,0)] = 0.0;
jf1_matrix[(0,1)] = 0.0;
}
let mut d_matrix = Matrix2x1::new(0.0, 0.0);
for i in 0..d.len(){
d_matrix[(i,0)] = d[i];
}
let mut res_matrix = jf1_matrix.transpose()*d_matrix;
res[0] = res_matrix[(0,0)];
res[1] = res_matrix[(1,0)];
}
}
```
#### Main function
```
fn main(_p: &[f64], _centre: &[f64]) {
/// ===========================================
let mut p_obs = Matrix2x1::new(0.0, 0.0);
for i in 0.._centre.len(){
p_obs[(i,0)] = _centre[i];
}
let mut p: Vec<f64> = Vec::new();
for i in 0.._p.len(){
p.push(_p[i]);
}
let mut pm = ProblemMaster::init(p, p_obs);
/// ===========================================
let tolerance = 1e-5;
let nx = 2; // problem_size: dimension of the decision variables
let n1 = 2; // range dimensions of mappings F1
let n2 = 0; // range dimensions of mappings F2
let lbfgs_mem = 5; // memory of the LBFGS buffer
// PANOCCache: All the information needed at every step of the algorithm
let panoc_cache = PANOCCache::new(nx, tolerance, lbfgs_mem);
// AlmCache: A cache structure that contains all the data
// that make up the state of the ALM/PM algorithm
// (i.e., all those data that the algorithm updates)
let mut alm_cache = AlmCache::new(panoc_cache, n1, n2);
let set_c = Zero::new(); // Set C
let bounds = Ball2::new(None, 100.0); // Set U
let set_y = Ball2::new(None, 1e12); // Set Y
// =============
// Re-define the functions linked to user parameters
let f = |u: &[f64], cost: &mut f64| -> Result<(), SolverError> {
pm.f(u, cost);
Ok(())
};
let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
pm.df(u, grad);
Ok(())
};
let f1 = |u: &[f64], f1u: &mut [f64]| -> Result<(), SolverError> {
pm.f1(u, f1u);
Ok(())
};
let f1_jacobian_product = |u: &[f64], d: &[f64], res: &mut [f64]| -> Result<(), SolverError> {
pm.f1_jacobian_product(u,d,res);
Ok(())
};
// ==============
// AlmFactory: Prepare function psi and its gradient
// given the problem data such as f, del_f and
// optionally F_1, JF_1, C, F_2
let factory = AlmFactory::new(
f, // Cost function
df, // Cost Gradient
Some(f1), // MappingF1
Some(f1_jacobian_product), // Jacobian Mapping F1 Trans
NO_MAPPING, // MappingF2
NO_JACOBIAN_MAPPING, // Jacobian Mapping F2 Trans
Some(set_c), // Constraint set
n2,
);
// Define an optimisation problem
// to be solved with AlmOptimizer
let alm_problem = AlmProblem::new(
bounds,
Some(set_c),
Some(set_y),
|u: &[f64], xi: &[f64], cost: &mut f64| -> Result<(), SolverError> {
factory.psi(u, xi, cost)
},
|u: &[f64], xi: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {
factory.d_psi(u, xi, grad)
},
Some(f1),
NO_MAPPING,
n1,
n2,
);
let mut alm_optimizer = AlmOptimizer::new(&mut alm_cache, alm_problem)
.with_delta_tolerance(1e-5)
.with_max_outer_iterations(200)
.with_epsilon_tolerance(1e-6)
.with_initial_inner_tolerance(1e-2)
.with_inner_tolerance_update_factor(0.5)
.with_initial_penalty(100.0)
.with_penalty_update_factor(1.05)
.with_sufficient_decrease_coefficient(0.2)
.with_initial_lagrange_multipliers(&vec![5.0; n1]);
let mut u = vec![0.0; nx]; // Initial guess
let solver_result = alm_optimizer.solve(&mut u);
let r = solver_result.unwrap();
println!("\n\nSolver result : {:#.7?}\n", r);
println!("Solution u = {:#.6?}", u);
}
```
#### Result
##### Case (1) : $\mathbf{p} = [0.5, 0.0, 0.0]$; and $\mathbf{c} = [0.0, 0.0]$
```
main(&[0.5, 0.0, 0.0], &[0.0, 0.0]);
```
##### Case (2) : $\mathbf{p} = [0.5, 0.0, 0.0]$; and $\mathbf{c} = [-0.25, 0.0]$
```
main(&[0.5, 0.0, 0.0], &[-0.25, 0.0]);
```
##### Case (3) : $\mathbf{p} = [0.5, -0.5, 0.0]$; and $\mathbf{c} = [0.0, 0.0]$
```
main(&[0.5, -0.5, 0.0], &[0.0, 0.0]);
```
##### Case (4) : $\mathbf{p} = [0.1, 0.0, 0.0]$; and $\mathbf{c} = [0.5, 0.0]$
```
main(&[0.1, 0.0, 0.0], &[0.5, 0.0]);
```
##### Case (5) : $\mathbf{p} = [0.5, -0.5864, 0.0]$; and $\mathbf{c} = [-3.0, 2.0]$
```
main(&[0.5, -0.5864, 0.0], &[-3.0, 2.0]);
```
In this case, the circle constraint does not cover the minimum point of the parabola, so the optimal cost is $u_2 = p_3 = 0$, attained at $u_1 = p_2$.
## Summary
- Compared with the previous example, the solver can converge in all the test cases (In the previous example, we had to tune initial guess and max_iteration number).
- Also, solution times are less than those in the previous example.
- We do not need to manually implement the gradients of the cost function and the constraints.
## Future work
- Let's make the `ProblemMaster` have resizable parameters (e.g., not bound to `Matrix2x1`).
- Let's implement a path planner based on Model Predictive Control **without consideration of the system dynamics (only concerning the path)**
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Plot style
sns.set()
%pylab inline
pylab.rcParams['figure.figsize'] = (4, 4)
# Avoid inaccurate floating values (for inverse matrices in dot product for instance)
# See https://stackoverflow.com/questions/24537791/numpy-matrix-inversion-rounding-errors
np.set_printoptions(suppress=True)
def plotVectors(vecs, cols, alpha=1):
"""
Plot set of vectors.
Parameters
----------
vecs : array-like
Coordinates of the vectors to plot. Each vectors is in an array. For
instance: [[1, 3], [2, 2]] can be used to plot 2 vectors.
cols : array-like
Colors of the vectors. For instance: ['red', 'blue'] will display the
first vector in red and the second in blue.
alpha : float
Opacity of vectors
Returns:
fig : instance of matplotlib.figure.Figure
The figure of the vectors
"""
plt.axvline(x=0, color='#A9A9A9', zorder=0)
plt.axhline(y=0, color='#A9A9A9', zorder=0)
for i in range(len(vecs)):
if (isinstance(alpha, list)):
alpha_i = alpha[i]
else:
alpha_i = alpha
x = np.concatenate([[0,0],vecs[i]])
plt.quiver([x[0]],
[x[1]],
[x[2]],
[x[3]],
angles='xy', scale_units='xy', scale=1, color=cols[i],
alpha=alpha_i)
```
$$
\newcommand\bs[1]{\boldsymbol{#1}}
\newcommand\norm[1]{\left\lVert#1\right\rVert}
$$
# Introduction
We will see some major concepts of linear algebra in this chapter. It is also quite heavy so hang on! We will start with getting some ideas on eigenvectors and eigenvalues. We will develop on the idea that a matrix can be seen as a linear transformation and that applying a matrix on its eigenvectors gives new vectors with the same direction. Then we will see how to express quadratic equations into the matrix form. We will see that the eigendecomposition of the matrix corresponding to a quadratic equation can be used to find the minimum and maximum of this function. As a bonus, we will also see how to visualize linear transformations in Python!
# 2.7 Eigendecomposition
The eigendecomposition is one form of matrix decomposition. Decomposing a matrix means that we want to find a product of matrices that is equal to the initial matrix. In the case of the eigendecomposition, we decompose the initial matrix into the product of its eigenvectors and eigenvalues. First of all, let's see what eigenvectors and eigenvalues are.
# Matrices as linear transformations
As we have seen in [2.3](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices/) with the example of the identity matrix, you can think of matrices as linear transformations. Some matrices will rotate your space, others will rescale it etc. So when we apply a matrix to a vector, we end up with a transformed version of the vector. When we say that we 'apply' the matrix to the vector it means that we calculate the dot product of the matrix with the vector. We will start with a basic example of this kind of transformation.
### Example 1.
```
A = np.array([[-1, 3], [2, -2]])
A
v = np.array([[2], [1]])
v
```
Let's plot this vector:
```
plotVectors([v.flatten()], cols=['#1190FF'])
plt.ylim(-1, 4)
plt.xlim(-1, 4)
```
Now, we will apply the matrix $\bs{A}$ to this vector and plot the old vector (light blue) and the new one (orange):
```
Av = A.dot(v)
print(Av)
plotVectors([v.flatten(), Av.flatten()], cols=['#1190FF', '#FF9A13'])
plt.ylim(-1, 4)
plt.xlim(-1, 4)
```
We can see that applying the matrix $\bs{A}$ has the effect of modifying the vector.
Now that you can think of matrices as linear transformation recipes, let's see the case of a very special type of vector: the eigenvector.
# Eigenvectors and eigenvalues
We have seen an example of a vector transformed by a matrix. Now imagine that the transformation of the initial vector gives us a new vector that has the exact same direction. The scale can be different but the direction is the same. Applying the matrix didn't change the direction of the vector. This special vector is called an eigenvector of the matrix. We will see that finding the eigenvectors of a matrix can be very useful.
<span class='pquote'>
Imagine that the transformation of the initial vector by the matrix gives a new vector with the exact same direction. This vector is called an eigenvector of $\bs{A}$.
</span>
This means that $\bs{v}$ is an eigenvector of $\bs{A}$ if $\bs{v}$ and $\bs{Av}$ are in the same direction, or, to rephrase it, if the vectors $\bs{Av}$ and $\bs{v}$ are parallel. The output vector is just a scaled version of the input vector. This scaling factor is $\lambda$, which is called the **eigenvalue** of $\bs{A}$.
$$
\bs{Av} = \lambda\bs{v}
$$
### Example 2.
Let $\bs{A}$ be the following matrix:
$$
\bs{A}=
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
$$
We know that one eigenvector of A is:
$$
\bs{v}=
\begin{bmatrix}
1\\\\
1
\end{bmatrix}
$$
We can check that $\bs{Av} = \lambda\bs{v}$:
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
1\\\\
1
\end{bmatrix}=\begin{bmatrix}
6\\\\
6
\end{bmatrix}
$$
We can see that:
$$
6\times \begin{bmatrix}
1\\\\
1
\end{bmatrix} = \begin{bmatrix}
6\\\\
6
\end{bmatrix}
$$
which means that $\bs{v}$ is indeed an eigenvector of $\bs{A}$. Also, the corresponding eigenvalue is $\lambda=6$.
We can represent $\bs{v}$ and $\bs{Av}$ to check if their directions are the same:
```
A = np.array([[5, 1], [3, 3]])
A
v = np.array([[1], [1]])
v
Av = A.dot(v)
orange = '#FF9A13'
blue = '#1190FF'
plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])
plt.ylim(-1, 7)
plt.xlim(-1, 7)
```
We can see that their directions are the same!
Another eigenvector of $\bs{A}$ is
$$
\bs{v}=
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$$
because
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}\begin{bmatrix}
1\\\\
-3
\end{bmatrix} = \begin{bmatrix}
2\\\\
-6
\end{bmatrix}
$$
and
$$
2 \times \begin{bmatrix}
1\\\\
-3
\end{bmatrix} =
\begin{bmatrix}
2\\\\
-6
\end{bmatrix}
$$
So the corresponding eigenvalue is $\lambda=2$.
```
v = np.array([[1], [-3]])
v
Av = A.dot(v)
plotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])
plt.ylim(-7, 1)
plt.xlim(-1, 3)
```
This example shows that the eigenvectors $\bs{v}$ are vectors that change only in scale when we apply the matrix $\bs{A}$ to them. Here the scales were 6 for the first eigenvector and 2 for the second, but $\lambda$ can take any real or even complex value.
## Find eigenvalues and eigenvectors in Python
Numpy provides the function `np.linalg.eig` returning eigenvectors and eigenvalues (the first array contains the eigenvalues and the second the eigenvectors, concatenated in columns):
```python
(array([ 6., 2.]), array([[ 0.70710678, -0.31622777],
[ 0.70710678, 0.9486833 ]]))
```
Here is a demonstration with the preceding example.
```
A = np.array([[5, 1], [3, 3]])
A
np.linalg.eig(A)
```
We can see that the eigenvalues are the same as the ones we used before: 6 and 2 (first array).
The eigenvectors correspond to the columns of the second array. This means that the eigenvector corresponding to $\lambda=6$ is:
$$
\begin{bmatrix}
0.70710678\\\\
0.70710678
\end{bmatrix}
$$
The eigenvector corresponding to $\lambda=2$ is:
$$
\begin{bmatrix}
-0.31622777\\\\
0.9486833
\end{bmatrix}
$$
The eigenvectors look different because they do not necessarily have the same scaling as the ones we gave in the example. We can easily see that the first one corresponds to a scaled version of our $\begin{bmatrix}
1\\\\
1
\end{bmatrix}$. But the same property stands. We have still $\bs{Av} = \lambda\bs{v}$:
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
0.70710678\\\\
0.70710678
\end{bmatrix}=
\begin{bmatrix}
4.24264069\\\\
4.24264069
\end{bmatrix}
$$
With $0.70710678 \times 6 = 4.24264069$. So there is an infinite number of eigenvectors corresponding to the eigenvalue $6$. They are equivalent because we are only interested in their directions.
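As a quick check (a small sketch added here, not part of the original text), we can normalize our own eigenvectors and compare them with the columns returned by `np.linalg.eig`:
```
# normalizing our eigenvectors reproduces numpy's columns (up to a possible sign flip)
v1 = np.array([1., 1.])
v2 = np.array([1., -3.])
print(v1 / np.linalg.norm(v1))  # ~ [0.70710678, 0.70710678]
print(v2 / np.linalg.norm(v2))  # ~ [0.31622777, -0.9486833] (numpy returns the opposite sign)
```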
For the second eigenvector we can check that it corresponds to a scaled version of $\begin{bmatrix}
1\\\\
-3
\end{bmatrix}$. We can draw these vectors and see if they are parallel.
```
v = np.array([[1], [-3]])
Av = A.dot(v)
v_np = [-0.31622777, 0.9486833]
plotVectors([Av.flatten(), v.flatten(), v_np], cols=[blue, orange, 'blue'])
plt.ylim(-7, 1)
plt.xlim(-1, 3)
```
We can see that the vector found with Numpy (in dark blue) is a scaled version of our preceding $\begin{bmatrix}
1\\\\
-3
\end{bmatrix}$.
## Rescaled vectors
As we saw with numpy, if $\bs{v}$ is an eigenvector of $\bs{A}$, then any rescaled vector $s\bs{v}$ (with $s \neq 0$) is also an eigenvector of $\bs{A}$. The eigenvalue of the rescaled vector is the same.
Let's try to rescale
$$
\bs{v}=
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$$
from our preceding example.
For instance,
$$
\bs{3v}=
\begin{bmatrix}
3\\\\
-9
\end{bmatrix}
$$
$$
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}
\begin{bmatrix}
3\\\\
-9
\end{bmatrix} =
\begin{bmatrix}
6\\\\
-18
\end{bmatrix} = 2 \times
\begin{bmatrix}
3\\\\
-9
\end{bmatrix}
$$
We indeed have $\bs{A}(3\bs{v}) = \lambda (3\bs{v})$ and the eigenvalue is still $\lambda=2$.
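We can verify this numerically with the matrix $\bs{A}$ defined above; this is just a quick added sketch:
```
# the rescaled vector 3v is still an eigenvector of A = [[5, 1], [3, 3]] with eigenvalue 2
v3 = np.array([[3], [-9]])
print(A.dot(v3))  # [[  6], [-18]]
print(2 * v3)     # [[  6], [-18]]
```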
## Concatenating eigenvalues and eigenvectors
Now that we have an idea of what eigenvectors and eigenvalues are, we can see how they can be used to decompose a matrix. All eigenvectors of a matrix $\bs{A}$ can be concatenated in a matrix, with each column corresponding to one eigenvector (like in the second array returned by `np.linalg.eig(A)`):
$$
\bs{V}=
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
$$
The first column $
\begin{bmatrix}
1\\\\
1
\end{bmatrix}
$ corresponds to $\lambda=6$ and the second $
\begin{bmatrix}
1\\\\
-3
\end{bmatrix}
$ to $\lambda=2$.
The vector $\bs{\lambda}$ can be created from all eigenvalues:
$$
\bs{\lambda}=
\begin{bmatrix}
6\\\\
2
\end{bmatrix}
$$
Then the eigendecomposition is given by
$$
\bs{A}=\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}
$$
<span class='pquote'>
We can decompose the matrix $\bs{A}$ with eigenvectors and eigenvalues. It is done with: $\bs{A}=\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}$
</span>
$diag(\bs{\lambda})$ is a diagonal matrix (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)) containing all the eigenvalues. Continuing with our example we have
$$
\bs{V}=\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
$$
The diagonal matrix is all zeros except for the diagonal, which is our vector $\bs{\lambda}$.
$$
diag(\bs{\lambda})=
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix}
$$
The inverse matrix of $\bs{V}$ can be calculated with numpy:
```
V = np.array([[1, 1], [1, -3]])
V
V_inv = np.linalg.inv(V)
V_inv
```
So let's plug
$$
\bs{V}^{-1}=\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}
$$
into our equation:
$$
\begin{align*}
&\bs{V}\cdot diag(\bs{\lambda}) \cdot \bs{V}^{-1}\\\\
&=
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix}
\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}
\end{align*}
$$
If we do the dot product of the first two matrices we have:
$$
\begin{bmatrix}
1 & 1\\\\
1 & -3
\end{bmatrix}
\begin{bmatrix}
6 & 0\\\\
0 & 2
\end{bmatrix} =
\begin{bmatrix}
6 & 2\\\\
6 & -6
\end{bmatrix}
$$
So with replacing into the equation:
$$
\begin{align*}
&\begin{bmatrix}
6 & 2\\\\
6 & -6
\end{bmatrix}
\begin{bmatrix}
0.75 & 0.25\\\\
0.25 & -0.25
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6\times0.75 + (2\times0.25) & 6\times0.25 + (2\times-0.25)\\\\
6\times0.75 + (-6\times0.25) & 6\times0.25 + (-6\times-0.25)
\end{bmatrix}\\\\
&=
\begin{bmatrix}
5 & 1\\\\
3 & 3
\end{bmatrix}=
\bs{A}
\end{align*}
$$
Let's check our result with Python:
```
lambdas = np.diag([6,2])
lambdas
V.dot(lambdas).dot(V_inv)
```
That confirms our previous calculation.
## Real symmetric matrix
In the case of real symmetric matrices (more details about symmetric matrices in [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)), the eigendecomposition can be expressed as
$$
\bs{A} = \bs{Q}\Lambda \bs{Q}^\text{T}
$$
where $\bs{Q}$ is the matrix with eigenvectors as columns and $\Lambda$ is $diag(\lambda)$.
### Example 3.
$$
\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
This matrix is symmetric because $\bs{A}=\bs{A}^\text{T}$. Its eigenvectors are:
$$
\bs{Q}=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
and its eigenvalues put in a diagonal matrix gives:
$$
\bs{\Lambda}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
So let's begin to calculate $\bs{Q\Lambda}$:
$$
\begin{align*}
\bs{Q\Lambda}&=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
0.89442719 \times 7 & -0.4472136\times 2\\\\
0.4472136 \times 7 & 0.89442719\times 2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6.26099033 & -0.8944272\\\\
3.1304952 & 1.78885438
\end{bmatrix}
\end{align*}
$$
with:
$$
\bs{Q}^\text{T}=
\begin{bmatrix}
0.89442719 & 0.4472136\\\\
-0.4472136 & 0.89442719
\end{bmatrix}
$$
So we have:
$$
\begin{align*}
\bs{Q\Lambda} \bs{Q}^\text{T}&=
\begin{bmatrix}
6.26099033 & -0.8944272\\\\
3.1304952 & 1.78885438
\end{bmatrix}
\begin{bmatrix}
0.89442719 & 0.4472136\\\\
-0.4472136 & 0.89442719
\end{bmatrix}\\\\
&=
\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
\end{align*}
$$
It works! For that reason, it can be useful to work with symmetric matrices! Let's do the same thing easily with `linalg` from numpy:
```
A = np.array([[6, 2], [2, 3]])
A
eigVals, eigVecs = np.linalg.eig(A)
eigVecs
eigVals = np.diag(eigVals)
eigVals
eigVecs.dot(eigVals).dot(eigVecs.T)
```
We can see that the result corresponds to our initial matrix.
# Quadratic form to matrix form
Eigendecomposition can be used to optimize quadratic functions. We will see that when $\bs{x}$ takes the values of an eigenvector, $f(\bs{x})$ takes the value of its corresponding eigenvalue.
<span class='pquote'>
When $\bs{x}$ takes the values of an eigenvector, $f(\bs{x})$ takes the value of its corresponding eigenvalue.
</span>
We will see in the following points how we can show that with different methods.
Let's have the following quadratic equation:
$$
f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2
$$
These quadratic forms can be generated by matrices:
$$
f(\bs{x})= \begin{bmatrix}
x_1 & x_2
\end{bmatrix}\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix} = \bs{x^\text{T}Ax}
$$
with:
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$
\bs{A}=\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}
$$
We call them matrix forms. This form is useful to do various things with the quadratic equation, like constrained optimization (see below).
<span class='pquote'>
Quadratic equations can be expressed under the matrix form
</span>
If you look at the relation between these forms you can see that $a$ gives you the coefficient of $x_1^2$, $(b + c)$ the coefficient of $x_1x_2$ and $d$ the coefficient of $x_2^2$. This means that the same quadratic form can be obtained from an infinite number of matrices $\bs{A}$ by changing $b$ and $c$ while preserving their sum.
### Example 4.
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$
\bs{A}=\begin{bmatrix}
2 & 4\\\\
2 & 5
\end{bmatrix}
$$
gives the following quadratic form:
$$
2x_1^2 + (4+2)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2
$$
but if:
$$
\bs{A}=\begin{bmatrix}
2 & -3\\\\
9 & 5
\end{bmatrix}
$$
we still have the same quadratic form:
$$
2x_1^2 + (-3+9)x_1x_2 + 5x_2^2\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2
$$
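To convince ourselves numerically, here is a small added sketch evaluating $\bs{x^\text{T}Ax}$ for both matrices at a random point; all three printed values are equal:
```
# two different matrices generate the same quadratic form 2*x1^2 + 6*x1*x2 + 5*x2^2
x = np.random.rand(2, 1)
A1 = np.array([[2, 4], [2, 5]])
A2 = np.array([[2, -3], [9, 5]])
print(x.T.dot(A1).dot(x))
print(x.T.dot(A2).dot(x))
print(2*x[0]**2 + 6*x[0]*x[1] + 5*x[1]**2)
```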
### Example 5
For this example, we will go from the matrix form to the quadratic form using a symmetric matrix $\bs{A}$. Let's use the matrix of the example 3.
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
$$
\begin{align*}
\bs{x^\text{T}Ax}&=
\begin{bmatrix}
x_1 & x_2
\end{bmatrix}
\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
x_1 & x_2
\end{bmatrix}
\begin{bmatrix}
6 x_1 + 2 x_2\\\\
2 x_1 + 3 x_2
\end{bmatrix}\\\\
&=
x_1(6 x_1 + 2 x_2) + x_2(2 x_1 + 3 x_2)\\\\
&=
6 x_1^2 + 4 x_1x_2 + 3 x_2^2
\end{align*}
$$
Our quadratic equation is thus $6 x_1^2 + 4 x_1x_2 + 3 x_2^2$.
### Note
If $\bs{A}$ is a diagonal matrix (all 0 except the diagonal), the quadratic form of $\bs{x^\text{T}Ax}$ will have no cross term. Take the following matrix form:
$$
\bs{A}=\begin{bmatrix}
a & b\\\\
c & d
\end{bmatrix}
$$
If $\bs{A}$ is diagonal, then $b$ and $c$ are 0 and since $f(\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2$ there is no cross term. A quadratic form without cross term is called diagonal form since it comes from a diagonal matrix.
# Change of variable
A change of variable (or linear substitution) simply means that we replace a variable by another one. We will see that it can be used to remove the cross terms in our quadratic equation. Without the cross term, it will then be easier to characterize the function and eventually optimize it (i.e finding its maximum or minimum).
## With the quadratic form
### Example 6.
Let's take again our previous quadratic form:
$$
\bs{x^\text{T}Ax} = 6 x_1^2 + 4 x_1x_2 + 3 x_2^2
$$
The change of variable will concern $x_1$ and $x_2$. We can replace $x_1$ with a combination of $y_1$ and $y_2$, and $x_2$ with another combination of $y_1$ and $y_2$. We will of course end up with a new equation. The nice thing is that we can find a specific substitution that leads to a simplification of our statement. Specifically, it can be used to get rid of the cross term (in our example: $4 x_1x_2$). We will see later why this is interesting.
Actually, the right substitution is given by the eigenvectors of the matrix used to generate the quadratic form. Let's recall that the matrix form of our equation is:
$$
\bs{x} = \begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix}
$$
and
$$\bs{A}=\begin{bmatrix}
6 & 2\\\\
2 & 3
\end{bmatrix}
$$
and that the eigenvectors of $\bs{A}$ are:
$$
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
With the purpose of simplification, we can replace these values with:
$$
\begin{bmatrix}
\frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{5}}\\\\
\frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}}
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2 & -1\\\\
1 & 2
\end{bmatrix}
$$
So our first eigenvector is:
$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2\\\\
1
\end{bmatrix}
$$
and our second eigenvector is:
$$
\frac{1}{\sqrt{5}}
\begin{bmatrix}
-1\\\\
2
\end{bmatrix}
$$
The change of variable will lead to:
$$
\begin{bmatrix}
x_1\\\\
x_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2 & -1\\\\
1 & 2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix} =
\frac{1}{\sqrt{5}}
\begin{bmatrix}
2y_1 - y_2\\\\
y_1 + 2y_2
\end{bmatrix}
$$
so we have
$$
\begin{cases}
x_1 = \frac{1}{\sqrt{5}}(2y_1 - y_2)\\\\
x_2 = \frac{1}{\sqrt{5}}(y_1 + 2y_2)
\end{cases}
$$
So far so good! Let's replace that in our example:
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&=
6 [\frac{1}{\sqrt{5}}(2y_1 - y_2)]^2 + 4 [\frac{1}{\sqrt{5}}(2y_1 - y_2)\frac{1}{\sqrt{5}}(y_1 + 2y_2)] + 3 [\frac{1}{\sqrt{5}}(y_1 + 2y_2)]^2\\\\
&=
\frac{1}{5}[6 (2y_1 - y_2)^2 + 4 (2y_1 - y_2)(y_1 + 2y_2) + 3 (y_1 + 2y_2)^2]\\\\
&=
\frac{1}{5}[6 (4y_1^2 - 4y_1y_2 + y_2^2) + 4 (2y_1^2 + 4y_1y_2 - y_1y_2 - 2y_2^2) + 3 (y_1^2 + 4y_1y_2 + 4y_2^2)]\\\\
&=
\frac{1}{5}(24y_1^2 - 24y_1y_2 + 6y_2^2 + 8y_1^2 + 16y_1y_2 - 4y_1y_2 - 8y_2^2 + 3y_1^2 + 12y_1y_2 + 12y_2^2)\\\\
&=
\frac{1}{5}(35y_1^2 + 10y_2^2)\\\\
&=
7y_1^2 + 2y_2^2
\end{align*}
$$
That's great! Our new equation doesn't have any cross terms!
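As a sanity check (a small sketch added here), we can verify numerically that the change of variable preserves the value of the quadratic form:
```
# x = P y, with P the matrix of normalized eigenvectors of A = [[6, 2], [2, 3]]
P = (1 / np.sqrt(5)) * np.array([[2, -1], [1, 2]])
y = np.random.rand(2, 1)
x = P.dot(y)
print(6*x[0]**2 + 4*x[0]*x[1] + 3*x[1]**2)  # original form evaluated at x
print(7*y[0]**2 + 2*y[1]**2)                # form without cross term evaluated at y
```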
## With the Principal Axes Theorem
Actually there is a simpler way to do the change of variable. We can stay in the matrix form. Recall that we start with the form:
<div>
$$
f(\bs{x})=\bs{x^\text{T}Ax}
$$
</div>
The linear substitution can be written in these terms. We want to replace the variables $\bs{x}$ by $\bs{y}$, related by:
<div>
$$
\bs{x}=P\bs{y}
$$
</div>
We want to find $P$ such that our new equation (after the change of variable) doesn't contain the cross terms. The first step is to substitute this into the first equation:
<div>
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
(\bs{Py})^\text{T}\bs{A}(\bs{Py})\\\\
&=
\bs{y}^\text{T}(\bs{P}^\text{T}\bs{AP})\bs{y}
\end{align*}
$$
</div>
Can you see how we transformed the left-hand side ($\bs{x}$) into the right-hand side ($\bs{y}$)? The substitution is done by replacing $\bs{A}$ with $\bs{P^\text{T}AP}$. We also know that $\bs{A}$ is symmetric, so there is a diagonal matrix $\bs{D}$ containing the eigenvalues of $\bs{A}$ such that $\bs{D}=\bs{P}^\text{T}\bs{AP}$, where $\bs{P}$ is the matrix of eigenvectors of $\bs{A}$. We thus end up with:
<div>
$$
\bs{x^\text{T}Ax}=\bs{y^\text{T}\bs{D} y}
$$
</div>
<span class='pquote'>
We can use $\bs{D}$ to simplify our quadratic equation and remove the cross terms
</span>
All of this implies that we can use $\bs{D}$ to simplify our quadratic equation and remove the cross terms. If you remember from example 3, the eigenvalues of $\bs{A}$ are 7 and 2, giving:
<div>
$$
\bs{D}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
</div>
<div>
$$
\begin{align*}
\bs{x^\text{T}Ax}
&=
\bs{y^\text{T}\bs{D} y}\\\\
&=
\bs{y}^\text{T}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
\bs{y}\\\\
&=
\begin{bmatrix}
y_1 & y_2
\end{bmatrix}
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix}\\\\
&=
\begin{bmatrix}
7y_1 +0y_2 & 0y_1 + 2y_2
\end{bmatrix}
\begin{bmatrix}
y_1\\\\
y_2
\end{bmatrix}\\\\
&=
7y_1^2 + 2y_2^2
\end{align*}
$$
</div>
That's nice! If you look back to the change of variable that we have done in the quadratic form, you will see that we have found the same values!
This form (without cross-term) is called the **principal axes form**.
### Summary
To summarise, the principal axes form can be found with
$$
\bs{x^\text{T}Ax} = \lambda_1y_1^2 + \lambda_2y_2^2
$$
where $\lambda_1$ is the eigenvalue corresponding to the first eigenvector and $\lambda_2$ the eigenvalue corresponding to the second eigenvector (the first and second columns of $\bs{P}$).
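Here is a short numerical sketch of this summary (added here): computing $\bs{P}^\text{T}\bs{AP}$ directly gives the diagonal matrix of eigenvalues.
```
# P^T A P is diagonal and contains the eigenvalues 7 and 2
A = np.array([[6, 2], [2, 3]])
P = (1 / np.sqrt(5)) * np.array([[2, -1], [1, 2]])
D = P.T.dot(A).dot(P)
print(np.round(D, 10))  # [[7. 0.] [0. 2.]]
```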
# Finding f(x) with eigendecomposition
We will see that there is a way to find $f(\bs{x})$ with eigenvectors and eigenvalues when $\bs{x}$ is a unit vector.
Let's start from:
$$
f(\bs{x}) =\bs{x^\text{T}Ax}
$$
We know that if $\bs{x}$ is an eigenvector of $\bs{A}$ and $\lambda$ the corresponding eigenvalue, then $
\bs{Ax}=\lambda \bs{x}
$. By replacing the term in the last equation we have:
$$
f(\bs{x}) =\bs{x^\text{T}\lambda x} = \bs{x^\text{T}x}\lambda
$$
Since $\bs{x}$ is a unit vector, $\norm{\bs{x}}_2=1$ and $\bs{x^\text{T}x}=1$ (cf. [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/) Norms). We end up with
$$
f(\bs{x}) = \lambda
$$
This is a useful property. If $\bs{x}$ is an eigenvector of $\bs{A}$, $f(\bs{x}) =\bs{x^\text{T}Ax}$ takes the value of the corresponding eigenvalue. Note that this only works if the Euclidean norm of $\bs{x}$ is 1 (i.e. $\bs{x}$ is a unit vector).
### Example 7
This example will show that $f(\bs{x}) = \lambda$. Let's take again the last example, the eigenvectors of $\bs{A}$ were
$$
\bs{Q}=
\begin{bmatrix}
0.89442719 & -0.4472136\\\\
0.4472136 & 0.89442719
\end{bmatrix}
$$
and the eigenvalues
$$
\bs{\Lambda}=
\begin{bmatrix}
7 & 0\\\\
0 & 2
\end{bmatrix}
$$
So if:
$$
\bs{x}=\begin{bmatrix}
0.89442719 & 0.4472136
\end{bmatrix}
$$
$f(\bs{x})$ should be equal to 7. Let's check that this is true.
$$
\begin{align*}
f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&= 6\times 0.89442719^2 + 4\times 0.89442719\times 0.4472136 + 3 \times 0.4472136^2\\\\
&= 7
\end{align*}
$$
In the same way, if $\bs{x}=\begin{bmatrix}
-0.4472136 & 0.89442719
\end{bmatrix}$, $f(\bs{x})$ should be equal to 2.
$$
\begin{align*}
f(\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\
&= 6\times (-0.4472136)^2 + 4\times (-0.4472136)\times 0.89442719 + 3 \times 0.89442719^2\\\\
&= 2
\end{align*}
$$
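The same check can be done with numpy; a brief added sketch (the values are approximate because the eigenvector entries are rounded):
```
# f(x) = x^T A x evaluated at the two unit eigenvectors gives the eigenvalues
A = np.array([[6, 2], [2, 3]])
for x in (np.array([[0.89442719], [0.4472136]]),
          np.array([[-0.4472136], [0.89442719]])):
    print(x.T.dot(A).dot(x))  # ~7 for the first eigenvector, ~2 for the second
```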
# Quadratic form optimization
Depending on the context, optimizing a function means finding its maximum or its minimum. It is for instance widely used to minimize the error of cost functions in machine learning.
Here we will see how eigendecomposition can be used to optimize quadratic functions, and why this can be done easily without cross terms. The difficulty is that we want a constrained optimization: finding the minimum or the maximum of the function for $\bs{x}$ being a unit vector.
### Example 8.
We want to optimize:
$$
f(\bs{x}) =\bs{x^\text{T}Ax} \textrm{ subject to }||\bs{x}||_2= 1
$$
In our last example we ended up with:
$$
f(\bs{x}) = 7y_1^2 + 2y_2^2
$$
And the constraint that $\bs{x}$ is a unit vector implies:
$$
||\bs{x}||_2 = 1 \Leftrightarrow x_1^2 + x_2^2 = 1
$$
We can also show that $\bs{y}$ has to be a unit vector if it is the case for $\bs{x}$. Recall first that $\bs{x}=\bs{Py}$:
$$
\begin{align*}
||\bs{x}||^2 &= \bs{x^\text{T}x}\\\\
&= (\bs{Py})^\text{T}(\bs{Py})\\\\
&= \bs{y}^\text{T}\bs{P}^\text{T}\bs{P}\bs{y}\\\\
&= \bs{y^\text{T}y} = ||\bs{y}||^2 \quad (\text{since } \bs{P}^\text{T}\bs{P}=\bs{I} \text{ for the orthogonal matrix } \bs{P})
\end{align*}
$$
So $\norm{\bs{x}}^2 = \norm{\bs{y}}^2 = 1$ and thus $y_1^2 + y_2^2 = 1$
Since $y_1^2$ and $y_2^2$ cannot be negative because they are squared values, we can be sure that $2y_2^2\leq7y_2^2$. Hence:
$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\leq
7y_1^2 + 7y_2^2\\\\
&=
7(y_1^2+y_2^2)\\\\
&=
7
\end{align*}
$$
This means that the maximum value of $f(\bs{x})$ is 7.
The same reasoning leads to the minimum of $f(\bs{x})$: since $7y_1^2\geq2y_1^2$, we have:
$$
\begin{align*}
f(\bs{x}) &= 7y_1^2 + 2y_2^2\\\\
&\geq
2y_1^2 + 2y_2^2\\\\
&=
2(y_1^2+y_2^2)\\\\
&=
2
\end{align*}
$$
And the minimum of $f(\bs{x})$ is 2.
### Summary
We can note that the minimum of $f(\bs{x})$ is the smallest eigenvalue of the corresponding matrix $\bs{A}$, and its maximum is the largest eigenvalue. Another useful fact is that these values are obtained when $\bs{x}$ takes the value of the corresponding eigenvector (check back the preceding paragraph). In that way, $f(\bs{x})=7$ when $\bs{x}=\begin{bmatrix}0.89442719 & 0.4472136\end{bmatrix}$. This shows how useful eigenvalues and eigenvectors are in this kind of constrained optimization.
## Graphical views
We saw that the quadratic functions $f(\bs{x}) = ax_1^2 +2bx_1x_2 + cx_2^2$ can be represented by the symmetric matrix $\bs{A}$:
$$
\bs{A}=\begin{bmatrix}
a & b\\\\
b & c
\end{bmatrix}
$$
Graphically, these functions can take one of three general shapes (click on the links to go to the Surface Plotter and move the shapes):
1.[Positive-definite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x%2By*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49) | 2.[Negative-definite form](https://academo.org/demos/3d-surface-plotter/?expression=-x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=25) | 3.[Indefinite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49)
:-------------------------:|:-------------------------:|:-------:
<img src="images/quadratic-functions-positive-definite-form.png" alt="Quadratic function with a positive definite form" title="Quadratic function with a positive definite form"> | <img src="images/quadratic-functions-negative-definite-form.png" alt="Quadratic function with a negative definite form" title="Quadratic function with a negative definite form"> | <img src="images/quadratic-functions-indefinite-form.png" alt="Quadratic function with a indefinite form" title="Quadratic function with a indefinite form">
With the constraints that $\bs{x}$ is a unit vector, the minimum of the function $f(\bs{x})$ corresponds to the smallest eigenvalue and is obtained with its corresponding eigenvector. The maximum corresponds to the biggest eigenvalue and is obtained with its corresponding eigenvector.
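To see this numerically, here is a small added sketch that evaluates $f(\bs{x})$ on many unit vectors and reports the extrema:
```
# evaluate f(x) = x^T A x on the unit circle: the extrema are the eigenvalues 7 and 2
A = np.array([[6, 2], [2, 3]])
theta = np.linspace(0, 2*np.pi, 1000)
xs = np.stack([np.cos(theta), np.sin(theta)])  # unit vectors as columns
values = np.einsum('ij,ik,kj->j', xs, A, xs)   # x^T A x for each column
print(values.max(), values.min())              # ~7.0 and ~2.0
```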
# Conclusion
We have seen a lot of things in this chapter. We saw that linear algebra can be used to solve a variety of mathematical problems and more specifically that eigendecomposition is a powerful tool! However, it cannot be used for non square matrices. In the next chapter, we will see the Singular Value Decomposition (SVD) which is another way of decomposing matrices. The advantage of the SVD is that you can use it also with non-square matrices.
# BONUS: visualizing linear transformations
We can see the effect of eigenvectors and eigenvalues in linear transformations. We will first see how a linear transformation works. A linear transformation is a mapping between an input vector and an output vector. Different operations like projection or rotation are linear transformations. Every linear transformation can be thought of as applying a matrix to the input vector. We will see the meaning of this graphically. For that purpose, let's start by drawing the set of unit vectors (they are all vectors with a norm of 1).
```
t = np.linspace(0, 2*np.pi, 100)
x = np.cos(t)
y = np.sin(t)
plt.figure()
plt.plot(x, y)
plt.xlim(-1.5, 1.5)
plt.ylim(-1.5, 1.5)
plt.show()
```
Then, we will transform each of these points by applying a matrix $\bs{A}$. This is the goal of the function below, which takes a matrix as input and will draw
- the original set of unit vectors
- the transformed set of unit vectors
- the eigenvectors
- the eigenvectors scaled by their eigenvalues
```
def linearTransformation(transformMatrix):
orange = '#FF9A13'
blue = '#1190FF'
# Create original set of unit vectors
t = np.linspace(0, 2*np.pi, 100)
x = np.cos(t)
y = np.sin(t)
# Calculate eigenvectors and eigenvalues
eigVecs = np.linalg.eig(transformMatrix)[1]
eigVals = np.diag(np.linalg.eig(transformMatrix)[0])
# Create vectors of 0 to store new transformed values
newX = np.zeros(len(x))
newY = np.zeros(len(x))
for i in range(len(x)):
unitVector_i = np.array([x[i], y[i]])
# Apply the matrix to the vector
newXY = transformMatrix.dot(unitVector_i)
newX[i] = newXY[0]
newY[i] = newXY[1]
plotVectors([eigVecs[:,0], eigVecs[:,1]],
cols=[blue, blue])
plt.plot(x, y)
plotVectors([eigVals[0,0]*eigVecs[:,0], eigVals[1,1]*eigVecs[:,1]],
cols=[orange, orange])
plt.plot(newX, newY)
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.show()
A = np.array([[1,-1], [-1, 4]])
linearTransformation(A)
```
We can see the unit circle in dark blue, the non scaled eigenvectors in light blue, the transformed unit circle in green and the scaled eigenvectors in yellow.
It is worth noting that the eigenvectors are orthogonal here because the matrix is symmetric. Let's try with a non-symmetric matrix:
```
A = np.array([[1,1], [-1, 4]])
linearTransformation(A)
```
In this case, the eigenvectors are not orthogonal!
# References
## Videos of Gilbert Strang
- [Gilbert Strang, Lec21 MIT - Eigenvalues and eigenvectors](https://www.youtube.com/watch?v=lXNXrLcoerU)
- [Gilbert Strang, Lec 21 MIT, Spring 2005](https://www.youtube.com/watch?v=lXNXrLcoerU)
## Quadratic forms
- [David Lay, University of Colorado, Denver](http://math.ucdenver.edu/~esulliva/LinearAlgebra/SlideShows/07_02.pdf)
- [math.stackexchange QA](https://math.stackexchange.com/questions/2207111/eigendecomposition-optimization-of-quadratic-expressions)
## Eigenvectors
- [Victor Powell and Lewis Lehe - Interactive representation of eigenvectors](http://setosa.io/ev/eigenvectors-and-eigenvalues/)
## Linear transformations
- [Gilbert Strang - Linear transformation](http://ia802205.us.archive.org/18/items/MIT18.06S05_MP4/30.mp4)
- [Linear transformation - demo video](https://www.youtube.com/watch?v=wXCRcnbCsJA)
# Regularization and Model Selection
Suppose that for a learning problem we need to choose among a set of different models. For example, for the polynomial regression model $h_\theta(x)=g(\theta_0+\theta_1x+\theta_2x^2+\cdots+\theta_kx^k)$, how do we automatically choose the value of $k$ so as to strike a good trade-off between bias and variance? Similarly, how do we choose the bandwidth $\tau$ for locally weighted linear regression, or the parameter $C$ for an $\ell_1$-regularized support vector machine?
To simplify the discussion, assume we have a finite set of models $\mathcal{M}=\{M_1,\cdots,M_d\}$. (Extending this to infinite sets is easy; for instance, for the bandwidth $\tau$ of a locally weighted linear model, whose range is $\mathbb{R}^+$, we can simply discretize $\tau$ and consider a finite number of values. More generally, most of the algorithms discussed here can be viewed as optimization or search problems over the space of models.)
This section covers the following topics:
1. Cross validation
2. Feature selection
3. Bayesian statistics and regularization
### 1. Cross Validation
Suppose we are given a training set $S$. Recalling empirical risk minimization, an intuitive model-selection procedure is the following:
1. Train each model $M_i$ on $S$ to obtain the corresponding hypothesis $h_i$.
2. Pick the hypothesis with the smallest training error.
This algorithm can perform very badly. Consider polynomial regression: the higher the order of the model, the better it fits the training set $S$ and the lower its training error. Hence this method will always select a high-variance, high-order polynomial model.
Here is the idea behind **hold-out cross validation** (also called simple cross validation):
1. Randomly split $S$ into $S_{train}$ (say, about 70% of the data) and $S_{cv}$ (the remaining 30%). Here $S_{cv}$ is called the hold-out cross-validation set.
2. Train each model $M_i$ on $S_{train}$ only, obtaining the corresponding hypothesis $h_i$.
3. Among the $h_i$, pick the hypothesis with the smallest error $\hat{\epsilon}_{cv}(h_i)$ on the hold-out cross-validation set.
Because $S_{cv}$ was not used for training, the cross-validation error computed on it is a better estimate of the generalization error. Usually a quarter to a third of the data is held out for cross validation, and 30% is the most common choice.
Hold-out cross validation has an optional extra step: after the procedure above, the selected model $M_i$ can be retrained on the entire training set $S$. (This usually yields a slightly better model, but there are exceptions, for example when the learning algorithm is very sensitive to initial conditions or to the initial data; in that case, $M_i$ performing well on $S_{train}$ does not necessarily mean it will also perform well on $S_{cv}$, and it is better to skip this retraining step.)
A drawback of hold-out cross validation is that it "wastes" 30% of the data. Even if we retrain the model on the entire training set at the end, model selection was still carried out with only $0.7m$ training examples rather than all $m$ of them, because the models we evaluated were trained on only $0.7m$ examples. This is usually fine when data is plentiful, but with scarce data we may want a better strategy.
**k-fold cross validation** holds out less data for each training run:
1. Randomly split $S$ into $k$ disjoint subsets of $m/k$ training examples each, denoted $S_1,\cdots,S_k$.
2. For each model $M_i$: for $j=1,\cdots,k$, train on $S_1 \cup \cdots \cup S_{j-1} \cup S_{j+1} \cup \cdots \cup S_k$ (i.e., on the training set with $S_j$ removed) to obtain the hypothesis $h_{ij}$, and test it on $S_j$ to get $\hat{\epsilon}_{S_{j}}(h_{ij})$. The estimated generalization error of $M_i$ is the average of the $\hat{\epsilon}_{S_{j}}(h_{ij})$.
3. Pick the model $M_i$ with the smallest estimated generalization error and retrain it on the entire training set $S$ to obtain the final hypothesis.
The most common choice is $k=10$. Since each run holds out less data, and each model has to be trained $k$ times, k-fold cross validation is computationally more expensive than hold-out cross validation.
Although $k=10$ is the most common choice, when data is very scarce we sometimes use $k=m$ so that each run holds out as little data as possible. This special case of k-fold cross validation is called **leave-one-out cross validation**.
Finally, although cross validation is introduced here for model selection, it can also be used to evaluate a single model.
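Below is a minimal sketch of k-fold cross validation for model selection (my own illustration, not part of the original notes); it assumes numpy arrays `X` and `y`, scikit-learn-style estimators with `fit`/`predict`, and a `loss(y_true, y_pred)` function:
```
import numpy as np

def k_fold_select(models, X, y, loss, k=10, seed=None):
    """Return the model with the lowest average validation loss over k folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    scores = []
    for model in models:
        fold_losses = []
        for j in range(k):
            train = np.concatenate([folds[i] for i in range(k) if i != j])
            model.fit(X[train], y[train])
            fold_losses.append(loss(y[folds[j]], model.predict(X[folds[j]])))
        scores.append(np.mean(fold_losses))
    best = models[int(np.argmin(scores))]
    best.fit(X, y)  # retrain the selected model on the full training set
    return best
```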
### 2. Feature Selection
A special case of model selection is feature selection. Imagine a supervised learning problem with a very large number of features (possibly even $n \gg m$), of which only a small subset is actually "relevant" to the learning task. Even with the simplest linear classifier, the VC dimension of the hypothesis class is still $O(n)$, so unless the training set is quite large there is a potential risk of overfitting.
In this setting, we can use a feature selection algorithm to reduce the number of features. With $n$ features there are up to $2^n$ feature subsets, so feature selection can be cast as a model-selection problem over $2^n$ models. When $n$ is large, enumerating all $2^n$ models is prohibitively expensive, so heuristic search strategies are typically used to find a good feature subset. The following procedure is called **forward search**:
1. Initialize $\mathcal{F}=\emptyset$
2. Repeat the following two steps: (a) For $i=1,\cdots,n$, if $i \notin \mathcal{F}$, let $\mathcal{F}_i=\mathcal{F} \cup \{i\}$ and use cross validation to evaluate the feature set $\mathcal{F}_i$ (i.e., train the model using only the features in $\mathcal{F}_i$ and estimate its generalization error). (b) Set $\mathcal{F}$ to the best feature subset found in step (a).
3. Select the best feature subset evaluated over the entire search.
The outer loop can terminate either when $\mathcal{F}=\{1,\cdots,n\}$ or when $|\mathcal{F}|$ exceeds a preset threshold (for example, when we have decided in advance that the model should use at most $x$ features).
The algorithm above is an instance of **wrapper model feature selection**, because it is a procedure wrapped around the learning algorithm that repeatedly calls it to evaluate performance. Besides forward search, other search strategies exist, for example **backward search**: start with $\mathcal{F}=\{1,\cdots,n\}$ and remove one feature at a time until $\mathcal{F}=\emptyset$.
Wrapper feature-selection algorithms often work well, but they are computationally expensive: completing the full search requires $O(n^2)$ calls to the learning algorithm.
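As an illustration, here is a minimal sketch of forward search (my own illustration; it assumes a scoring routine `cv_error(feature_set)` that returns the cross-validated generalization error for a given set of feature indices):
```
def forward_search(n_features, cv_error, max_size=None):
    """Greedy forward feature selection."""
    selected = set()
    best_err, best_set = float('inf'), set()
    max_size = max_size or n_features
    while len(selected) < max_size:
        # try adding each remaining feature and keep the best addition
        candidates = [(cv_error(selected | {i}), selected | {i})
                      for i in range(n_features) if i not in selected]
        err, selected = min(candidates, key=lambda c: c[0])
        if err < best_err:
            best_err, best_set = err, set(selected)
    return best_set
```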
**Filter feature selection** is a computationally cheaper heuristic. The idea is to compute a score $S(i)$ that measures how much information each feature $x_i$ carries about the label $y$, and then simply keep the $k$ features with the highest scores.
The absolute value of the Pearson correlation can be used as the score. In practice, however, the most common choice (especially for discrete features) is the **mutual information**, defined as:
$$ MI(x_i,y) = \sum_{x_i \in \{0,1\}}\sum_{y \in \{0,1\}}p(x_i,y)\log\frac{p(x_i,y)}{p(x_i)p(y)} $$
(The equation above assumes that $x_i$ and $y$ are binary valued; more generally the sums run over the domains of the variables.) The probabilities $p(x_i,y)$, $p(x_i)$ and $p(y)$ can all be estimated from their empirical distributions on the training set.
Note that the mutual information can also be expressed as a **Kullback-Leibler (KL) divergence**:
$$ MI(x_i,y)=KL(p(x_i,y)||p(x_i)p(y)) $$
The KL divergence measures how different the distributions $p(x_i,y)$ and $p(x_i)p(y)$ are. If $x_i$ and $y$ are independent random variables, then $p(x_i,y)=p(x_i)p(y)$ and their KL divergence is zero. This matches our intuition: if $x_i$ and $y$ are independent, then $x_i$ contributes no information about $y$, and $S(i)$ should be small.
One last detail: once the scores $S(i)$ have been computed and the features ranked by importance, how do we decide how many features to keep? The standard approach is to choose $k$ via cross validation.
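A minimal sketch of the mutual-information score for binary features and labels, using empirical probabilities from the training set (my own illustration; a small constant avoids taking the log of zero):
```
import numpy as np

def mutual_information(xi, y, eps=1e-12):
    """MI(x_i, y) for binary numpy arrays xi and y."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_xy = np.mean((xi == a) & (y == b)) + eps
            p_x = np.mean(xi == a) + eps
            p_y = np.mean(y == b) + eps
            mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi
```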
### 3. Bayesian statistics and regularization
Regularization is an effective tool against overfitting. So far, parameters have mostly been fit by maximum likelihood, choosing
$$ \theta_{ML}=arg \max_{\theta}\prod_{i=1}^m p(y^{(i)}|x^{(i)};\theta) $$
Throughout that procedure, $\theta$ is viewed as a fixed but unknown constant; this is the view of **frequentist statistics**. In the frequentist view, $\theta$ is not random, merely unknown, and we estimate it with some statistical procedure (such as maximum likelihood).
In contrast, **Bayesian statistics** treats $\theta$ as a random variable whose value is unknown. We therefore posit a **prior distribution** $p(\theta)$ over $\theta$. Given a training set $S=\{(x^{(i)}, y^{(i)})\}_{i=1}^m$, to make a prediction on a new input $x$ we first compute the **posterior distribution** of $\theta$:
$$
\begin{split}
p(\theta|S) &= \frac{p(S|\theta)p(\theta)}{p(S)} \\
&= \frac{\prod_{i=1}^m p(y^{(i)}|x^{(i)},\theta)p(\theta)}{\int_{\theta}(\prod_{i=1}^m p(y^{(i)}|x^{(i)},\theta)p(\theta))d\theta}
\end{split}
$$
Note that $\theta$ is now a random variable, so it can appear as a conditioning variable, as in $p(y^{(i)}|x^{(i)},\theta)$ (rather than the earlier $p(y^{(i)}|x^{(i)};\theta)$).
To predict on a new input $x$, we then compute the posterior distribution of its label:
$$ p(y|x,S)=\int_{\theta}p(y|x,\theta)p(\theta|S)d\theta $$
Hence, if the goal is to predict the expected value of $y$ given $x$, then
$$ E[y|x,S]=\int_{y}yp(y|x,S)dy $$
The procedure above is fully Bayesian, and the computations it requires are extremely expensive. In practice, we therefore approximate the posterior over $\theta$ to simplify the computation. The most common approximation is a point estimate, the **maximum a posteriori (MAP)** estimate:
$$ \theta_{MAP}=arg \max_\theta \prod_{i=1}^m p(y^{(i)}|x^{(i)},\theta)p(\theta) $$
Note that this formula differs from maximum likelihood only by the extra factor $p(\theta)$.
In practice, the prior $p(\theta)$ is usually taken to be $\theta \sim \mathcal{N}(0,\tau^2I)$. With this prior, the fitted parameters $\theta_{MAP}$ have a much smaller norm than the maximum likelihood estimate, which makes the Bayesian MAP estimate less susceptible to overfitting.
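As an illustration of why this behaves like regularization (a standard derivation added here, not part of the original notes): taking the logarithm of the MAP objective with the prior $\theta \sim \mathcal{N}(0,\tau^2I)$ and dropping terms independent of $\theta$ gives
$$ \theta_{MAP}=arg \max_\theta \left[ \sum_{i=1}^m \log p(y^{(i)}|x^{(i)},\theta) - \frac{1}{2\tau^2}\|\theta\|_2^2 \right] $$
which is exactly the maximum likelihood objective plus an $\ell_2$ penalty on $\theta$ (ridge / weight-decay style regularization) with strength $\frac{1}{2\tau^2}$: the smaller $\tau$ is, the more strongly $\theta$ is pulled towards zero.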
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
# fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt ')
# # os.listdir('scripts')
# chars = np.array([])
# words = np.array([])
# scene_setup = np.array([])
# new_char = True
# with open(fp, 'r', encoding='utf-8') as infile:
# # print(infile.readlines())
# print(infile.read())
# for line in infile:
# # print(line)
# if line == '\n':
# continue
# if not new_char:
# words[-1] += line
# if ':' in line:
# # print([line.split(':')[0]])
# if ' ' in line:
# words[-1] += line
# continue
# words = np.append(words,[line.split(':')[1]])
# chars = np.append(chars,[line.split(':')[0]])
# new_char = False
# # print(line)
# # print(new_char)
# # print('\t\thello')
# chars.shape
# words.shape
# words[-4]
# fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt ')
# # os.listdir('scripts')
# chars = np.array([])
# words = np.array([])
# scene_setup = np.array([])
# scene = False
# new_char = True
# # def find_setup(line):
# # if '(' in line:
# # s = line
# # scene_setup += s[s.find("(")+1:s.find(")")]
# # if s.find(")") == -1:
# # scene = True
# # continue
# # if scene:
# # if ')' in line:
# # scene_setup[0] += s[s.find("(")+1:s.find(")")]
# with open(fp, 'r', encoding='utf-8') as infile:
# # print(infile.readlines())
# for line in infile:
# # print([line])
# # if line == '\n':
# # continue
# s = line
# if '(' in line:
# # print(scene_setup)
# # print( [s[s.find("(")+1:s.find(")")]])
# if s.find(")") == -1:
# scene_setup = np.append(scene_setup,[s[s.find("("):s.find(")")]])
# scene = True
# continue
# else:
# scene_setup = np.append(scene_setup,[s[s.find("("):s.find(")")+1]])
# if scene:
# # print(line)
# # print(scene_setup[-1])
# # print(line)
# if ')' in line:
# # print(True)
# print([scene_setup[-1] +s[:s.find(")")+1]])
# scene_setup[-1] += s[:s.find(")")+1]
# # print(s.find(")")+1)
# # print(s[:s.find(")")+1])
# # print(scene_setup[-1] +s[:s.find(")")+1])
# scene = False
# else:
# scene_setup[-1] += s[:]
# if not new_char:
# words[-1] += line
# if ':' in line:
# if ' ' in line:
# words[-1] += line
# continue
# words = np.append(words,[line.split(':')[1]])
# chars = np.append(chars,[line.split(':')[0]])
# new_char = False
# # print(line)
# # print(new_char)
# # print('\t\thello')
# chars.shape
# words.shape
# words[-4]
# scene_setup
print('\t\thello')
' ' in' of witchcraft. The sentence'
import re
s = "this is a ki_te message"
s[s.find("(")+1:s.find(")")]
s.find(")")
fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt')
# os.listdir('scripts')
chars = []
words = []
scene_setup = []
scene = False
new_char = True
def get_scene_setup(string):
'''
    get the scene setup (the text inside parentheses) from a line
'''
s = string
scene_setup = s[s.find("("):s.find(")")+1]
return scene_setup
def remove_scene_setup(string):
'''
    remove the scene setup (the text inside parentheses) from a line
'''
s = string
scene_setup = s[s.find("("):s.find(")")+1]
s = string.replace(scene_setup, ' ')
    return s  # return the line with the scene setup removed
# if '(' in line:
# if s.find(")") == -1:
# scene_setup += [s[s.find("("):]]
# scene = True
# continue
# else:
# scene_setup += [s[s.find("("):s.find(")")+1]]
# if scene:
# if ')' in line:
# scene_setup[-1] += s[:s.find(")")+1]
# scene = False
# else:
# scene_setup[-1] += s[:]
line_nums = []
with open(fp, 'r', encoding='utf-8') as infile:
# print(infile.readlines())
for num_l, line in enumerate(infile):
num_l += 1
# print(line)
s = line
# get scene setup
if not new_char:
if ':' in line and ' ' not in line:
pass
else:
words[-1] += line
if ':' in line:
if ' ' in line:
words[-1] += line
continue
line_nums += [num_l]
words += [line.split(':')[1]]
chars += [line.split(':')[0]]
new_char = False
df = pd.DataFrame()
# np.array(chars).shape
# np.array(words).shape
chars_words = np.array(list(zip(chars, words,line_nums)))
draft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num'])
# 2) put lines into a df & store it
# 3)
draft['scene_setup'] = draft.lines.apply(get_scene_setup)
draft
# draft['mod_lines'] = draft.lines.str.replace(draft.scene_setup,'')
'ádasda'.replace('a','')
path = "..\\scripts\\sleeping_beauty.txt"
fp = os.path.join('..\scripts', 'the_hunchback_of_notre_dame.txt')
# with open(fp, 'r', encoding='utf-8') as infile:
# print(infile.readlines())
file = open(fp, 'r')
script = file.readlines()
file.close()
start, end = 25, 1156
line_count = 0 # number of lines total
characters = set() # all the characters
lines = [] # character with their lines, index are the line numbers
#TODO: get rid of '[...]', stage setup instructions, from the lines
def remove_setup(string):
stack = False
new_str = ""
for s in string:
if s=='(':
stack = True
elif s==')':
stack = False
else:
if not stack:
new_str += s
return new_str.strip()
for i in range(start, end+1):
holder = script[i] # line that is being read
if ':' in holder:
character = holder[:-1]
characters.add(character) # record for unique characters
lines.append([character, ""])
line_count += 1
else:
if len(holder) != 0 and line_count != 0:
lines[line_count-1][1] += holder
lines
# lines = np.array(lines)
# d = {'Char':lines[:,0]
# ,'line': lines[:,1]}
# pd.DataFrame(d)
lines
m = np.array(['asdsad'])
m[-1] += 'asdsadasd'
m
```
<a href="https://colab.research.google.com/github/constantinpape/dl-teaching-resources/blob/main/exercises/classification/5_data_augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Augmentation on CIFAR10
In this exercise we will use data augmentation to increase the available training data and thus improve the network training performance. We will use the same network architecture as in the previous exercise.
## Preparation
```
# load tensorboard extension
%load_ext tensorboard
# import torch and other libraries
import os
import numpy as np
import sklearn.metrics as metrics
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.optim import Adam
!pip install cifar2png
# check if we have gpu support
# colab offers free gpus, however they are not activated by default.
# to activate the gpu, go to 'Runtime->Change runtime type'.
# Then select 'GPU' in 'Hardware accelerator' and click 'Save'
have_gpu = torch.cuda.is_available()
# we need to define the device on which torch tensors and models will be placed
if have_gpu:
print("GPU is available")
device = torch.device('cuda')
else:
print("GPU is not available, training will run on the CPU")
device = torch.device('cpu')
# run this in google colab to get the utils.py file
!wget https://raw.githubusercontent.com/constantinpape/training-deep-learning-models-for-vison/master/day1/utils.py
# we will reuse the training function, validation function and
# data preparation from the previous notebook
import utils
cifar_dir = './cifar10'
!cifar2png cifar10 cifar10
categories = os.listdir('./cifar10/train')
categories.sort()
images, labels = utils.load_cifar(os.path.join(cifar_dir, 'train'))
(train_images, train_labels,
val_images, val_labels) = utils.make_cifar_train_val_split(images, labels)
```
## Data Augmentation
The goal of data augmentation is to increase the amount of training data by transforming the input images in a way that they still resemble realistic images. Popular transformations used in data augmentation include rotations, image flips, color jitter or additive noise.
Here, we will start with two transformations:
- random flips along the vertical centerline
- random color jitters
```
# define random augmentations
import skimage.color as color
def random_flip(image, target, probability=.5):
""" Randomly mirror the image across the vertical axis.
"""
if np.random.rand() < probability:
image = np.array([np.fliplr(im) for im in image])
return image, target
def random_color_jitter(image, target, probability=.5):
""" Randomly jitter the saturation, hue and brightness of the image.
"""
    if np.random.rand() < probability:  # apply the jitter with the given probability, consistent with random_flip
# skimage expects WHC instead of CHW
image = image.transpose((1, 2, 0))
# transform image to hsv color space to apply jitter
image = color.rgb2hsv(image)
# compute jitter factors in range 0.66 - 1.5
jitter_factors = 1.5 * np.random.rand(3)
jitter_factors = np.clip(jitter_factors, 0.66, 1.5)
# apply the jitter factors, making sure we stay in correct value range
image *= jitter_factors
image = np.clip(image, 0, 1)
# transform back to rgb and CHW
image = color.hsv2rgb(image)
image = image.transpose((2, 0, 1))
return image, target
# create training dataset with augmentations
from functools import partial
train_trafos = [
utils.to_channel_first,
utils.normalize,
random_color_jitter,
random_flip,
utils.to_tensor
]
train_trafos = partial(utils.compose, transforms=train_trafos)
train_dataset = utils.DatasetWithTransform(train_images, train_labels,
transform=train_trafos)
# we don't use data augmentations for the validation set
val_dataset = utils.DatasetWithTransform(val_images, val_labels,
transform=utils.get_default_cifar_transform())
# sample augmentations
def show_image(ax, image):
# need to go back to numpy array and WHC axis order
image = image.numpy().transpose((1, 2, 0))
ax.imshow(image)
n_samples = 8
image_id = 0
fig, ax = plt.subplots(1, n_samples, figsize=(18, 4))
for sample in range(n_samples):
image, _ = train_dataset[0]
show_image(ax[sample], image)
# we reuse the model from the previous exercise
# if you want you can also use a different CNN architecture that
# you have designed in the tasks part of that exercise
model = utils.SimpleCNN(10)
model = model.to(device)
# instantiate loaders and optimizer and start tensorboard
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=25)
optimizer = Adam(model.parameters(), lr=1.e-3)
%tensorboard --logdir runs
# we have moved all the boilerplate for the full training procedure to utils now
n_epochs = 10
utils.run_cifar_training(model, optimizer,
train_loader, val_loader,
device=device, name='da1',
n_epochs=n_epochs)
# evaluate the model on test data
test_dataset = utils.make_cifar_test_dataset(cifar_dir)
test_loader = DataLoader(test_dataset, batch_size=25)
predictions, labels = utils.validate(model, test_loader, nn.NLLLoss(),
device, step=0, tb_logger=None)
print("Test accuracy:")
accuracy = metrics.accuracy_score(labels, predictions)
print(accuracy)
fig, ax = plt.subplots(1, figsize=(8, 8))
utils.make_confusion_matrix(labels, predictions, categories, ax)
```
## Normalization layers
In addition to convolutional layers and pooling layers, another important part of neural networks is the normalization layer.
These layers keep their input normalized using a learned normalization. The first type of normalization introduced has been [BatchNorm](https://arxiv.org/abs/1502.03167), which we will now add to the CNN architecture from the previous exercise.
```
import torch.nn.functional as F
class CNNBatchNorm(nn.Module):
def __init__(self, n_classes):
super().__init__()
self.n_classes = n_classes
# the convolutions
self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=12, out_channels=24, kernel_size=3)
# the pooling layer
self.pool = nn.MaxPool2d(2, 2)
# the normalization layers
self.bn1 = nn.BatchNorm2d(12)
self.bn2 = nn.BatchNorm2d(24)
# the fully connected part of the network
# after applying the convolutions and poolings, the tensor
# has the shape 24 x 6 x 6, see below
self.fc = nn.Sequential(
nn.Linear(24 * 6 * 6, 120),
nn.ReLU(),
nn.Linear(120, 60),
nn.ReLU(),
nn.Linear(60, self.n_classes)
)
self.activation = nn.LogSoftmax(dim=1)
def apply_convs(self, x):
# input image has shape 3 x 32 x 32
x = self.pool(F.relu(self.bn1(self.conv1(x))))
# shape after conv: 12 x 28 x 28
# shape after pooling: 12 x 14 X 14
x = self.pool(F.relu(self.bn2(self.conv2(x))))
# shape after conv: 24 x 12 x 12
# shape after pooling: 24 x 6 x 6
return x
def forward(self, x):
x = self.apply_convs(x)
x = x.view(-1, 24 * 6 * 6)
x = self.fc(x)
x = self.activation(x)
return x
# instantiate model and optimizer
model = CNNBatchNorm(10)
model = model.to(device)
optimizer = Adam(model.parameters(), lr=1.e-3)
n_epochs = 10
utils.run_cifar_training(model, optimizer,
train_loader, val_loader,
device=device, name='batch-norm',
n_epochs=n_epochs)
model = utils.load_checkpoin("best_checkpoint_batch-norm.tar", model, optimizer)[0]
predictions, labels = utils.validate(model, test_loader, nn.NLLLoss(),
device, step=0, tb_logger=None)
print("Test accuracy:")
accuracy = metrics.accuracy_score(labels, predictions)
print(accuracy)
fig, ax = plt.subplots(1, figsize=(8, 8))
utils.make_confusion_matrix(labels, predictions, categories, ax)
```
## Tasks and Questions
Tasks:
- Implement one or two additional augmentations and train the model again using these. You can use [the torchvision transformations](https://pytorch.org/docs/stable/torchvision/transforms.html) for inspiration.
Questions:
- Compare the model results in this exercise.
- Can you think of any transformations that make use of symmetries/invariances not present here but present in other kinds of images (e.g. biomedical images)?
Advanced:
- Check out the other [normalization layers available in pytorch](https://pytorch.org/docs/stable/nn.html#normalization-layers). Which layers could be beneficial to BatchNorm here? Try training with them and see if this improves performance further.
# KFServing Sample
In this notebook, we provide two samples for demonstrating KFServing SDK and YAML versions.
### Setup
1. Your ~/.kube/config should point to a cluster with [KFServing installed](https://github.com/kubeflow/kfserving/blob/master/docs/DEVELOPER_GUIDE.md#deploy-kfserving).
2. Your cluster's Istio Ingress gateway must be network accessible, you can do:
`kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80`.
## 1. KFServing SDK sample
Below is a sample for KFServing SDK.
It shows how to use KFServing SDK to create, get, rollout_canary, promote and delete InferenceService.
### Prerequisites
```
!pip install kfserving kubernetes --user
from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1alpha2EndpointSpec
from kfserving import V1alpha2PredictorSpec
from kfserving import V1alpha2TensorflowSpec
from kfserving import V1alpha2InferenceServiceSpec
from kfserving import V1alpha2InferenceService
from kubernetes.client import V1ResourceRequirements
```
Define the namespace where the InferenceService will be deployed. If not specified, the function below uses the namespace in which the SDK is running in the cluster; otherwise it deploys to the `default` namespace.
```
namespace = utils.get_default_target_namespace()
```
### Label namespace so you can run inference tasks in it
```
!kubectl label namespace $namespace serving.kubeflow.org/inferenceservice=enabled
```
### Define InferenceService
First define the default endpoint spec, and then define the InferenceService based on that endpoint spec.
```
api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION
default_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
tensorflow=V1alpha2TensorflowSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers',
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}
)
)
)
)
isvc = V1alpha2InferenceService(
api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(name='flower-sample', namespace=namespace),
spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec)
)
```
### Create InferenceService
Call KFServingClient to create InferenceService.
```
KFServing = KFServingClient()
KFServing.create(isvc)
```
### Check the InferenceService
```
KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
```
### Invoke Endpoint
If you want to invoke the endpoint yourself, copy and paste the code block below and execute it in your local environment. Remember that you need a `kfserving-flowers-input.json` file in the same directory when you execute it.
```
%%bash
MODEL_NAME=flower-sample
INPUT_PATH=@./kfserving-flowers-input.json
INGRESS_GATEWAY=istio-ingressgateway
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d "/" -f 3)
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH
```
Expected Output
```
* Trying 34.83.190.188...
* TCP_NODELAY set
* Connected to 34.83.190.188 (34.83.190.188) port 80 (#0)
> POST /v1/models/flowers-sample:predict HTTP/1.1
> Host: flowers-sample.default.svc.cluster.local
> User-Agent: curl/7.60.0
> Accept: */*
> Content-Length: 16201
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< content-length: 204
< content-type: application/json
< date: Fri, 10 May 2019 23:22:04 GMT
< server: envoy
< x-envoy-upstream-service-time: 19162
<
{
"predictions": [
{
"scores": [0.999115, 9.20988e-05, 0.000136786, 0.000337257, 0.000300533, 1.84814e-05],
"prediction": 0,
"key": " 1"
}
]
* Connection #0 to host 34.83.190.188 left intact
}%
```
### Add Canary to InferenceService
First define the canary endpoint spec, then roll out 10% of the traffic to the canary version and watch the rollout process.
```
canary_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
tensorflow=V1alpha2TensorflowSpec(
storage_uri='gs://kfserving-samples/models/tensorflow/flowers-2',
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}
)
)
)
)
KFServing.rollout_canary('flower-sample', canary=canary_endpoint_spec, percent=10,
namespace=namespace, watch=True, timeout_seconds=120)
```
### Rollout more traffic to canary of the InferenceService
Roll out 50% of the traffic to the canary version.
```
KFServing.rollout_canary('flower-sample', percent=50, namespace=namespace,
watch=True, timeout_seconds=120)
```
Send requests to the service 100 times.
```
%%bash
MODEL_NAME=flower-sample
INPUT_PATH=@./kfserving-flowers-input.json
INGRESS_GATEWAY=istio-ingressgateway
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d "/" -f 3)
for i in {0..100};
do
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH;
done
```
Check if the traffic is split:
```
%%bash
default_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flower-sample-predictor-default -o jsonpath='{.items[0].status.observedGeneration}')
canary_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flower-sample-predictor-canary -o jsonpath='{.items[0].status.observedGeneration}')
echo "\nThe count of traffic route to default: $default_count"
echo "The count of traffic route to canary: $canary_count"
```
### Promote Canary to Default
```
KFServing.promote('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)
```
### Delete the InferenceService
```
KFServing.delete('flower-sample', namespace=namespace)
```
## 2. Sample for Kfserving YAML
Note: You should execute all the code blocks in your local environment.
### Create the InferenceService
Apply the CRD
```
!kubectl apply -n $namespace -f kfserving-flowers.yaml
```
Expected Output
```
$ inferenceservice.serving.kubeflow.org/flowers-sample configured
```
### Run a prediction
Use `istio-ingressgateway` as your `INGRESS_GATEWAY` if you are deploying KFServing as part of Kubeflow install, and not independently.
```
%%bash
MODEL_NAME=flowers-sample
INPUT_PATH=@./kfserving-flowers-input.json
INGRESS_GATEWAY=istio-ingressgateway
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d "/" -f 3)
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH
```
If you stop making requests to the application, you should eventually see that your application scales itself back down to zero. Watch the pod until you see that it is `Terminating`. This should take approximately 90 seconds.
```
!kubectl get pods --watch -n $namespace
```
Note: To exit the watch, use `ctrl + c`.
### Canary Rollout
To test a canary rollout, you can use the kfserving-flowers-canary.yaml file.
Apply the CRD
```
!kubectl apply -n $namespace -f kfserving-flowers-canary.yaml
```
To verify that your traffic split percentage is applied correctly, you can use the following command:
```
!kubectl get inferenceservices -n $namespace
```
The output should look similar to the one below:
```
NAME READY URL DEFAULT TRAFFIC CANARY TRAFFIC AGE
flowers-sample True http://flowers-sample.default.example.com 90 10 48s
```
```
%%bash
MODEL_NAME=flowers-sample
INPUT_PATH=@./kfserving-flowers-input.json
INGRESS_GATEWAY=istio-ingressgateway
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d "/" -f 3)
for i in {0..100};
do
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH;
done
```
Verify that the traffic is split:
```
%%bash
default_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-default -o jsonpath='{.items[0].status.observedGeneration}')
canary_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-canary -o jsonpath='{.items[0].status.observedGeneration}')
echo "\nThe count of traffic route to default: $default_count"
echo "The count of traffic route to canary: $canary_count"
```
### Clean Up Resources
```
!kubectl delete inferenceservices flowers-sample -n $namespace
```
| github_jupyter |
# "Folio 03: MLP Classifier"
> "[ML 3/3] Use Neural Networks for Data Classification with Keras"
- toc: true
- branch: master
- badges: true
- image: images/ipynb/mlp_clf_main.png
- comments: false
- author: Giaco Stantino
- categories: [portfolio project, machine learning]
- hide: false
- search_exclude: true
- permalink: /blog/folio-mlp-classifier
# <center> Intro </center>
I would like to try something new. I want to build something more *explanatory* and *exploratory*... to use the experience and knowledge I gained while working on computer vision for my master's thesis.
I think this is the perfect opportunity! In the third part of the ML notebooks for the Folio series, we will build the MLP classifier with a special focus on explaining techniques to improve accuracy and regularization. In this notebook I'll try to explain some core neural network features and training with the Keras library.
***
**Task:** Classify existing clients into marketing segments based on bank statistics.
There are four client segments in the data. If you want to know more, check out [the clustering post](https://giacostantino.com/blog/folio-clustering), where the process of creating the segments is described.
| card master | bill payer | golden fish | barrel scraper |
|:---:|:---:|:---:|:---:|
| <img src="https://github.com/giastantino/repository/blob/main/images/ipynb/card_master.png?raw=true" width="200" height="100"> | <img src="https://github.com/giastantino/repository/blob/main/images/ipynb/bill_payer.png?raw=true" width="200" height="100"> | <img src="https://github.com/giastantino/repository/blob/main/images/ipynb/golden_fish.png?raw=true" width="200" height="100"> | <img src="https://github.com/giastantino/repository/blob/main/images/ipynb/barrel_scraper.png?raw=true" width="200" height="100"> |
***
```
#collapse-hide
import pandas as pd
import numpy as np
import tensorflow as tf
# # # set random seeds
import random
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
# # #
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#collapse-hide
def getDataRepo(data='demographics'):
"""
returns specified data from Repository as DataFrame
Parameters:
    data - ['demographics', 'statistics' or 'segments']; default: 'demographics'
"""
try:
url = 'https://github.com/giastantino/PortfolioProject/blob/main/Notebooks/Data/' + data + '.csv?raw=True'
client_df = pd.read_csv(url)
print(data + ' data has been read')
return client_df
except Exception as e:
print(e)
#collapse-hide
#hide-output
# read demographic data
demo_df = getDataRepo('statistics')
# read segments data
segment_df = getDataRepo('segments')
# create df for clients with known segment
client_df = demo_df.merge(segment_df, on='client_id')
# create df for clients with unknown segment
unknown_df = demo_df[~demo_df['client_id'].isin(segment_df['client_id'])].reset_index(drop=True)
```
# Data transformation
We are going to use a power transform, as in the clustering notebook.
Let's check if anything is missing in the imported data.
```
#collapse-hide
print(f"NaN values in the data:\n-----------------\n{client_df.isna().sum()}")
#hide
print(client_df.info())
```
***
Define data X and targets y for our neural network.
```
from sklearn.model_selection import train_test_split
# define target y and data X
X = client_df.drop(['client_id','segment', 'attrition_flag'],axis=1).values
y = client_df[['segment']]
```
Transform the data and encode the targets. We need to one-hot encode the target classes, as the neural network will return a 'probability' for each class in a separate output neuron.
```
from sklearn.preprocessing import PowerTransformer, LabelEncoder
from keras.utils import np_utils
# encode data X
ptr = PowerTransformer()
X = ptr.fit_transform(X)
# encode target y
encoder = LabelEncoder()
encoded_y = encoder.fit_transform(np.ravel(y))
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_y)
```
Split the data into training and testing datasets.
```
from sklearn.model_selection import train_test_split
# define the 80_20 train_test splits
X_train, X_test, y_train, y_test = train_test_split(X, dummy_y, test_size=0.2, random_state=42)
```
# <center> Baseline model </center>
In this notebook we are using a Multilayer Perceptron as our classifier. Let's define the base model with 10 neurons in the hidden layer.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout
# define number of input features
input_features = X_train.shape[1]
# uncompiled model
def uncompiled_base(input_dim=4):
# create model
model = Sequential()
model.add(Dense(10, input_dim=input_dim, activation='linear'))
model.add(Dense(4, activation='softmax'))
return model
# baseline model
def baseline_model(input_dim):
model = uncompiled_base(input_dim=input_dim)
# Compile model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
# define model
mlp = baseline_model(input_dim=input_features)
# checkup model
mlp.summary()
# train baseline model
history = mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)
print(f'BASELINE\taccuracy: {history.history["accuracy"][-1]*100:.2f} \t validation accuracy: {history.history["val_accuracy"][-1]*100:.2f}')
```
The baseline model with a linear activation scores around 97% in training. Let's try to beat it with some modifications.
# <center> Activation Function </center>
The activation function is applied to the weighted sum of all inputs. For the classic perceptron it was a unipolar step function with a binary output. However, in modern neural networks, and thus in MLPs as well, different functions are commonly used, such as ReLU or sigmoid.
<center><img src="https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_neuron.png?raw=true" width="500"></center>
***
Let's consider three activation functions and how they affect our model: [sigmoid](#), [ReLU](#) and [softplus](#)
<center><img src="https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_activationfunction.png?raw=true" width="500"></center>
The sigmoid function squashes any real number into the range between 0 and 1, mathematically 1 / (1 + np.exp(-x)). ReLU returns 0 for negative inputs and the input itself for positive inputs. Softplus is a smooth approximation of the ReLU function.
> Note: In the model we are using the softmax function in the final layer to make multiclass predictions.
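To make the shapes concrete, here is a minimal NumPy sketch (separate from the Keras models below) of the three activation functions evaluated at a few sample points:
```
# A minimal NumPy sketch of the three activation functions discussed above,
# evaluated at a few sample points.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))        # squashes inputs into (0, 1)

def relu(x):
    return np.maximum(0, x)            # 0 for negative inputs, identity for positive ones

def softplus(x):
    return np.log(1 + np.exp(x))       # smooth approximation of ReLU

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [('sigmoid', sigmoid), ('relu', relu), ('softplus', softplus)]:
    print(f"{name:9s}", np.round(fn(x), 3))
```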
```
def get_active_model(input_dim=4, act_fun='relu'):
# create model
model = Sequential()
model.add(Dense(10, input_dim=input_dim, activation=act_fun))
model.add(Dense(4, activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
# define mlp with sigmoid activation
sigm_mlp = get_active_model(input_dim=input_features, act_fun='sigmoid')
# define mlp with relu activation
relu_mlp = get_active_model(input_dim=input_features, act_fun='relu')
# define mlp with softplus activation
soft_mlp = get_active_model(input_dim=input_features, act_fun='softplus')
#collapse-hide
# train sigmoid model
sigm_story = sigm_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)
print(f'Sigmoid model \taccuracy: {sigm_story.history["accuracy"][-1]*100:.2f} \t validation accuracy: {sigm_story.history["val_accuracy"][-1]*100:.2f}')
# train relu model
relu_story = relu_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)
print(f'ReLU model \taccuracy: {relu_story.history["accuracy"][-1]*100:.2f} \t validation accuracy: {relu_story.history["val_accuracy"][-1]*100:.2f}')
# train softplus model
soft_story = soft_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)
print(f'Softplus model \taccuracy: {soft_story.history["accuracy"][-1]*100:.2f} \t validation accuracy: {soft_story.history["val_accuracy"][-1]*100:.2f}')
```
In our case the model with the ReLU activation function outperforms the other MLPs. It has about 0.6% better accuracy than the baseline model with the linear function, which is a 17% error reduction! Let's use the ReLU function for further experiments.
> Warning: Sometimes sigmoid- or softplus-based models may outperform ReLU-based models when the architecture changes, e.g. when adding extra layers. Therefore it is important to conduct experiments in a methodical fashion.
# <center> Number of layers and neurons </center>
<center><img src="https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_layers.png?raw=true" width="500"></center>
MLP stands for multilayer perceptron, and so far we have been using a model with an input layer, one hidden layer, and an output layer. Let's try adding hidden layers and increasing the number of neurons.
```
def get_model(input_dim=4, hidden_layers=[], act_fun='relu'):
# create model
model = Sequential()
# add initial layer
model.add(Dense(hidden_layers[0], input_dim=input_dim, activation=act_fun))
# add hidden layers
for layer in hidden_layers[1:]:
model.add(Dense(layer, activation=act_fun))
# add final layer
model.add(Dense(4, activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
# define hidden layers to inspect
hidden = dict()
hidden[0] = [10]
hidden[1] = [50]
hidden[2] = [10, 10]
hidden[3] = [50, 50]
hidden[4] = [100, 100]
hidden[5] = [10, 10, 10]
hidden[6] = [20, 20, 20]
hidden[7] = [50, 50, 50]
hidden[8] = [100, 100, 100]
hidden[9] = [100, 50, 100]
# get accuracies for models with hidden layers
for hid in hidden:
model = get_model(input_dim=input_features, hidden_layers=hidden[hid])
story = model.fit(X_train, y_train, epochs=20, batch_size=50, validation_split=0.2, verbose=0)
print(f'hidden layers: {hidden[hid]} \taccuracy: {story.history["accuracy"][-1]*100:.2f} \t validation accuracy: {story.history["val_accuracy"][-1]*100:.2f}')
```
We can see that increasing the number of layers and neurons has a positive impact on training accuracy; however, the effect is not as obvious for the validation subset. Models bigger than the `[50, 50]` hidden layers do better in training but not in validation - this suggests that the bigger models are overfitting. In other words, bigger models are capable of learning the training data by heart, which does not generalize to test data.
Let's further inspect the best performing model: `[input, 50, 50, output]`
# <center> Epochs and Batches </center>
We can increase model performance with more training epochs - the number of times the MLP has seen the training data. Moreover, we can also manipulate the number of data points the model sees in each iteration within an epoch - the so-called batches of data.
## Batch Size
Let's inspect the validation accuracy of the chosen model with different batch sizes: online training (batch_size=1), 10, 20, 50, 100.
```
# batch sizes
batches = [1, 10, 20, 50, 100]
# model results history dictionary
history = dict()
# define model
mlp = get_model(input_dim=input_features, hidden_layers=[50,50])
# iterate through batch sizes
for b in batches:
story = mlp.fit(X_train, y_train, validation_split=0.2, epochs=100, batch_size=b, verbose=0)
history[b] = story.history['val_accuracy']
#collapse-hide
# plot validation accuracies
fig, ax = plt.subplots(figsize=(14, 8))
line = sns.lineplot(data = pd.DataFrame(history))
# setup the plot
line.xaxis.grid(True)
line.set_xlabel('Epoch', fontsize=14, labelpad = 10)
line.set_ylabel('Accuracy', fontsize=14, labelpad = 10)
line.set_title('Validation accuracy', fontsize=20, pad=15)
line.set_xlim(1, 99)
line.set_ylim(0.955, None);
```
In the above figure we can clearly see that the accuracy score for `batch_size = 1` is very irregular and thus not predictable. On the other hand, for large batch sizes the score is stable, which suggests that the network doesn't learn new features in subsequent epochs. Therefore we are going to proceed with `batch_size = 10`, which looks like a good middle ground in our case.
## Overfitting vs number of epochs
Having chosen the architecture and batch size, we shall check how our model behaves in terms of loss (cost function): how many epochs improve the model's predictions and whether overfitting occurs.
```
# fit the model
model = get_model(input_dim=input_features, hidden_layers=[50,50],)
history = model.fit(X_train, y_train, validation_split=0.2, epochs=150, batch_size=10, verbose=0)
#collapse-hide
# define dataframe with model metrics
df = pd.DataFrame(history.history, columns=history.history.keys())
# setup figure
fig, (ax1, ax2) = plt.subplots(figsize=(22, 6), ncols=2)
# plot model accuracy
fig1 = sns.lineplot(data=df[['accuracy','val_accuracy']], palette='YlGnBu', ax=ax1)
fig1.set_title('Model accuracy', fontsize=20, pad=15)
fig1.set_ylabel('accuracy', fontsize=14, labelpad = 10)
fig1.set_xlabel('epoch', fontsize=14, labelpad = 10)
fig1.legend(['train', 'valid'], loc='upper left')
# plot model loss function
fig2 = sns.lineplot(data=df[['loss','val_loss']], palette='YlGnBu', ax=ax2)
fig2.set_title('Loss function', fontsize=20, pad=15)
fig2.set_ylabel('loss', fontsize=14, labelpad = 10)
fig2.set_xlabel('epoch', fontsize=14, labelpad = 10)
fig2.legend(['train', 'valid'], loc='upper left');
```
The above graphs plot the model's accuracy on the left and the value of the loss function on the right. We can see that the model doesn't improve in terms of accuracy after ca. 60 epochs. On the other hand, it starts to overfit the data after ca. 40 epochs - the loss curves for training and validation spread out in different directions.
In order to prevent the model from overfitting we will use some regularization techniques.
# Regularization
By this point in the notebook we can see how complex neural networks are. This complexity makes them more prone to overfitting. Regularization is a technique which makes slight modifications to the learning algorithm so that the model generalizes better. This in turn improves the model's performance on unseen data as well.
## Dropout
This is one of the most interesting types of regularization techniques. It also produces very good results and is consequently the most frequently used regularization technique in the field of deep learning.
So what does dropout do? At every iteration, it randomly selects some nodes and removes them along with all of their incoming and outgoing connections as shown below.
<center><img src="https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_dropout.png?raw=true" width="300"></center>
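As a minimal illustration (this cell is separate from the models above and assumes TensorFlow 2.x), `tf.keras.layers.Dropout` zeroes a random subset of activations during training and rescales the survivors by 1/(1 - rate), while passing inputs through unchanged at inference time:
```
# A minimal illustration of Dropout behavior, separate from the models above.
import numpy as np
import tensorflow as tf

tf.random.set_seed(42)
x = np.ones((1, 10), dtype="float32")      # ten identical activations
drop = tf.keras.layers.Dropout(rate=0.5)

print(drop(x, training=False).numpy())     # inference: all ones, unchanged
print(drop(x, training=True).numpy())      # training: roughly half zeroed, the rest scaled to 2.0
```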
## Early stopping
Early stopping is a kind of cross-validation strategy where we keep one part of the training set as a validation set. When we see that the performance on the validation set is getting worse, we immediately stop training the model.
<center><img src="https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_earlystopping.png?raw=true" width="500"></center>
# <center> Summary </center>
Let's sum up everything in a final code cell. First, we define the `get_model` function with hidden layers and a dropout rate. Then, we create callback objects for early stopping and for saving the checkpoint model - the best model so far based on test accuracy.
```
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
from tqdm.keras import TqdmCallback
# get model
def get_model(input_dim=4, layers=[], drop_rate=0.2, act_fun='relu'):
# create model
model = Sequential()
# add initial layer with dropout
model.add(Dense(layers[0], input_dim=input_dim, activation=act_fun))
model.add(Dropout(drop_rate))
# add hidden layers
for layer in layers[1:]:
model.add(Dense(layer, activation=act_fun))
model.add(Dropout(drop_rate))
# add final layer
model.add(Dense(4, activation='softmax'))
# compile the model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
# early stopping
es = EarlyStopping(monitor="val_loss",
patience=20, # how long to wait for loss to decrease
verbose=0,
mode="min",
restore_best_weights=False,
)
# model checkpoints
mc = ModelCheckpoint('best_model.h5',
monitor='val_accuracy',
mode='max',
verbose=0, # set 1 to see when model is saved
save_best_only=True
)
# define model
model = get_model(input_dim=input_features, layers=[50,50], drop_rate=0.2)
# fit model
history = model.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=150,
verbose=0,
callbacks=[es, mc, TqdmCallback(verbose=1)],
)
# load the saved model
saved_model = load_model('best_model.h5')
# evaluate the model
_, train_acc = saved_model.evaluate(X_train, y_train, verbose=0)
_, test_acc = saved_model.evaluate(X_test, y_test, verbose=0)
print('\nTrain: %.3f, Test: %.3f' % (train_acc, test_acc))
```
We can see that the early stopping strategy worked and stopped the training early. The best model from the completed epochs, and thus our final model, has the highest accuracy of all the neural networks in this notebook.
Let's plot the confusion matrix of the predictions made by the final MLP.
```
#collapse-hide
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
y_pred = saved_model.predict(X_test)
y_test_class = np.argmax(y_test, axis=1)
y_pred_class = np.argmax(y_pred, axis=1)
fig, ax = plt.subplots(figsize=(6, 6))
ConfusionMatrixDisplay.from_predictions(y_test_class, y_pred_class, cmap='YlGnBu_r', colorbar = False, ax = ax)
ax.set_title('Estimator Confusion Matrix', fontsize=20, pad=20)
ax.set_xlabel('Predicited', fontsize = 14, labelpad=10)
ax.set_ylabel('True', fontsize = 16, labelpad=10);
```
Our neural network not only has high accuracy, but the precision of its predictions is also very high, as shown in the confusion matrix above.
# Final thoughts
The goal was to create an MLP classifier. To complete the task we used Keras. We have created a model that is very good at learning the data and, thanks to regularization techniques, it generalizes and predicts with high precision and accuracy on unseen data.
| github_jupyter |
```
!pip install splinter
! pip install bs4
! pip install datetime
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup as bs
from datetime import datetime
import os
import time
! brew cask install chromedriver
# Capture path to Chrome Driver & Initialize browser
browser = Browser("chrome", headless=False)
# Page to Visit
url = "https://mars.nasa.gov/news/"
browser.visit(url)
#using bs to write it into html
html = browser.html
soup = bs(html,"html.parser")
news_title = soup.find("div",class_="content_title").text
news_p = soup.find("div", class_="article_teaser_body").text
print(f"Title: {news_title}")
print(f"Para: {news_p}")
# Mars Image
url_image = "https://www.jpl.nasa.gov/spaceimages/?search=&category=featured#submit"
browser.visit(url_image)
from urllib.parse import urlsplit
base_url = "{0.scheme}://{0.netloc}/".format(urlsplit(url_image))
print(base_url)
#Design an xpath selector to grab the image
xpath = "//*[@id=\"page\"]/section[3]/div/ul/li[1]/a/div/div[2]/img"
#Use splinter to click on the mars featured image
#to bring the full resolution image
results = browser.find_by_xpath(xpath)
img = results[0]
img.click()
#get image url using BeautifulSoup
html_image = browser.html
soup = bs(html_image, "html.parser")
img_url = soup.find("img", class_="fancybox-image")["src"]
featured_image_url = base_url + img_url
print(featured_image_url)
#get mars weather's latest tweet from the website
url_weather = "https://twitter.com/marswxreport?lang=en"
browser.visit(url_weather)
html_weather = browser.html
soup = bs(html_weather, "html.parser")
mars_weather = soup.find("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text").text
print(mars_weather)
# Mars Facts
url_facts = "https://space-facts.com/mars/"
table = pd.read_html(url_facts)
table[0]
df_mars_facts = table[0]
df_mars_facts.columns = ["Parameter", "Values"]
df_mars_facts.set_index(["Parameter"])
mars_html_table = df_mars_facts.to_html()
mars_html_table = mars_html_table.replace("\n", "")
mars_html_table
# Mars Hemisphere
url_hemisphere = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(url_hemisphere)
#Get base url
hemisphere_base_url = "{0.scheme}://{0.netloc}/".format(urlsplit(url_hemisphere))
print(hemisphere_base_url)
#1 Hemisphere
hemisphere_img_urls = []
results = browser.find_by_xpath( "//*[@id='product-section']/div[2]/div[1]/a/img").click()
time.sleep(2)
cerberus_open_click = browser.find_by_xpath( "//*[@id='wide-image-toggle']").click()
time.sleep(1)
cerberus_image = browser.html
soup = bs(cerberus_image, "html.parser")
cerberus_url = soup.find("img", class_="wide-image")["src"]
cerberus_img_url = hemisphere_base_url + cerberus_url
print(cerberus_img_url)
cerberus_title = soup.find("h2",class_="title").text
print(cerberus_title)
back_button = browser.find_by_xpath("//*[@id='splashy']/div[1]/div[1]/div[3]/section/a").click()
cerberus = {"image title":cerberus_title, "image url": cerberus_img_url}
hemisphere_img_urls.append(cerberus)
#2 Hemisphere
results1 = browser.find_by_xpath( "//*[@id='product-section']/div[2]/div[2]/a/img").click()
time.sleep(2)
schiaparelli_open_click = browser.find_by_xpath( "//*[@id='wide-image-toggle']").click()
time.sleep(1)
schiaparelli_image = browser.html
soup = bs(schiaparelli_image, "html.parser")
schiaparelli_url = soup.find("img", class_="wide-image")["src"]
schiaparelli_img_url = hemisphere_base_url + schiaparelli_url
print(schiaparelli_img_url)
schiaparelli_title = soup.find("h2",class_="title").text
print(schiaparelli_title)
back_button = browser.find_by_xpath("//*[@id='splashy']/div[1]/div[1]/div[3]/section/a").click()
schiaparelli = {"image title":schiaparelli_title, "image url": schiaparelli_img_url}
hemisphere_img_urls.append(schiaparelli)
#3 Hemisphere
results1 = browser.find_by_xpath( "//*[@id='product-section']/div[2]/div[3]/a/img").click()
time.sleep(2)
syrtis_major_open_click = browser.find_by_xpath( "//*[@id='wide-image-toggle']").click()
time.sleep(1)
syrtis_major_image = browser.html
soup = bs(syrtis_major_image, "html.parser")
syrtis_major_url = soup.find("img", class_="wide-image")["src"]
syrtis_major_img_url = hemisphere_base_url + syrtis_major_url
print(syrtis_major_img_url)
syrtis_major_title = soup.find("h2",class_="title").text
print(syrtis_major_title)
back_button = browser.find_by_xpath("//*[@id='splashy']/div[1]/div[1]/div[3]/section/a").click()
syrtis_major = {"image title":syrtis_major_title, "image url": syrtis_major_img_url}
hemisphere_img_urls.append(syrtis_major)
#4 Hemisphere
results1 = browser.find_by_xpath( "//*[@id='product-section']/div[2]/div[4]/a/img").click()
time.sleep(2)
valles_marineris_open_click = browser.find_by_xpath( "//*[@id='wide-image-toggle']").click()
time.sleep(1)
valles_marineris_image = browser.html
soup = bs(valles_marineris_image, "html.parser")
valles_marineris_url = soup.find("img", class_="wide-image")["src"]
valles_marineris_img_url = hemisphere_base_url + valles_marineris_url
print(valles_marineris_img_url)
valles_marineris_title = soup.find("h2",class_="title").text
print(valles_marineris_title)
back_button = browser.find_by_xpath("//*[@id='splashy']/div[1]/div[1]/div[3]/section/a").click()
valles_marineris = {"image title":valles_marineris_title, "image url": valles_marineris_img_url}
hemisphere_img_urls.append(valles_marineris)
hemisphere_img_urls
```
| github_jupyter |
```
# Realize ResAE
# The decoder has a structure symmetric to the encoder, but its weights and biases are initialized independently.
# Let's have a try.
# Display the result
import matplotlib
matplotlib.use('Agg')
%matplotlib inline
import matplotlib.pyplot as plt
import utils
import Block
import os
import time
import numpy as np
import tensorflow as tf
import tensorflow.contrib.layers as layers
# Step 1: load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data",
one_hot=True,
validation_size=2000)
x_in = tf.placeholder(tf.float32, shape=[None,28,28,1],name='inputs')
x_out = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='outputs')
code_length = 128
code = tf.placeholder(tf.float32, shape=[None,code_length],name='code')
is_training = tf.placeholder(tf.bool, name='is_training')
```
## Encoder part
```
# Residual blocks
encode_flag = True
net = x_in
blocks_en = [
[(16, 8, 2)],
[(32, 16, 2)],
]
for i, block in enumerate(blocks_en):
block_params = utils.get_block(block, is_training=is_training)
# build the net
block_obj = Block.Block(
inputs = net,
block_params = block_params,
is_training = is_training,
encode_flag=encode_flag,
scope = 'block'+str(i),
summary_flag = True
)
net = block_obj.get_block()
# get shape of last block
encode_last_block_shape = net.get_shape()
# flatten layer
with tf.name_scope('flatten_en'):
net = layers.flatten(net)
tf.summary.histogram('flatten_en',net)
flatten_length = int(net.get_shape()[-1])
```
## Encoder layer
```
with tf.name_scope('encoder_layer'):
net = layers.fully_connected(
inputs = net,
num_outputs=code_length,
activation_fn=tf.nn.relu,
)
tf.summary.histogram('encode_layer',net)
code = net
```
## Decoder block
```
encode_last_block_shape[2]
with tf.name_scope('flatten_de'):
net = layers.fully_connected(
inputs = net,
num_outputs=flatten_length,
activation_fn=tf.nn.relu,
)
    tf.summary.histogram('flatten_de', net)
# flatten to convolve
with tf.name_scope('flatten_to_conv'):
net = tf.reshape(
net,
[-1, int(encode_last_block_shape[1]),
int(encode_last_block_shape[2]), int(encode_last_block_shape[3])])
net.get_shape()
# Residual blocks
blocks_de = [
[(16, 16, 2)],
[(1, 8, 2)],]
for i, block in enumerate(blocks_de):
block_params = utils.get_block(block, is_training=is_training)
# build the net
block_obj = Block.Block(
inputs = net,
block_params = block_params,
is_training = is_training,
encode_flag=False,
scope = 'block'+str(i),
summary_flag = True
)
net = block_obj.get_block()
x_out = net
# loss function
with tf.name_scope('loss'):
cost = tf.reduce_mean(tf.square(x_out-x_in))
tf.summary.scalar('loss', cost)
# learning rate
with tf.name_scope('learning_rate'):
init_lr = tf.placeholder(tf.float32, name='LR')
global_step = tf.placeholder(tf.float32, name="global_step")
decay_step = tf.placeholder(tf.float32, name="decay_step")
decay_rate = tf.placeholder(tf.float32, name="decay_rate")
learning_rate = tf.train.exponential_decay(
learning_rate = init_lr ,
global_step = global_step,
decay_steps = decay_step,
decay_rate = decay_rate,
staircase=False,
name=None)
def feed_dict(train,batchsize=100,drop=0.5, lr_dict=None):
"""Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
if train:
xs, _ = mnist.train.next_batch(batchsize)
f_dict = {x_in: xs.reshape([-1,28,28,1]),
is_training: True}
f_dict.update(lr_dict)
else:
# validation
x_val,_ = mnist.validation.images,mnist.validation.labels
f_dict = {x_in: x_val.reshape([-1,28,28,1]),
is_training: False}
return f_dict
# Train step
# note: should add update_ops to the train graph
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
with tf.name_scope('train'):
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# Merge all the summaries and write to logdir
logdir = './log'
if not os.path.exists(logdir):
os.mkdir(logdir)
merged = tf.summary.merge_all()
# Initialize the variables
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter(logdir + '/train',
sess.graph)
val_writer = tf.summary.FileWriter(logdir + '/validation',
sess.graph)
# Training the model by repeatedly running train_step
import time
epochs = 100
batchsize= 100
num_batches = mnist.train.images.shape[0] // batchsize
# num_batches = 200
lr_init = 0.001
d_rate = 0.9
x_epoch = np.arange(0,epochs,1)
y_loss_trn = np.zeros(x_epoch.shape)
y_loss_val = np.zeros(x_epoch.shape)
# Init all variables
timestamp = time.strftime('%Y-%m-%d: %H:%M:%S', time.localtime(time.time()))
print("[%s]: Epochs Trn_loss Val_loss" % (timestamp))
for i in range(epochs):
    lr_dict = {init_lr: lr_init, global_step: i,
               decay_step: batchsize,
               decay_rate: d_rate}
loss_trn_all = 0.0
for b in range(num_batches):
train_dict = feed_dict(True,lr_dict=lr_dict)
# train
summary_trn, _, loss_trn = sess.run(
[merged, train_step, cost],
feed_dict=train_dict)
loss_trn_all += loss_trn
y_loss_trn[i] = loss_trn_all / num_batches
train_writer.add_summary(summary_trn, i)
# validation
val_dict = feed_dict(False)
summary_val, y_loss_val[i] = sess.run(
[merged, cost],feed_dict=val_dict)
val_writer.add_summary(summary_val, i)
if i % 10 == 0:
timestamp = time.strftime('%Y-%m-%d: %H:%M:%S', time.localtime(time.time()))
print('[%s]: %d %.4f %.4f' % (timestamp, i,
y_loss_trn[i], y_loss_val[i]))
plt.rcParams["figure.figsize"] = [8.0,6.0]
plt.plot(x_epoch, y_loss_trn)
plt.plot(x_epoch, y_loss_val)
plt.legend(['Training loss', 'Validation loss'])
plt.xlabel('Epochs')
plt.ylabel('Loss')
import pickle
data_dict = {
"x_epoch": x_epoch,
"y_loss_trn": y_loss_trn,
"y_loss_val": y_loss_val,
}
with open("./result_resae13.pkl", 'wb') as fp:
pickle.dump(data_dict, fp)
# test a image
img, _ = mnist.validation.next_batch(10)
img = img.reshape(-1,28,28,1)
img_est = sess.run(x_out, feed_dict={x_in: img, is_training: False})
def gen_norm(img):
return (img-img.min())/(img.max() - img.min())
n_examples = 10
fig, axs = plt.subplots(3, n_examples, figsize=(10, 2))
for example_i in range(n_examples):
# raw
axs[0][example_i].imshow(np.reshape(img[example_i, :], (28, 28)), cmap='gray')
axs[0][example_i].axis('off')
# learned
axs[1][example_i].imshow(np.reshape(img_est[example_i, :], (28, 28)), cmap='gray')
axs[1][example_i].axis('off')
# residual
norm_raw = gen_norm(np.reshape(img[example_i, :], (28, 28)))
norm_est = gen_norm(np.reshape(img_est[example_i, :],(28, 28)))
axs[2][example_i].imshow(norm_raw - norm_est, cmap='gray')
axs[2][example_i].axis('off')
fig.show()
plt.draw()
```
| github_jupyter |
```
%autosave 0
```
# 4. Evaluation Metrics for Classification
In the previous session we trained a model for predicting churn. How do we know if it's good?
## 4.1 Evaluation metrics: session overview
* Dataset: https://www.kaggle.com/blastchar/telco-customer-churn
* https://raw.githubusercontent.com/alexeygrigorev/mlbookcamp-code/master/chapter-03-churn-prediction/WA_Fn-UseC_-Telco-Customer-Churn.csv
*Metric* - a function that compares the predictions with the actual values and outputs a single number that tells us how good the predictions are
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
df = pd.read_csv('data-week-3.csv')
df.columns = df.columns.str.lower().str.replace(' ', '_')
categorical_columns = list(df.dtypes[df.dtypes == 'object'].index)
for c in categorical_columns:
df[c] = df[c].str.lower().str.replace(' ', '_')
df.totalcharges = pd.to_numeric(df.totalcharges, errors='coerce')
df.totalcharges = df.totalcharges.fillna(0)
df.churn = (df.churn == 'yes').astype(int)
df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=1)
df_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=1)
df_train = df_train.reset_index(drop=True)
df_val = df_val.reset_index(drop=True)
df_test = df_test.reset_index(drop=True)
y_train = df_train.churn.values
y_val = df_val.churn.values
y_test = df_test.churn.values
del df_train['churn']
del df_val['churn']
del df_test['churn']
numerical = ['tenure', 'monthlycharges', 'totalcharges']
categorical = [
'gender',
'seniorcitizen',
'partner',
'dependents',
'phoneservice',
'multiplelines',
'internetservice',
'onlinesecurity',
'onlinebackup',
'deviceprotection',
'techsupport',
'streamingtv',
'streamingmovies',
'contract',
'paperlessbilling',
'paymentmethod',
]
dv = DictVectorizer(sparse=False)
train_dict = df_train[categorical + numerical].to_dict(orient='records')
X_train = dv.fit_transform(train_dict)
model = LogisticRegression()
model.fit(X_train, y_train)
val_dict = df_val[categorical + numerical].to_dict(orient='records')
X_val = dv.transform(val_dict)
y_pred = model.predict_proba(X_val)[:, 1]
churn_decision = (y_pred >= 0.5)
(y_val == churn_decision).mean()
```
## 4.2 Accuracy and dummy model
* Evaluate the model on different thresholds
* Check the accuracy of dummy baselines
```
len(y_val)
(y_val == churn_decision).mean()
1132/ 1409
from sklearn.metrics import accuracy_score
accuracy_score(y_val, y_pred >= 0.5)
thresholds = np.linspace(0, 1, 21)
scores = []
for t in thresholds:
score = accuracy_score(y_val, y_pred >= t)
print('%.2f %.3f' % (t, score))
scores.append(score)
plt.plot(thresholds, scores)
from collections import Counter
Counter(y_pred >= 1.0)
1 - y_val.mean()
```
## 4.3 Confusion table
* Different types of errors and correct decisions
* Arranging them in a table
```
actual_positive = (y_val == 1)
actual_negative = (y_val == 0)
t = 0.5
predict_positive = (y_pred >= t)
predict_negative = (y_pred < t)
tp = (predict_positive & actual_positive).sum()
tn = (predict_negative & actual_negative).sum()
fp = (predict_positive & actual_negative).sum()
fn = (predict_negative & actual_positive).sum()
confusion_matrix = np.array([
[tn, fp],
[fn, tp]
])
confusion_matrix
(confusion_matrix / confusion_matrix.sum()).round(2)
```
## 4.4 Precision and Recall
```
p = tp / (tp + fp)
p
r = tp / (tp + fn)
r
```
## 4.5 ROC Curves
### TPR and FPR
```
tpr = tp / (tp + fn)
tpr
fpr = fp / (fp + tn)
fpr
scores = []
thresholds = np.linspace(0, 1, 101)
for t in thresholds:
actual_positive = (y_val == 1)
actual_negative = (y_val == 0)
predict_positive = (y_pred >= t)
predict_negative = (y_pred < t)
tp = (predict_positive & actual_positive).sum()
tn = (predict_negative & actual_negative).sum()
fp = (predict_positive & actual_negative).sum()
fn = (predict_negative & actual_positive).sum()
scores.append((t, tp, fp, fn, tn))
columns = ['threshold', 'tp', 'fp', 'fn', 'tn']
df_scores = pd.DataFrame(scores, columns=columns)
df_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn)
df_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn)
plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR')
plt.plot(df_scores.threshold, df_scores['fpr'], label='FPR')
plt.legend()
```
### Random model
```
np.random.seed(1)
y_rand = np.random.uniform(0, 1, size=len(y_val))
((y_rand >= 0.5) == y_val).mean()
def tpr_fpr_dataframe(y_val, y_pred):
scores = []
thresholds = np.linspace(0, 1, 101)
for t in thresholds:
actual_positive = (y_val == 1)
actual_negative = (y_val == 0)
predict_positive = (y_pred >= t)
predict_negative = (y_pred < t)
tp = (predict_positive & actual_positive).sum()
tn = (predict_negative & actual_negative).sum()
fp = (predict_positive & actual_negative).sum()
fn = (predict_negative & actual_positive).sum()
scores.append((t, tp, fp, fn, tn))
columns = ['threshold', 'tp', 'fp', 'fn', 'tn']
df_scores = pd.DataFrame(scores, columns=columns)
df_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn)
df_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn)
return df_scores
df_rand = tpr_fpr_dataframe(y_val, y_rand)
plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR')
plt.plot(df_rand.threshold, df_rand['fpr'], label='FPR')
plt.legend()
```
### Ideal model
```
num_neg = (y_val == 0).sum()
num_pos = (y_val == 1).sum()
num_neg, num_pos
y_ideal = np.repeat([0, 1], [num_neg, num_pos])
y_ideal
y_ideal_pred = np.linspace(0, 1, len(y_val))
1 - y_val.mean()
accuracy_score(y_ideal, y_ideal_pred >= 0.726)
df_ideal = tpr_fpr_dataframe(y_ideal, y_ideal_pred)
df_ideal[::10]
plt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR')
plt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR')
plt.legend()
```
### Putting everything together
```
plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR', color='black')
plt.plot(df_scores.threshold, df_scores['fpr'], label='FPR', color='blue')
plt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR ideal')
plt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR ideal')
# plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR random', color='grey')
# plt.plot(df_rand.threshold, df_rand['fpr'], label='FPR random', color='grey')
plt.legend()
plt.figure(figsize=(5, 5))
plt.plot(df_scores.fpr, df_scores.tpr, label='Model')
plt.plot([0, 1], [0, 1], label='Random', linestyle='--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred)
plt.figure(figsize=(5, 5))
plt.plot(fpr, tpr, label='Model')
plt.plot([0, 1], [0, 1], label='Random', linestyle='--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
```
## 4.6 ROC AUC
* Area under the ROC curve - useful metric
* Interpretation of AUC
```
from sklearn.metrics import auc
auc(fpr, tpr)
auc(df_scores.fpr, df_scores.tpr)
auc(df_ideal.fpr, df_ideal.tpr)
fpr, tpr, thresholds = roc_curve(y_val, y_pred)
auc(fpr, tpr)
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred)
neg = y_pred[y_val == 0]
pos = y_pred[y_val == 1]
import random
n = 100000
success = 0
for i in range(n):
pos_ind = random.randint(0, len(pos) - 1)
neg_ind = random.randint(0, len(neg) - 1)
if pos[pos_ind] > neg[neg_ind]:
success = success + 1
success / n
n = 50000
np.random.seed(1)
pos_ind = np.random.randint(0, len(pos), size=n)
neg_ind = np.random.randint(0, len(neg), size=n)
(pos[pos_ind] > neg[neg_ind]).mean()
```
## 4.7 Cross-Validation
* Evaluating the same model on different subsets of data
* Getting the average prediction and the spread within predictions
```
def train(df_train, y_train, C=1.0):
dicts = df_train[categorical + numerical].to_dict(orient='records')
dv = DictVectorizer(sparse=False)
X_train = dv.fit_transform(dicts)
model = LogisticRegression(C=C, max_iter=1000)
model.fit(X_train, y_train)
return dv, model
dv, model = train(df_train, y_train, C=0.001)
def predict(df, dv, model):
dicts = df[categorical + numerical].to_dict(orient='records')
X = dv.transform(dicts)
y_pred = model.predict_proba(X)[:, 1]
return y_pred
y_pred = predict(df_val, dv, model)
from sklearn.model_selection import KFold
!pip install tqdm
from tqdm.auto import tqdm
n_splits = 5
# C = regularization parameter for the model
# tqdm() is a function that prints progress bars
for C in tqdm([0.001, 0.01, 0.1, 0.5, 1, 5, 10]):
kfold = KFold(n_splits=n_splits, shuffle=True, random_state=1)
scores = []
for train_idx, val_idx in kfold.split(df_full_train):
df_train = df_full_train.iloc[train_idx]
df_val = df_full_train.iloc[val_idx]
y_train = df_train.churn.values
y_val = df_val.churn.values
dv, model = train(df_train, y_train, C=C)
y_pred = predict(df_val, dv, model)
auc = roc_auc_score(y_val, y_pred)
scores.append(auc)
print('C=%s %.3f +- %.3f' % (C, np.mean(scores), np.std(scores)))
scores
dv, model = train(df_full_train, df_full_train.churn.values, C=1.0)
y_pred = predict(df_test, dv, model)
auc = roc_auc_score(y_test, y_pred)
auc
```
## 4.8 Summary
* Metric - a single number that describes the performance of a model
* Accuracy - fraction of correct answers; sometimes misleading
* Precision and recall are less misleading when we have class imbalance
* ROC Curve - a way to evaluate the performance at all thresholds; okay to use with imbalance
* K-Fold CV - more reliable estimate for performance (mean + std)
## 4.9 Explore more
* Check the precision and recall of the dummy classifier that always predict "FALSE"
* F1 score = 2 * P * R / (P + R)
* Evaluate precision and recall at different thresholds, plot P vs R - this way you'll get the precision/recall curve (similar to the ROC curve); a minimal sketch for this and the F1 score appears after this list
* Area under the PR curve is also a useful metric
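Here is a minimal sketch for the F1-score and precision/recall-curve exercises above (one possible solution, not the only one); it assumes `y_val`, `y_pred` and `plt` from the earlier cells of this notebook are still in scope:
```
# A minimal sketch for the two exercises above; it reuses y_val, y_pred and plt
# from the earlier cells of this notebook.
import numpy as np
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(y_val, y_pred)

# F1 = 2 * P * R / (P + R); guard against 0/0 at extreme thresholds
f1 = np.divide(2 * precision * recall, precision + recall,
               out=np.zeros_like(precision), where=(precision + recall) > 0)
best = f1[:-1].argmax()   # the last precision/recall pair has no matching threshold
print('best threshold: %.2f, F1: %.3f' % (thresholds[best], f1[best]))

plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
```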
Other projects:
* Calculate the metrics for datasets from the previous week
| github_jupyter |
# DSCI 525 - Web and Cloud Computing
***Milestone 3:*** This milestone aims to set up your Spark cluster and develop your machine learning model, which you will deploy in the cloud in the next milestone.
## Milestone 3 checklist :
- [ ] Setup your EMR cluster with Spark, Hadoop, JupyterEnterpriseGateway, JupyterHub 1.1.0, and Livy.
- [ ] Make sure you set up FoxyProxy for your web browser (Firefox). You probably already set this up in the previous milestone.
- [ ] Develop a ML model using scikit-learn. (We will be using this model to deploy for our next milestone.)
- [ ] Obtain the best hyperparameter settings using spark's MLlib.
**Keep in mind:**
- _Please use the Firefox browser for this milestone. Make sure you got foxy proxy setup._
- _All services you use are in the us-west-2 region._
- _Use only default VPC and subnet, if not specified explicitly in instruction, leave all other options default when setting up your cluster._
- _No IP addresses are visible when you provide the screenshot (***Please mask it before uploading***)._
- _1 node cluster with a single master node (zero slave nodes) of size ```m5.xlarge``` is good enough for your spark MLlib process. These configurations might take 15 - 20 minutes to get optimal tuning parameters for the entire dataset._
- _If something goes wrong and you want to spin up another EMR cluster, make sure you terminate the previous one first._
- _Upon termination, stored data in your cluster will be lost. Make sure you save any data to S3 and download the notebooks to your laptop so that next time you have your jupyterHub in a different cluster, you can upload your notebook there._
_***Outside of Milestone [OPTIONAL]:*** You are encouraged to practice it yourself by spinning up EMR clusters._
***VERY IMPORTANT:*** With task 4, make sure you occasionally download the notebook to your local computer. Once the lab is stopped after 3 hours, your EMR cluster will be terminated, and everything will vanish.
### 1. Setup your EMR cluster
rubric={correctness:25}
Follow the instructions shown during the lecture to set up your EMR cluster. I am adding instructions here again for guidance.
1.1) Go to advanced options.
1.2) Choose Release 6.5.0.
1.3) Check Spark, Hadoop, JupyterEnterpriseGateway, JupyterHub 1.1.0, and Livy.
1.4) Core instances to be 0, master 1.
1.5) By default, the instance will be selected as m5.xlarge. However, you can also choose a bigger instance (e.g., m4.4xlarge, but make sure you budget for it).
1.6) Cluster name : Your-group-number.
1.7) Uncheck Enable auto-termination.
1.8) Select the key pair you have access to (from your milestone 2).
1.9) For the EC2 security group, please go with the default. Remember, this is a managed service; as we learned from the shared responsibility model, AWS will take care of many things. EMR is on the list of container services. Check [this]( https://aws.amazon.com/blogs/industries/applying-the-aws-shared-responsibility-model-to-your-gxp-solution/).
1.10) Wait for the cluster to start. This takes around ~15 min. Once it is ready, you will see a solid green dot.
#### Please attach this screen shots from your group for grading

https://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_1.png
### 2. Setup your browser , jupyter environment & connect to the master node.
rubric={correctness:25}
2.1) Under cluster ```summary > Application user interfaces > On-cluster user interfaces```: Click on _***Enable an SSH Connection***_.
2.2) From instructions in the popup from Step 2.1, use: **Step 1: Open an SSH Tunnel to the Amazon EMR Master Node.** Remember you are running this from your laptop terminal, and after running, it will look like [this](https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/images/eg.png). For the private key make sure you point to the correct location in your computer.
2.3) (If you haven't done so from milestone 2) From instructions in the popup from Step 2.1, please ignore **Step 2: Configure a proxy management tool**. Instead follow instructions given [here](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-proxy.html), under section **Example: Configure FoxyProxy for Firefox:**. Get foxy proxy standard [here](https://addons.mozilla.org/en-CA/firefox/addon/foxyproxy-standard/)
2.4) Move to the **application user interfaces** tab and use the JupyterHub URL to access it.
2.4.1) Username: ```jovyan```, Password: ```jupyter```. These are default more details [here](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-user-access.html)
2.5) Login into the master node from your laptop terminal (```cluster summary > Connect to the Master Node Using SSH```), and install the necessary packages. Here are the needed packages based on my solution; you might have to install other packages depending on your approach.
sudo yum install python3-devel
sudo pip3 install pandas
sudo pip3 install s3fs
**IMPORTANT:**
- Make sure ssh -i ~/ggeorgeAD.pem -ND 8157 [email protected] (Step 2.2) is running in your terminal window before trying to access your jupyter URL. Sometimes the connection might drop; in that case, run that step again to access your JupyterHub.
- Don't confuse Step 2.2 and Step 2.5. In 2.2, you open an ssh tunnel to access the jupyterHub URL. With Step 2.6, you log into the master node to install the necessary packages.
#### Please attach this screen shots from your group for grading

https://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_2.png
### 3. Develop a ML model using scikit-learn.
rubric={correctness:25}
You could use the setup that we have from our last milestone, but it might have been shut down by AWS due to the time limit; also, we don't have permission from AWS to spin up instances larger than t2.large. Considering the situation, I recommend doing this on your local computer: upload this notebook to your local Jupyter environment and follow the instructions.
https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/Milestone3-Task3.ipynb
There are 2 parts to this notebook; For doing part 2, you want information from Task 4.
#### Please attach this screen shots from your group for grading

https://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_3.png
### 4. Obtain best hyperparameter settings using spark's MLlib.
rubric={correctness:20}
Upload this notebook to your jupyterHub (AWS managed jupyterHub in the cluster) you set up in Task 2 and follow the instructions given in the notebook.
https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/Milestone3-Task4.ipynb
### 5. Submission instructions
rubric={mechanics:5}
***SUBMISSION:*** Please put a link to your GitHub folder in the canvas where TAs can find the following-
- [ ] Python 3 notebook, with the code for ML model in scikit-learn. (You can develop this on your local computer)
- [ ] PySpark notebook, with the code for obtaining the best hyperparameter settings. ( For this, you have to use PySpark notebook(kernal) in your EMR cluster )
- [ ] Screenshot from
- [ ] Setup your EMR cluster (Task 1).
- [ ] Setup your browser, jupyter environment & connect to the master node (Task 2).
- [ ] Your S3 bucket showing ```model.joblib``` file. (From Task 3 Develop a ML model using scikit-learn)
| github_jupyter |
# Prospect Theory and Cumulative Prospect Theory Agent Demo
The PTAgent and CPTAgent classes reproduce patterns of choice behavior described by Kahneman & Tverski's survey data in their seminal papers on Prospect Theory and Cumulative Prospect Theory. These classes express valuations of single lottery inputs, or preferences between two lottery inputs. To more explicitly describe these agent classes, we define the following:
1. $(x_1, p_1; \cdots; x_n, p_n)$: a lottery offering outcome $x_1$ with probability $p_1$, ..., outcome $x_n$ with probability $p_n$.
2. $v(x)$: the internal representation of the value of an outcome $x$ to an instance of a PTAgent.
3. $\pi(p)$: the internal representation of a probability $p$ to an instance of a PTAgent.
4. $V(x_1, p_1; \cdots; x_n, p_n)$: a lottery valuation function.
#### **Prospect Theory Agent**
The PTAgent class reflects the lottery valuation function of Prospect Theory described in Kahneman & Tverski (1979). Generally, the lottery valuation function operates as follows:
$$V(x_1, p_1; \dots; x_n, p_n) = v(x_1) \times \pi(p_1) + \cdots + v(x_n) \times \pi(p_n) \tag{1a}$$
However, under certain conditions the lottery valuation function operates under a different formulation. These conditions are:
1. When the lottery contains exactly two non-zero outcomes and one zero outcome relative to a reference point, with each of these outcomes occurring with non-zero probability; i.e., $p_1 + p_2 + p_3 = 1$ for $x_1, x_2 \in \lbrace x | x \ne 0 \rbrace$ and $x_3=0$.
2. When the outcomes are both positive relative to a reference point or both negative relative to a reference point. Explicitly, $x_2 < x_1 < 0$ or $x_2 > x_1 > 0$.
When a lottery satisfies the conditions above, the lottery valuation function becomes:
$$V(x_1, p_1; x_2, p_2) = x_1 + p_2(x_2 - x_1) \tag{1b}$$
Since the original account of prospect theory does not explicitly describe the value function or weighting function, the value function uses the same function proposed in Tverski & Kahneman (1992):
$$v(x) = \begin{equation}
\left\{
\begin{aligned}
x^\alpha& \;\; \text{if} \, x \ge 0\\
-\lambda (-x)^\beta& \;\; \text{if} \, x \lt 0\\
\end{aligned}
\right.
\end{equation} \tag{2}$$
While the weighting function uses a form described here: https://sites.duke.edu/econ206_01_s2011/files/2011/04/39b-Prospect-Theory-Kahnemann-Tversky_final2-1.pdf.
$$\pi(p) = \exp(-(-\ln(p))^\gamma) \tag{3}$$
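The internals of `PTAgent` are not shown here, but a minimal stand-alone sketch of equations (2), (3) and (1a) could look like the following (the helper names `v`, `pi` and `V` are illustrative, not the class's actual methods):
```
# A stand-alone sketch of equations (2), (3) and (1a); the names v, pi and V are
# illustrative and are NOT the PTAgent's actual methods.
import numpy as np

alpha, beta, gamma, lambda_ = 0.88, 0.88, 0.61, 2.25   # estimates from Tverski & Kahneman (1992)

def v(x):
    # equation (2): concave for gains, convex and steeper (loss aversion) for losses
    return x**alpha if x >= 0 else -lambda_ * (-x)**beta

def pi(p):
    # equation (3): overweights small probabilities, underweights large ones
    return np.exp(-(-np.log(p))**gamma)

def V(lottery):
    # equation (1a); the special two-outcome case (1b) is ignored in this sketch
    return sum(v(x) * pi(p) for x, p in zip(lottery['outcome'], lottery['probability']))

print(V({'outcome': [4000, 0], 'probability': [0.8, 0.2]}))   # risky prospect from Problem 3
print(V({'outcome': [3000], 'probability': [1]}))             # certain prospect (pi(1) = 1)
```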
#### **Cumulative Prospect Theory Agent**
The CPTAgent class reflects the lottery valuation function, value function, and weighting function described in Tverski & Kahneman (1992). The CPTAgent class also incorporates capacities as described in this same paper. For Cumulative Prospect Theory, outcomes and associated probabilities include the attribute of *valence*, which reflects whether the realization of an outcome would increase or decrease value relative to the agent's reference point.
The value function for positive and negative outcomes is shown in equation 2 above.
For probabilities $p$ associated with positive valence outcomes, the *capacity* function is expressed as:
$$w^{+}(p) = \frac{p^\gamma}{\left(p^\gamma+(1-p)^\gamma) \right)^{1/ \gamma}} \tag{4a}$$
For probabilities $p$ associated with negative valence outcomes, the capacity function is expressed similarly as:
$$w^{-}(p) = \frac{p^\delta}{\left(p^\delta+(1-p)^\delta) \right)^{1/ \delta}} \tag{4b}$$
In order to compute a weight for the $i^{th}$ outcome with positive valence, a difference of cumulative sums is computed as follows:
$$\pi^{+}(p_i) = w^{+}(p_i + \cdots + p_n) - w^{+}(p_{i+1} + \cdots + p_n), \; 0 \le x_i < \cdots < x_n \tag{5a}$$
Similarly, computing a weight for the $j^{th}$ outcome with negative valence:
$$\pi^{-}(p_j) = w^{-}(p_j + \cdots + p_m) - w^{-}(p_{j+1} + \cdots + p_m), \; 0 \gt x_j > \cdots > x_m \tag{5b}$$
Lottery valuations for Cumulative Prospect Theory are then computed in a similar manner as Prospect Theory (equation 1a).
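As an illustration of equations (4a) and (5a), the following sketch computes decision weights for a hypothetical, strictly positive lottery (again, this is a sketch, not the `CPTAgent` implementation itself):
```
# A stand-alone sketch of equations (4a) and (5a) for a hypothetical, strictly
# positive lottery; this is NOT the CPTAgent implementation itself.
import numpy as np

gamma = 0.61

def w_plus(p):
    # equation (4a): capacity for probabilities attached to gains
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

# hypothetical gains sorted in increasing order, with probabilities summing to 1
outcomes = [1000, 2000, 4000]
probs = np.array([0.5, 0.3, 0.2])

# tail sums p_i + ... + p_n (the probability of doing at least this well)
tail = np.cumsum(probs[::-1])[::-1]
# equation (5a): pi+(p_i) = w+(tail_i) - w+(tail_{i+1}), with an empty tail mapping to 0
weights = w_plus(tail) - w_plus(np.append(tail[1:], 0.0))

for x, p, wt in zip(outcomes, probs, weights):
    print(f"x={x}  p={p:.2f}  decision weight={wt:.3f}")
print("decision weights sum to", round(float(weights.sum()), 3))
```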
---
## Choice Behavior for Lotteries
#### **Normative Choice Behavior**
Specification of the following parameters leads to an agent that chooses lotteries according to Expected Utility Theory:
- $\alpha = \beta = 1$
- $\gamma = \delta = 1$
- $\lambda = 1$
#### **Descriptive Choice Behavior**
When $\alpha, \beta, \gamma, \delta$ take values on the interval $(0, 1)$, and when $\lambda > 1$, lottery valuation functions with constituent value and weighting functions show patterns of choice that better approximate empirical choice behavior than those predicted by normative choice behavior.
#### **Notation**
To illustrate functionality of the PTAgent and CPTAgent classes, we denote an outcome and its associated probability as a tuple $(G_1, p_1)$ and $(L_1, p_1)$, where $G_1$ is used to denote gains and $L_1$ denotes losses. A lottery is a set of gains and/or losses with associated probabilities: $[(L_1, p_1), \cdots, (G_n, p_n)]$, where $\sum p_i = 1$. A preference between two prospects, for example "A is preferred to B", is denoted as $A > B$.
The following instance of PTAgent uses function parameters estimated in Tverski & Kahneman (1992). These parameters are sufficient to replicate observed modal choices between prospects in (Kahneman & Tverski, 1979) and (Tverski & Kahneman, 1992).
---
### Decision Anomalies
The demonstrations below show instances of the PTAgent class exhibiting the same choice anomalies discussed in Kahneman & Tverski's seminal paper on Prospect Theory (1979).
```
from cpt_agent import PTAgent
pt = PTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)
pt
```
### The certainty effect
The certainty effect demonstrates that reducing the probability of outcomes from certainty has larger effects on preferences than equivalent reductions from risky (i.e., non-certain) outcomes. Problems 1 and 2 illustrate this effect for absolute reductions in probabilities and problems 3 and 4 show this effect for relative reductions in probabilities.
- Problem 1: $[(G_1, p_1), (G_2, p_2), (0, p_3)] < [(G_2, 1)]$
- Problem 2: $[(G_1, p_1), (G_2, 0), (0, p_3)] > [(G_2, 1-p_2)]$
Subtracting probability $p_2$ of outcome $G_2$ from both options in problem 1 leads to a preference reversal in problem 2.
```
# Problem 1
lottery_1A = {'outcome':[2500, 2400, 0], 'probability':[0.33, 0.66, 0.01]}
lottery_1B = {'outcome':[2400], 'probability':[1]}
pt.choose(lottery_1A, lottery_1B)
# Problem 2
lottery_2C = {'outcome':[2500, 0], 'probability':[0.33, 0.67]}
lottery_2D = {'outcome':[2400, 0], 'probability':[0.34, 0.66]}
pt.choose(lottery_2C, lottery_2D)
```
Scaling probabilities of risky outcome $G_1$ and certain outcome $G_2$ by $p'$ in problem 3 leads to a preference reversal in problem 4. This preference reversal violates the substitution axiom of Expected Utility Theory.
- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$
- Problem 4: $\left[\left(G_1, p_1\cdot p'\right), \left(0, 1-p_1\cdot p'\right)\right] > [(G_2, p'), (0, 1-p')]$
```
# Problem 3
lottery_3A = {'outcome':[4000, 0], 'probability':[0.8, 0.2]}
lottery_3B = {'outcome':[3000], 'probability':[1]}
pt.choose(lottery_3A, lottery_3B)
# Problem 4
lottery_4C = {'outcome':[4000, 0], 'probability':[0.2, 0.8]}
lottery_4D = {'outcome':[3000, 0], 'probability':[0.25, 0.75]}
pt.choose(lottery_4C, lottery_4D)
```
### The reflection effect
The reflection effect demonstrates that altering outcomes by recasting prospects from the domain of gains to losses will correspondingly alter decision behavior from risk-aversion to risk-seeking. Since the reflection effect highlights preferences characterized as risk-seeking in the loss domain, the effect disqualifies risk-aversion as a general principle for explaining the certainty effect above.
- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$
- Problem 3': $[(-G_1, p_1), (0, 1-p_1)] > [(-G_2, 1)]$
```
# Problem 3'
lottery_3A_, lottery_3B_ = lottery_3A.copy(), lottery_3B.copy()
lottery_3A_.update({'outcome':[-g for g in lottery_3A_['outcome']]})
lottery_3B_.update({'outcome':[-g for g in lottery_3B_['outcome']]})
pt.choose(lottery_3A_, lottery_3B_)
```
- Problem 4: $\left[\left(G_1, p_1\cdot p^{*}\right), \left(0, 1-p_1\cdot p^{*}\right)\right] > [(G_2, p^{*}), (0, 1-p^{*})]$
- Problem 4': $\left[\left(-G_1, p_1\cdot p^{*}\right), \left(0, 1-p_1\cdot p^{*}\right)\right] < [(-G_2, p^{*}), (0, 1-p^{*})]$
```
# Problem 4'
lottery_4C_, lottery_4D_ = lottery_4C.copy(), lottery_4D.copy()
lottery_4C_.update({'outcome':[-g for g in lottery_4C_['outcome']]})
lottery_4D_.update({'outcome':[-g for g in lottery_4D_['outcome']]})
pt.choose(lottery_4C_, lottery_4D_)
```
### Risk Seeking in Gains, Risk Aversion in Losses
In addition to violations of the substitution axiom, scaling lottery probabilities so that outcomes become highly improbable can induce risk seeking in gains and risk aversion in losses. While these characteristics of choice behavior are not violations of normative theories of choice, they contrast with the more typical observations of risk aversion in gains and risk seeking in losses for outcomes that occur with higher likelihood. In the domain of gains, risk seeking for low-probability events seems to correspond to the popularity of state lotteries.
- Problem 7: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, p_2), (0, 1-p_2)]$
- Problem 8: $\left[\left(G_1, p_1\cdot p'\right), \left(0, 1-p_1\cdot p'\right)\right] > \left[\left(G_2, p_2\cdot p'\right), \left(0, 1-p_2\cdot p'\right)\right]$
```
# Problem 7
lottery_7A = {'outcome':[6000, 0], 'probability':[0.45, 0.55]}
lottery_7B = {'outcome':[3000, 0], 'probability':[0.9, 0.1]}
pt.choose(lottery_7A, lottery_7B)
# Problem 8
lottery_8C = {'outcome':[6000, 0], 'probability':[0.001, 0.999]}
lottery_8D = {'outcome':[3000, 0], 'probability':[0.002, 0.998]}
pt.choose(lottery_8C, lottery_8D)
```
Just as Prospect Theory accounts for risk seeking in gains for low probability events, the theory also accounts for risk aversion in the domain of losses when outcomes occur very infrequently. Risk aversion in the domain of losses seems to match well with consumer purchase of insurance products.
```
# Problem 7'
lottery_7A_, lottery_7B_ = lottery_7A.copy(), lottery_7B.copy()
lottery_7A_.update({'outcome':[-g for g in lottery_7A_['outcome']]})
lottery_7B_.update({'outcome':[-g for g in lottery_7B_['outcome']]})
pt.choose(lottery_7A_, lottery_7B_)
# Problem 8'
lottery_8C_, lottery_8D_ = lottery_8C.copy(), lottery_8D.copy()
lottery_8C_.update({'outcome':[-g for g in lottery_8C_['outcome']]})
lottery_8D_.update({'outcome':[-g for g in lottery_8D_['outcome']]})
pt.choose(lottery_8C_, lottery_8D_)
```
### Probabilistic Insurance
Kahneman & Tversky discuss another frequent choice anomaly called *probabilistic insurance*. To demonstrate choice behavior matching this anomaly, we first need to find a point of indifference reflecting the following relationship between current wealth $w$ and the cost of an insurance premium $y$ against a potential loss $x$ that occurs with probability $p$:
$$pu(w-x) + (1-p)u(w) = u(w-y) \tag{6}$$
That is, we are finding the premium $y$ at which a respondent is indifferent between purchasing insurance against the loss $x$ and simply incurring the loss $x$ with probability $p$. Kahneman & Tversky introduce an insurance product called probabilistic insurance, whereby the consumer pays only a portion $r$ of the premium $y$. If the event leading to the loss actually occurs, the purchaser pays the remainder of the premium with probability $r$, or is refunded the premium and suffers the loss entirely with probability $1-r$.
$$(1-r) p u(w-x) + rpu(w-y) + (1-p)u(w-ry) \tag{7}$$
Kahneman & Tversky show that according to Expected Utility Theory, probabilistic insurance is generally preferred to either a fully insured position $u(w-y)$ or an uninsured loss $x$ with probability $p$ (under the indifference assumption described above). In surveys, however, respondents generally show a strong preference against probabilistic insurance.
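As a side note, the indifference point in equation (6) can also be searched for numerically. The following is a minimal sketch, assuming `pt.evaluate` returns a scalar subjective value for a lottery (it is used that way in the next cell); it simply scans candidate loss probabilities for the smallest gap between the insured and uninsured positions, using the same premium, asset and loss amounts as below.
```
# Hypothetical sketch: scan for the loss probability at which the agent is
# (approximately) indifferent between paying the premium and staying uninsured.
# Assumes pt.evaluate(lottery) returns a scalar value, as used in the next cell.
import numpy as np
premium, asset_am, loss = 1000, 6000, 5000
insured = {'outcome': [asset_am - premium], 'probability': [1]}
candidates = np.linspace(0.01, 0.2, 1000)
gaps = []
for p in candidates:
    uninsured = {'outcome': [asset_am - loss, asset_am], 'probability': [p, 1 - p]}
    gaps.append(abs(pt.evaluate(insured) - pt.evaluate(uninsured)))
print("approximate indifference probability:", candidates[int(np.argmin(gaps))])
```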
```
# Problem 9
premium = 1000
asset_am = 6000
loss = 5000
prob_loss = 0.06925
lottery_9A = {'outcome':[asset_am - premium], 'probability':[1]}
lottery_9B = {'outcome':[asset_am - loss, asset_am], 'probability':[prob_loss, 1-prob_loss]}
pt.evaluate(lottery_9A)
pt.evaluate(lottery_9B)
# Problem 10
r = 0.94
lottery_10A = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium],
'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}
pt.choose(lottery_9B, lottery_10A)
```
### Cumulative Prospect Theory
Kahneman & Tversky later revised their original account of Prospect Theory as Cumulative Prospect Theory (Tversky & Kahneman, 1992). The CPTAgent exhibits the same choice behavior as the PTAgent for each of the problems considered above. Additionally, the cumulative form of the weighting function better captures respondents' choice patterns for probabilistic insurance: the preference against probabilistic insurance holds under a broader range of probabilities $r$.
```
from cpt_agent import CPTAgent
cpt = CPTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)
cpt
# Problem 11
r = 0.73
lottery_10B = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium],
'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}
cpt.choose(lottery_9A, lottery_10B)
```
| github_jupyter |
# Training Neural Networks
Author: 杨岱川 (Yang Daichuan)
Date: December 2019
GitHub: https://github.com/DrDavidS/basic_Machine_Learning
License: [MIT](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/LICENSE)
References:
- "Deep Learning from Scratch" (《深度学习入门》), by 斋藤康毅 (Koki Saito);
- "Deep Learning", by Ian Goodfellow, Yoshua Bengio and Aaron Courville;
- [Keras overview](https://tensorflow.google.cn/guide/keras/overview)
## Goal of this section
In [3.01 Neural Networks and Forward Propagation](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/03深度学习基础/3.01%20神经网络与前向传播.ipynb) we studied forward propagation in a neural network built from multi-layer perceptrons, and implemented a very simple network by hand.
So far, however, the weight matrix $W$ of the network we built was randomly initialized. All we can say is that we "fed" the input $X$ into the network and it "ran through"; the output had no practical meaning because the network was never trained.
The topic of tutorial 3.02 is **how a neural network learns**, i.e. how it automatically acquires optimal weight parameters from training data. The main idea is essentially the same as the notion of training described earlier for traditional machine learning.
To let the network learn, we introduce the **loss function** as our metric - a concept you should already be familiar with.
The goal of learning is to find the weight parameters that make the loss function as small as possible, and to search for such values we will use the **gradient method**.
> Do these terms sound familiar?
>
> The "gradient method" appeared in the form of **gradient boosting** in [2.11 XGBoost principles and applications](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/02机器学习基础/2.11%20XGBoost原理与应用.ipynb), and the "loss function" runs through the whole of traditional machine learning.
## Learning from data
As with other machine learning algorithms, a defining characteristic of neural networks is that they can learn from data, meaning that the weight parameters are determined automatically from the data.
Since this is machine learning, we certainly cannot set the parameters by hand - how would we ever keep up?
> Parameter counts of some large neural networks (more parameters does not necessarily mean better results):
>
>- ALBERT: 12 million, by Google;
>- BERT-large: 334 million, by Google;
>- BERT-xlarge: 1.27 billion, by Google;
>- Megatron: 8 billion, by Nvidia;
>- T5: 11 billion, by Google.
Next we will look at how a neural network learns, i.e. how it uses data to determine the parameter values.
## Loss functions
You already know the concept of a loss function; we have met many of them, such as the 0-1 loss and the mean squared error. Here we introduce one more.
### Cross-entropy error
**Cross-entropy error** is a very commonly used loss function, defined as:
$$\large E=-\sum_k t_k\log y_k$$
Here $\log$ is the natural logarithm $\log_e$, $k$ indexes the classes, $y_k$ is the network's output, and $t_k$ is the true label. In $t_k$ only the entry for the correct class is 1 and all others are 0 (one-hot encoding), so this form handles multi-class problems.
In effect, the formula only evaluates the natural logarithm of the output for the correct class.
For example, in a three-class problem with classes A, B and C where the true class is C, i.e. $t_k=[0,\quad0,\quad1]$,
and the network output after softmax is $y_k=[0.1,\quad0.3,\quad0.6]$, the cross-entropy error is $-\log0.6\approx0.51$.
Let's implement cross-entropy in code:
```
import numpy as np
def cross_entropy_error(y, t):
"""Cross-entropy loss function"""
delta = 1e-7
return -np.sum(t * np.log(y + delta))
```
Here $y$ and $t$ are NumPy arrays. We add a tiny value `delta` inside `np.log` to avoid `np.log(0)`, which would return negative infinity and break the subsequent computation.
Next, let's try some simple calculations:
```
# Set the third class as the correct answer
t = np.array([0, 0, 1])
t
# Predicted class probabilities, case y1
y1 = np.array([0.1, 0.3, 0.6])
y1
# Predicted class probabilities, case y2
y2 = np.array([0.3, 0.4, 0.3])
y2
# Cross-entropy of y1
cross_entropy_error(y1, t)
# Cross-entropy of y2
cross_entropy_error(y2, t)
```
As we can see, the first output y1 matches the supervised (training) data better, so its cross-entropy error is smaller.
### mini-batch learning
Machine learning uses training data for learning: we compute the value of the loss function on the training data and look for the parameters that make it as small as possible. In other words, the loss should be computed over all training data - if there are 100 samples, the sum of those 100 per-sample losses is the objective of learning.
To compute the total loss over all training data, taking the cross-entropy error as an example:
$$\large E=-\frac{1}{N}\sum_n \sum_k t_{nk}\log y_{nk}$$
Although it looks more complicated, this simply extends the single-sample loss to $N$ samples and divides by $N$ at the end, giving the average loss per sample. Averaging in this way yields a unified metric that does not depend on the number of training samples.
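As an illustration of this formula (a sketch only, not part of the original tutorial), the `cross_entropy_error` defined earlier can be extended to average over a mini-batch of one-hot labels:
```
def cross_entropy_error_batch(y, t):
    """Mini-batch cross-entropy for one-hot labels, averaged over the batch."""
    if y.ndim == 1:               # promote a single sample to a batch of size 1
        y = y.reshape(1, -1)
        t = t.reshape(1, -1)
    delta = 1e-7                  # avoid log(0)
    return -np.sum(t * np.log(y + delta)) / y.shape[0]
```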
The problem is that datasets can be large. MNIST, for example, has 60,000 training samples, so summing the loss over all of them takes a long time. For even larger datasets such as [ImageNet](http://www.image-net.org/about-stats), with over 14.19 million images (as of December 2019), computing the loss over all of the data is simply unrealistic.
Therefore we select a portion of the data as an "approximation" of the whole. Neural network training likewise selects a batch of samples (a mini-batch) from the training data and learns on each mini-batch.
For instance, on the MNIST dataset we might learn from 100 images at a time. This style of learning is called **mini-batch learning**; put differently, the batch size of the whole training process is 100.
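As a quick illustration (a sketch only: `x_train` and `t_train` are placeholder names for an already-loaded training set, such as MNIST images and one-hot labels), drawing a mini-batch can be done as follows:
```
# Minimal mini-batch sampling sketch; x_train / t_train are assumed to be
# NumPy arrays holding the full training set (e.g. 60000 MNIST samples).
train_size = x_train.shape[0]
batch_size = 100
batch_mask = np.random.choice(train_size, batch_size)  # random indices
x_batch = x_train[batch_mask]
t_batch = t_train[batch_mask]
```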
### Why use a loss function at all?
Why do we train by minimizing a loss function? Our ultimate goal is to improve the network's recognition accuracy, so why not use accuracy itself as the metric?
The reason involves the role of derivatives in neural network learning, which we will detail later. When searching for the optimal parameters (weights and biases), we look for the parameters that make the loss as small as possible; to find them, we compute the derivative of the loss with respect to the parameters (more precisely, the **gradient**) and use it as a guide for gradually updating the parameter values.
Suppose we focus on one weight in the network. The derivative of the loss with respect to that weight tells us "how the loss would change if this weight were changed slightly". If the derivative is negative, moving the weight in the positive direction reduces the loss; conversely, if it is positive, moving the weight in the negative direction reduces the loss.
> When the derivative is 0, the loss does not change no matter which direction the weight moves in.
If we used recognition accuracy as the metric, the derivative would be 0 almost everywhere, and the parameters could not be updated.
> Suppose a network classifies 32 of 100 training samples correctly, i.e. its accuracy is 32%. With accuracy as the metric, slightly changing the weights usually leaves the accuracy at exactly 32%; tiny adjustments cannot improve it, and when accuracy does change it jumps discontinuously to 33% or 34% rather than varying smoothly to something like 32.011%.
>
> With the **loss function** as the metric, the current loss might be 0.92543..., and a slight change of the parameters moves it continuously to, say, 0.93431...
So recognition accuracy barely reacts to small parameter changes, and when it does react, it changes discontinuously, in jumps.
Recall the **step function** and **sigmoid function** we studied earlier:
```
import matplotlib
print(matplotlib.__version__)
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # render figures as vector graphics
def sigmoid(x):
"""Sigmoid function"""
return 1.0/(1.0 + np.exp(-x))
def step_function(x):
"""Step function"""
return np.array(x > 0, dtype=int)
# Step function
plt.figure(figsize=(8,4))
plt.subplot(1, 2, 1)
x = np.arange(-6.0, 6.0, 0.1)
plt.plot(x, step_function(x))
plt.axhline(y=0.0,ls='dotted',color='k')
plt.axhline(y=1.0,ls='dotted',color='k')
plt.axhline(y=0.5,ls='dotted',color='k')
plt.yticks([0.0,0.5,1.0])
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('$step(x)$')
plt.title('Step Function')
# plt.savefig("pic001.png", dpi=600) # save the figure
# Sigmoid function
plt.subplot(1, 2, 2)
plt.plot(x, sigmoid(x))
plt.axhline(y=0.0,ls='dotted',color='k')
plt.axhline(y=1.0,ls='dotted',color='k')
plt.axhline(y=0.5,ls='dotted',color='k')
plt.yticks([0.0,0.5,1.0])
plt.ylim(-0.1,1.1)
plt.xlabel('x')
plt.ylabel('$sigmoid(x)$')
plt.title('Sigmoid Function')
# plt.savefig("pic001.png", dpi=600) # save the figure
plt.tight_layout(pad=3) # spacing between subplots
plt.show()
```
If we used the **step function** as the activation function, neural network learning could not proceed. As the figure shows, the derivative of the step function is 0 almost everywhere, so even with the loss function as the metric, a small change of the parameters would be wiped out by the step function and the loss value would not change at all.
The **sigmoid function**, as shown in the figure, not only has a continuously varying output but also a continuously varying slope; in other words, its derivative is nowhere 0. Thanks to this property, neural network learning can proceed correctly.
## Numerical differentiation
We use gradient information to decide which direction to move in. Let's now look at what a gradient is and what properties it has.
### Derivatives
You are surely familiar with derivatives. A derivative expresses the amount of change at a given instant and is defined as:
$$\large \frac{{\rm d}f(x)}{{\rm d}x} = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$
Now let's implement differentiation following this definition:
```
def numerical_diff(f, x):
"""A not-so-good derivative implementation"""
h = 1e-50
return (f(x + h) - f(x)) / h
```
The name `numerical_diff` comes from **numerical differentiation**.
In practice, giving $h$ such a tiny value actually causes a **rounding error**:
```
np.float32(1e-50)
```
If $10^{-50}$ is represented as a `float32`, it becomes $0.0$ and cannot be represented correctly. This is the first problem: we should change the small value $h$ to $10^{-4}$, which gives good results.
The second problem concerns the difference quotient of $f$. We computed the difference of $f$ between $x+h$ and $x$, but this carries an error: what we actually computed is the slope of the line joining the points $x+h$ and $x$, whereas the true derivative is the slope of the tangent of the function at $x$. The discrepancy arises because $h$ cannot truly be taken infinitely close to 0.
To reduce the error, we compute the difference of $f$ between $(x+h)$ and $(x-h)$. Because this is centred on $x$ and uses the differences on both sides, it is called the **central difference**, while the difference between $(x+h)$ and $x$ is called the **forward difference**.
The improved version:
```
def numerical_diff(f, x):
"""Improved derivative implementation (central difference)"""
h = 1e-4
return (f(x + h) - f(x - h)) / (2 * h)
```
### An example of numerical differentiation
Let's use the numerical differentiation function above to differentiate a simple function:
$$\large y=0.01x^2+0.1x$$
First we plot the function.
```
def function_1(x):
"""The example function"""
return 0.01 * x**2 + 0.1*x
x = np.arange(0.0, 20.0, 0.1)
y = function_1(x)
plt.xlabel('x')
plt.ylabel('$f(x)$')
plt.plot(x, y)
plt.show()
```
Compute the derivative of the function at $x=5$ and draw the tangent line:
```
def tangent_line(f, x):
"""Tangent line of f at x"""
d = numerical_diff(f, x)
print(d)
y = f(x) - d*x
return lambda t: d*t + y
x = np.arange(0.0, 20.0, 0.1)
y = function_1(x)
plt.xlabel("x")
plt.ylabel("f(x)")
tf = tangent_line(function_1, 5)
y2 = tf(x)
plt.plot(x, y)
plt.plot(x, y2)
plt.axvline(x=5,ls='dotted',color='k')
plt.axhline(y=0.75,ls='dotted',color='k')
plt.yticks([0, 0.75, 1, 2, 3, 4])
plt.show()
```
As we know, the analytic derivative of $f(x)=0.01x^2+0.1x$ is $\cfrac{{\rm d}f(x)}{{\rm d}x}=0.02x+0.1$, so at $x=5$ the "true" derivative is 0.2. Strictly speaking the numerical result above is not identical, but the error is very small.
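As a quick check, using only the functions defined above, we can compare the numerical estimate with the analytic derivative at a couple of points:
```
# Compare the central-difference estimate with the analytic derivative 0.02*x + 0.1
for x0 in (5.0, 10.0):
    print(x0, numerical_diff(function_1, x0), 0.02 * x0 + 0.1)
```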
### Partial derivatives
Next, let's look at a new function, one with two variables:
$$\large f(x_0, x_1)=x_0^2+x_1^2$$
Its surface can be drawn with the following code:
```
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
import numpy as np
def function_2_old(x_0, x_1):
"""Function of two variables"""
return x_0**2 + x_1**2
fig = plt.figure()
ax = Axes3D(fig)
x_0 = np.arange(-2, 2.5, 0.2) # x0
x_1 = np.arange(-2, 2.5, 0.2) # x1
X_0, X_1 = np.meshgrid(x_0, x_1) # generate 2-D coordinate grids
Y = function_2_old(X_0, X_1)
ax.set_xlabel('$x_0$')
ax.set_ylabel('$x_1$')
ax.set_zlabel('$f(x)$')
ax.plot_surface(X_0, X_1, Y, rstride=1, cstride=1, cmap='rainbow')
# ax.view_init(30, 60) # adjust the viewing angle
plt.show()
```
A rather pretty picture.
If we want to differentiate this two-variable function, we have to distinguish whether we differentiate with respect to $x_0$ or with respect to $x_1$.
The derivatives of a function with several variables discussed here are **partial derivatives**, written $\cfrac{\partial f}{\partial x_0}$ and $\cfrac{\partial f}{\partial x_1}$.
When $x_0=3$ and $x_1=4$, find the partial derivative with respect to $x_0$, $\cfrac{\partial f}{\partial x_0}$:
```
def function_tmp1(x0):
return x0 * x0 + 4.0**2.0
numerical_diff(function_tmp1, 3.0)
```
When $x_0=3$ and $x_1=4$, find the partial derivative with respect to $x_1$, $\cfrac{\partial f}{\partial x_1}$:
```
def function_tmp2(x1):
return 3.0**2.0 + x1 * x1
numerical_diff(function_tmp2, 4.0)
```
If you work these out by hand, both numerical values essentially agree with the analytic derivatives.
So a partial derivative, like a single-variable derivative, is the **slope** at a given point; the difference is that we single out one of the variables as the target and fix the remaining variables at specific values.
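The "fix the other variable" trick used above can also be written inline with a lambda, avoiding throwaway helper functions (a small variant of the two cells above):
```
# Partial derivatives at (x0, x1) = (3, 4), fixing the other variable inside a lambda
numerical_diff(lambda x0: x0**2 + 4.0**2, 3.0)  # ∂f/∂x0, analytically 6
numerical_diff(lambda x1: 3.0**2 + x1**2, 4.0)  # ∂f/∂x1, analytically 8
```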
## Gradients
After all this groundwork, we finally reach the key part.
We just computed the partial derivatives with respect to $x_0$ and with respect to $x_1$ separately; now we want to compute them together.
For example, consider the partial derivatives $\left( \cfrac{\partial f}{\partial x_0},\cfrac{\partial f}{\partial x_1} \right)$ of $(x_0,x_1)$ at $x_0=3$, $x_1=4$.
> A vector that collects the partial derivatives with respect to all of the variables, such as $\left( \cfrac{\partial f}{\partial x_0},\cfrac{\partial f}{\partial x_1} \right)$, is called the **gradient**.
We compute it with the following code:
```
def _numerical_gradient_no_batch(f, x):
"""
Compute the gradient of f at x.
Inputs:
f: the function
x: array of variables
"""
h = 1e-4 # 0.0001
grad = np.zeros_like(x) # array of zeros with the same shape as x
for idx in range(x.size):
tmp_val = x[idx]
x[idx] = float(tmp_val) + h
fxh1 = f(x) # f(x+h)
x[idx] = tmp_val - h
fxh2 = f(x) # f(x-h)
grad[idx] = (fxh1 - fxh2) / (2*h)
x[idx] = tmp_val # restore the original value
return grad
def function_2(x):
"""
Function of two variables.
Redefined so that the input is a single np.array.
"""
return x[0]**2 + x[1]**2
```
The code looks a little longer, but it is essentially the same as the single-variable numerical differentiation.
Now let's use this function to actually compute some gradients:
```
_numerical_gradient_no_batch(function_2, np.array([3.0, 4.0]))
_numerical_gradient_no_batch(function_2, np.array([0.0, 2.0]))
_numerical_gradient_no_batch(function_2, np.array([3.0, 0.0]))
```
In this way we can compute the gradient of $(x_0,x_1)$ at any point. Next we draw the gradient of $f(x_0,x_1)=x_0^2+x_1^2$ on a figure - note that what we plot are the **negative gradient** vectors.
Code adapted from [deep-learning-from-scratch](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_2d.py).
```
def numerical_gradient(f, X):
"""Compute gradient vectors (handles batched input)"""
if X.ndim == 1:
return _numerical_gradient_no_batch(f, X)
else:
grad = np.zeros_like(X)
for idx, x in enumerate(X):
grad[idx] = _numerical_gradient_no_batch(f, x)
return grad
x0 = np.arange(-2, 2.5, 0.25)
x1 = np.arange(-2, 2.5, 0.25)
X, Y = np.meshgrid(x0, x1)
X = X.flatten()
Y = Y.flatten()
grad = numerical_gradient(function_2, np.array([X, Y]).T).T
plt.figure()
plt.quiver(X, Y, -grad[0], -grad[1], angles="xy",color="#666666")
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.xlabel('x0')
plt.ylabel('x1')
plt.grid()
plt.draw()
plt.show()
```
As the figure shows, the gradient of $f(x_0,x_1)=x_0^2+x_1^2$ appears as a field of directed arrows, and moreover:
- all arrows point towards the "lowest point" of $f(x_0,x_1)$;
- the further from the lowest point, the larger the arrow.
> In fact, the gradient does not always point towards the lowest point.
>
> More precisely, **at each point the gradient indicates the direction in which the function value decreases the most**.
>
> This also means that some optimization runs may converge only to a local minimum.
### The gradient method
The main task in machine learning is to find the optimal parameters during training (learning). Here "optimal parameters" means the parameters at which the loss function reaches its minimum.
But loss functions are generally complicated (recall the loss derivation for `XGBoost`), the parameter space is huge, and we usually have no idea where the minimum lies. The gradient method uses gradients to search for the minimum (or at least a value as small as possible) of such a function.
> Reminder: the **gradient** gives, at each point, the direction in which the function value decreases the most; there is no guarantee that it points towards the function's minimum, or towards the direction we ultimately ought to move in. In fact, for complex functions, the direction indicated by the gradient is usually **not** the location of the minimum.
Still, moving along the gradient direction reduces the function value (for example, the loss) as much as possible locally, so when searching for the minimum we use the gradient as the clue that decides the direction to move in.
This is where the **gradient method** comes in: from the current position, take a small step along the gradient direction (compare the figure above), recompute the gradient at the new position, step along it again, and so on.
Repeatedly moving along the gradient direction to gradually reduce the function value in this way is the **gradient method**, a standard approach to optimization problems in machine learning.
> Strictly speaking, the gradient method that searches for a minimum is called **gradient descent**, and the one that searches for a maximum is called **gradient ascent**; do not confuse these with **boosting** methods.
Expressed as formulas, the gradient method is:
$$x_0=x_0 - \eta \frac{\partial f}{\partial x_0}$$
$$x_1=x_1 - \eta \frac{\partial f}{\partial x_1}$$
Here $\eta$ (read "eta") is the update step. Recall that in the earlier scikit-learn examples the corresponding parameter was usually called `eta`: it is the **learning rate**, and the same is true in neural networks. The learning rate determines how much is learned in a single step, i.e. how far the parameters are updated - like deciding how long each stride is on the way down a hill.
The formulas above describe a single update; we repeat them to gradually reduce the function value.
The learning rate $\eta$ must be neither too large nor too small, otherwise we never reach a "good place". When training neural networks, one typically varies the learning rate while checking whether training proceeds properly.
Code adapted from [gradient_method.py](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_method.py); gradient descent in code:
```
def gradient_descent(f, init_x, lr=0.01, step_num=100):
"""
Gradient descent.
f: the function to be minimized
init_x: initial value
lr: learning rate, 0.01 by default
step_num: number of gradient-descent iterations
"""
x = init_x
x_history = [] # record x at every step
for i in range(step_num):
x_history.append( x.copy() )
grad = numerical_gradient(f, x) # compute the gradient vector
x -= lr * grad
return x, np.array(x_history)
```
With this function we can find a local minimum of a function and, if things go well, its global minimum.
Now let's find the minimum of $f(x_0,x_1)=x_0^2+x_1^2$:
```
init_x = np.array([-3.0, 4.0]) # starting point
result = gradient_descent(function_2, init_x=init_x, lr=0.1, step_num=100) # run gradient descent
print(result[0])
```
The final result is $(-6.11110793\times10^{-10}, 8.14814391\times10^{-10})$, which is very close to the known true minimizer $(0, 0)$, so gradient descent has essentially found the correct answer.
If we plot the successive gradient updates, we get:
```
init_x = np.array([-3.0, 4.0]) # starting point
lr = 0.1
step_num = 20
x, x_history = gradient_descent(function_2, init_x, lr=lr, step_num=step_num)
step = 0.01
x_0 = np.arange(-5,5,step)
x_1 = np.arange(-5,5,step)
X, Y = np.meshgrid(x_0, x_1) # build the grid
Z = function_2_old(X, Y)
plt.contour(X, Y, Z, levels=10, linewidths=0.5, linestyles='dashdot') # draw contour lines
plt.plot(x_history[:,0], x_history[:,1], '.') # plot the gradient-descent trajectory
plt.xlim(-4.5, 4.5)
plt.ylim(-4.5, 4.5)
plt.xlabel("$x_0$")
plt.ylabel("$x_1$")
plt.show()
```
As mentioned above, a **learning rate** that is too large or too small cannot give good results.
We can verify this with an experiment:
```
# Learning rate too large
init_x = np.array([-3.0, 4.0]) # starting point
lr = 10.0 # learning rate
x, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num)
print(x)
# Learning rate too small
init_x = np.array([-3.0, 4.0]) # starting point
lr = 1e-10 # learning rate
x, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num)
print(x)
```
From this we can see:
- if the learning rate is too large, the result diverges to a very large value;
- if the learning rate is too small, training finishes before the parameters have really been updated.
So we need to set an appropriate learning rate. Remember that the learning rate is a **hyperparameter**, and it is usually set by hand.
### Gradients of a neural network
Training a neural network also requires gradients - here, the gradient of the **loss function** with respect to the weight parameters. For example, in [3.01 Neural Networks and Forward Propagation](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/03深度学习基础/3.01%20神经网络与前向传播.ipynb) we built a three-layer network; suppose the weights $W$ of its first layer (layer1) have shape $2\times3$ and the loss is denoted by $L$.
The gradient is then written $\cfrac{\partial L}{\partial W}$. Written out explicitly (note that, for ease of explanation, the subscripts differ from the earlier convention):
$$
\large
W=
\begin{pmatrix}
w_{11} & w_{12} & w_{13} \\
w_{21} & w_{22} & w_{23}\\
\end{pmatrix}
$$
$$
\large
\frac{\partial L}{\partial W}=
\begin{pmatrix}
\cfrac{\partial L}{\partial w_{11}} & \cfrac{\partial L}{\partial w_{12}} & \cfrac{\partial L}{\partial w_{13}} \\
\cfrac{\partial L}{\partial w_{21}} & \cfrac{\partial L}{\partial w_{22}} & \cfrac{\partial L}{\partial w_{23}}\\
\end{pmatrix}
$$
The elements of $\cfrac{\partial L}{\partial W}$ are the partial derivatives of $L$ with respect to the corresponding elements of $W$. For example, the entry in row 1, column 1, $\cfrac{\partial L}{\partial w_{11}}$, expresses how much the loss $L$ changes when $w_{11}$ changes slightly.
Let's implement gradient computation for a simple example network:
```
import os
import sys
import numpy as np
def softmax(a):
"""Softmax function"""
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
def cross_entropy_error(y, t):
"""Cross-entropy loss function"""
delta = 1e-7
return -np.sum(t * np.log(y + delta))
def numerical_gradient(f, X):
"""Compute gradient vectors (handles batched input)"""
if X.ndim == 1:
return _numerical_gradient_no_batch(f, X)
else:
grad = np.zeros_like(X)
for idx, x in enumerate(X):
grad[idx] = _numerical_gradient_no_batch(f, x)
return grad
class simpleNet:
def __init__(self):
"""Initialization"""
# self.W = np.random.randn(2, 3) # Gaussian initialization
self.W = np.array([[ 0.68851943, 2.06916921, -0.88125086],
[-1.30951576, 0.72350587, -1.88984482]])
self.q = 1
def predict(self, x):
"""Forward pass / prediction"""
return np.dot(x, self.W)
def loss(self, x, t):
"""Loss function"""
z = self.predict(x)
y = softmax(z)
loss = cross_entropy_error(y, t)
return loss
```
We have built a small network called `simpleNet`; `softmax` and `cross_entropy_error` are the same as before. The essential instance variable of simpleNet is the weight matrix of shape $2\times 3$.
The network has two methods: `predict`, the forward pass used for prediction, and `loss`, which computes the loss function; the parameter `x` takes the input data and `t` the correct labels.
Now let's run it and look at the result:
```
net = simpleNet()
print(net.W) # weight parameters
x = np.array([0.6, 0.9])
p = net.predict(x) # prediction
print(p)
np.argmax(p) # index of the maximum value (predicted class)
# The "correct" label; if W is randomly initialized this may differ between runs!
t = np.array([0, 1, 0])
# loss
loss1 = net.loss(x, t)
print(loss1)
```
Now let's compute the **gradient** using `numerical_gradient(f, x)`:
Since the argument `f` of `numerical_gradient(f, x)` must be a function, for compatibility we first define a function `f(W)`:
```
def f(W):
return net.loss(x, t)
dW = numerical_gradient(f, net.W)
print(dW)
```
The result of `numerical_gradient(f, net.W)` is $dW$, a matrix of shape $2\times 3$.
Looking at this matrix, in $\cfrac{\partial L}{\partial W}$:
$\cfrac{\partial L}{\partial w_{11}}$ is about 0.039, which means that increasing $w_{11}$ by $h$ increases the loss by about $0.039h$.
$\cfrac{\partial L}{\partial w_{22}}$ is about -0.071, which means that increasing $w_{22}$ by $h$ decreases the loss by about $0.071h$.
So, to reduce the loss, $w_{22}$ should be updated in the positive direction and $w_{11}$ in the negative direction.
Having obtained the network's gradient at the input $x=[0.6, \quad 0.9]$, we only need to update the weight parameters according to the gradient method.
Let's try an update by hand:
```
# learning rate lr
lr = 1e-4
print(lr)
class simpleNet_step2:
def __init__(self):
"""Initialization: weights manually updated once by hand"""
self.W = np.array([[ 0.68851943 - 0.0001, 2.06916921 + 0.0001, -0.88125086 - 0.0001],
[-1.30951576 - 0.0001, 0.72350587 + 0.0001, -1.88984482 - 0.0001]])
self.q = 1
def predict(self, x):
"""Forward pass / prediction"""
return np.dot(x, self.W)
def loss(self, x, t):
"""Loss function"""
z = self.predict(x)
y = softmax(z)
loss = cross_entropy_error(y, t)
return loss
net = simpleNet_step2()
net.W
x = np.array([0.6, 0.9])
p = net.predict(x) # prediction
print(p)
# The maximum value corresponds to the correct answer
t = np.array([0, 1, 0])
# loss
loss2 = net.loss(x, t)
print(loss2)
if loss2 < loss1:
print("loss2 is smaller than loss1 by:", loss1 - loss2)
```
As we can see, after updating the weight parameters according to the gradient method (with the learning rate as the step size), the value of the loss function decreased.
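The hard-coded weights above are only for illustration. A sketch of the same update done programmatically, reusing `numerical_gradient`, `f`, `net`, `x` and `t` as defined in the cells above (the learning rate here is an arbitrary example value), could look like this:
```
# A few iterations of plain gradient descent on the weights of `net`.
lr = 0.1
for step in range(10):
    dW = numerical_gradient(f, net.W)  # gradient of the loss w.r.t. the weights
    net.W -= lr * dW                   # step along the negative gradient
    print(step, net.loss(x, t))        # the loss should keep decreasing
```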
## Summary of the learning algorithm
We have now covered the concepts "loss function", "mini-batch", "gradient" and "gradient descent". Let's review the steps of neural network learning:
1. **Mini-batch**:
Select a **random** subset of the training data; this subset is called a mini-batch. Our goal is to reduce the value of the loss function on the mini-batch.
>In PyTorch this is handled by `torch.utils.data`; see [TORCH.UTILS.DATA](https://pytorch.org/docs/stable/data.html#multi-process-data-loading).
>
>In TensorFlow it is handled by `tf.data`; see [tf.data: Build TensorFlow input pipelines](https://tensorflow.google.cn/guide/data).
2. **Compute the gradient**:
To reduce the mini-batch loss, compute the gradient with respect to each weight parameter. The gradient indicates the direction in which the loss decreases the most.
3. **Update the parameters**:
Move the weight parameters $W$ by a small amount in the direction indicated by the gradient.
4. **Repeat**:
Repeat steps 1, 2 and 3.
Neural network learning proceeds roughly according to these four steps. The parameters are updated with gradient descent, and because the mini-batch data are selected at **random**, the method is called **stochastic gradient descent (SGD)** - that is where the name comes from.
In most deep learning frameworks, stochastic gradient descent is provided by a function named **SGD**:
- TensorFlow: `tf.keras.optimizers.SGD`
- PyTorch: `torch.optim.SGD`
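For reference, here is a minimal (hypothetical) sketch of one SGD update step with the PyTorch optimizer listed above; `model`, `inputs` and `targets` are placeholder names, not objects defined in this tutorial:
```
import torch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model is a placeholder nn.Module
criterion = torch.nn.CrossEntropyLoss()
optimizer.zero_grad()                      # clear previously accumulated gradients
loss = criterion(model(inputs), targets)   # forward pass + loss
loss.backward()                            # gradients via backpropagation
optimizer.step()                           # parameter update: W <- W - lr * grad
```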
So far we have computed the gradients for stochastic gradient descent by numerical differentiation, whose drawback is that it is computationally very expensive. Later we will learn the **error backpropagation** method, which solves this problem.
| github_jupyter |
# AEJxLPS (Auroral electrojets SECS)
> Abstract: Access to the AEBS products, SECS type. This notebook reuses code from the previous notebook to build a routine that can plot either the LC or the SECS products - a prototype quicklook routine.
```
%load_ext watermark
%watermark -i -v -p viresclient,pandas,xarray,matplotlib
from viresclient import SwarmRequest
import datetime as dt
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib as mpl
request = SwarmRequest()
```
## AEBS product information
See previous notebook, "Demo AEBS products (LC)", for an introduction to these products.
### Function to request data from VirES and reshape it
```
def fetch_data(start_time=None, end_time=None, spacecraft=None, AEBS_type="L"):
"""DUPLICATED FROM PREVIOUS NOTEBOOK. TO BE REFACTORED"""
# Fetch data from VirES
auxiliaries = ['OrbitNumber', 'QDLat', 'QDOrbitDirection', 'OrbitDirection', 'MLT']
if AEBS_type == "L":
measurement_vars = ["J_NE"]
elif AEBS_type == "S":
measurement_vars = ["J_CF_NE", "J_DF_NE"]
# Fetch LPL/LPS
request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F')
request.set_products(
measurements=measurement_vars,
auxiliaries=auxiliaries,
)
data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds_lp = data.as_xarray()
# Fetch LPL/LPS Quality
request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F:Quality')
request.set_products(
measurements=['RMS_misfit', 'Confidence'],
)
data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds_lpq = data.as_xarray()
# Fetch PBL
request.set_collection(f'SW_OPER_AEJ{spacecraft}PB{AEBS_type}_2F')
request.set_products(
measurements=['PointType', 'Flags'],
auxiliaries=auxiliaries
)
data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)
ds_pb = data.as_xarray()
# Meaning of PointType
PointType_meanings = {
"WEJ_peak": 0, # minimum
"EEJ_peak": 1, # maximum
"WEJ_eq_bound_s": 2, # equatorward (pair start)
"EEJ_eq_bound_s": 3,
"WEJ_po_bound_s": 6, # poleward
"EEJ_po_bound_s": 7,
"WEJ_eq_bound_e": 10, # equatorward (pair end)
"EEJ_eq_bound_e": 11,
"WEJ_po_bound_e": 14, # poleward
"EEJ_po_bound_e": 15,
}
# Add new data variables (boolean Type) according to the dictionary above
ds_pb = ds_pb.assign(
{name: ds_pb["PointType"] == PointType_meanings[name]
for name in PointType_meanings.keys()}
)
# Merge datasets together
def drop_duplicate_times(_ds):
_, index = np.unique(_ds['Timestamp'], return_index=True)
return _ds.isel(Timestamp=index)
def merge_attrs(_ds1, _ds2):
attrs = {"Sources":[], "MagneticModels":[], "RangeFilters":[]}
for item in ["Sources", "MagneticModels", "RangeFilters"]:
attrs[item] = list(set(_ds1.attrs[item] + _ds2.attrs[item]))
return attrs
# Create new dataset from just the newly created PointType arrays
# This is created on a non-repeating Timestamp coordinate
ds = xr.Dataset(
{name: ds_pb[name].where(ds_pb[name], drop=True)
for name in PointType_meanings.keys()}
)
# Merge in the positional and auxiliary data
data_vars = list(set(ds_pb.data_vars).difference(set(PointType_meanings.keys())))
data_vars.remove("PointType")
ds = ds.merge(
(ds_pb[data_vars]
.pipe(drop_duplicate_times))
)
# Merge together with the LPL data
# Note that the Timestamp coordinates aren't equal
# Separately merge data with matching and missing time sample points in ds_lpl
idx_present = list(set(ds["Timestamp"].values).intersection(set(ds_lp["Timestamp"].values)))
idx_missing = list(set(ds["Timestamp"].values).difference(set(ds_lp["Timestamp"].values)))
# Override prioritises the first dataset (ds_lpl) where there are conflicts
ds2 = ds_lp.merge(ds.sel(Timestamp=idx_present), join="outer", compat="override")
ds2 = ds2.merge(ds.sel(Timestamp=idx_missing), join="outer")
# Update the metadata
ds2.attrs = merge_attrs(ds_lp, ds_pb)
# Switch the point type arrays to uint8 or bool for performance?
# But the .where operations later cast them back to float64 since gaps are filled with nan
for name in PointType_meanings.keys():
ds2[name] = ds2[name].astype("uint8").fillna(False)
# ds2[name] = ds2[name].fillna(False).astype(bool)
ds = ds2
# Append the PBL Flags information into the LPL:Quality dataset to use as a lookup table
ds_lpq = ds_lpq.assign(
Flags_PBL=
ds_pb["Flags"]
.pipe(drop_duplicate_times)
.reindex_like(ds_lpq, method="nearest"),
)
return ds, ds_lpq
```
### Plotting function
```
# Bit numbers which indicate non-nominal state
# Check SW-DS-DTU-GS-003_AEBS_PDD for details
BITS_PBL_FLAGS_EEJ_MINOR = (2, 3, 6)
BITS_PBL_FLAGS_WEJ_MINOR = (4, 5, 6)
BITS_PBL_FLAGS_EEJ_BAD = (0, 7, 8, 11)
BITS_PBL_FLAGS_WEJ_BAD = (1, 9, 10, 12)
def check_PBL_Flags(flags=0b0, EJ_type="WEJ"):
"""Return "good", "poor", or "bad" depending on status"""
def _check_bits(bitno_set):
return any(flags & (1 << bitno) for bitno in bitno_set)
if EJ_type == "WEJ":
if _check_bits(BITS_PBL_FLAGS_WEJ_BAD):
return "bad"
elif _check_bits(BITS_PBL_FLAGS_WEJ_MINOR):
return "poor"
else:
return "good"
elif EJ_type == "EEJ":
if _check_bits(BITS_PBL_FLAGS_EEJ_BAD):
return "bad"
elif _check_bits(BITS_PBL_FLAGS_EEJ_MINOR):
return "poor"
else:
return "good"
glyphs = {
"WEJ_peak": {"marker": 'v', "color":'tab:red'}, # minimum
"EEJ_peak": {"marker": '^', "color":'tab:purple'}, # maximum
"WEJ_eq_bound_s": {"marker": '>', "color":'black'}, # equatorward (pair start)
"EEJ_eq_bound_s": {"marker": '>', "color":'black'},
"WEJ_po_bound_s": {"marker": '>', "color":'black'}, # poleward
"EEJ_po_bound_s": {"marker": '>', "color":'black'},
"WEJ_eq_bound_e": {"marker": '<', "color":'black'}, # equatorward (pair end)
"EEJ_eq_bound_e": {"marker": '<', "color":'black'},
"WEJ_po_bound_e": {"marker": '<', "color":'black'}, # poleward
"EEJ_po_bound_e": {"marker": '<', "color":'black'},
}
def plot_stack(ds, ds_lpq, hemisphere="North", x_axis="Latitude", AEBS_type="L"):
# Identify which variable to plot from dataset
# If accessing the SECS (LPS) data, sum the DF & CF parts
if "J_CF_NE" in ds.data_vars:
ds["J_NE"] = ds["J_DF_NE"] + ds["J_CF_NE"]
plotvar = "J_NE"
orbdir = "OrbitDirection" if x_axis=="Latitude" else "QDOrbitDirection"
markersize = 1 if AEBS_type=="S" else 5
# Select hemisphere
if hemisphere == "North":
ds = ds.where(ds["Latitude"]>0, drop=True)
elif hemisphere == "South":
ds = ds.where(ds["Latitude"]<0, drop=True)
# Generate plot with split by columns: ascending/descending to/from pole
# by rows: successive orbits
fig, axes = plt.subplots(
nrows=len(ds.groupby("OrbitNumber")), ncols=2, sharex="col", sharey="all",
figsize=(10, 20)
)
max_ylim = np.max(np.abs(ds[plotvar].sel({"NE": "E"})))
# Loop through each orbit
for i, (_, ds_orbit) in enumerate(ds.groupby("OrbitNumber")):
if hemisphere == "North":
ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True)
ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True)
if hemisphere == "South":
ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True)
ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True)
# Loop through ascending and descending sections
for j, _ds in enumerate((ds_orb_asc, ds_orb_desc)):
if len(_ds.Timestamp) == 0:
continue
# Line plot of current strength
axes[i, j].plot(
_ds[x_axis], _ds[plotvar].sel({"NE": "E"}),
color="tab:blue", marker=".", markersize=markersize, linestyle=""
)
axes[i, j].plot(
_ds[x_axis], _ds[plotvar].sel({"NE": "N"}),
color="tab:grey", marker=".", markersize=markersize, linestyle=""
)
# Plot glyphs at the peaks and boundaries locations
for name in glyphs.keys():
__ds = _ds.where(_ds[name], drop=True)
try:
for lat in __ds[x_axis]:
axes[i, j].plot(
lat, 0,
marker=glyphs[name]["marker"], color=glyphs[name]["color"]
)
except Exception:
pass
# Identify Quality and Flags info
# Use either the start time of the section or the end, depending on asc or desc
index = 0 if j == 0 else -1
t = _ds["Timestamp"].isel(Timestamp=index).values
_ds_qualflags = ds_lpq.sel(Timestamp=t, method="nearest")
pbl_flags = int(_ds_qualflags["Flags_PBL"].values)
lpl_rms_misfit = float(_ds_qualflags["RMS_misfit"].values)
lpl_confidence = float(_ds_qualflags["Confidence"].values)
# Shade WEJ and EEJ regions, only if well-defined
# def _shade_EJ_region(_ds=None, EJ="WEJ", color="tab:red", alpha=0.3):
wej_status = check_PBL_Flags(pbl_flags, "WEJ")
eej_status = check_PBL_Flags(pbl_flags, "EEJ")
if wej_status in ["good", "poor"]:
alpha = 0.3 if wej_status == "good" else 0.1
try:
WEJ_left = _ds.where(
(_ds["WEJ_eq_bound_s"] == 1) | (_ds["WEJ_po_bound_s"] == 1), drop=True)
WEJ_right = _ds.where(
(_ds["WEJ_eq_bound_e"] == 1) | (_ds["WEJ_po_bound_e"] == 1), drop=True)
x1 = WEJ_left[x_axis][0]
x2 = WEJ_right[x_axis][0]
axes[i, j].fill_betweenx(
[-max_ylim, max_ylim], [x1, x1], [x2, x2], color="tab:red", alpha=alpha)
except Exception:
pass
if eej_status in ["good", "poor"]:
alpha = 0.3 if eej_status == "good" else 0.15
try:
EEJ_left = _ds.where(
(_ds["EEJ_eq_bound_s"] == 1) | (_ds["EEJ_po_bound_s"] == 1), drop=True)
EEJ_right = _ds.where(
(_ds["EEJ_eq_bound_e"] == 1) | (_ds["EEJ_po_bound_e"] == 1), drop=True)
x1 = EEJ_left[x_axis][0]
x2 = EEJ_right[x_axis][0]
axes[i, j].fill_betweenx(
[-max_ylim, max_ylim], [x1, x1], [x2, x2], color="tab:purple", alpha=alpha)
except Exception:
pass
# Write the LPL:Quality and PBL Flags info
ha = "right" if j == 0 else "left"
textx = 0.98 if j == 0 else 0.02
axes[i, j].text(
textx, 0.95,
f"RMS Misfit {np.round(lpl_rms_misfit, 2)}; Confidence {np.round(lpl_confidence, 2)}",
transform=axes[i, j].transAxes, verticalalignment="top", horizontalalignment=ha
)
axes[i, j].text(
textx, 0.05,
f"PBL Flags {pbl_flags:013b}",
transform=axes[i, j].transAxes, verticalalignment="bottom", horizontalalignment=ha
)
# Write the start/end time and MLT of the section, and the orbit number
def _format_utc(t):
return f"UTC {t.strftime('%H:%M')}"
def _format_mlt(mlt):
hour, fraction = divmod(mlt, 1)
t = dt.time(int(hour), minute=int(60*fraction))
return f"MLT {t.strftime('%H:%M')}"
try:
# Left part (section starting UTC, MLT, OrbitNumber)
time_s = pd.to_datetime(ds_orb_asc["Timestamp"].isel(Timestamp=0).data)
mlt_s = ds_orb_asc["MLT"].dropna(dim="Timestamp").isel(Timestamp=0).data
orbit_number = int(ds_orb_asc["OrbitNumber"].isel(Timestamp=0).data)
axes[i, 0].text(
0.01, 0.95, f"{_format_utc(time_s)}\n{_format_mlt(mlt_s)}",
transform=axes[i, 0].transAxes, verticalalignment="top"
)
axes[i, 0].text(
0.01, 0.05, f"Orbit {orbit_number}",
transform=axes[i, 0].transAxes, verticalalignment="bottom"
)
except Exception:
pass
try:
# Right part (section ending UTC, MLT)
time_e = pd.to_datetime(ds_orb_desc["Timestamp"].isel(Timestamp=-1).data)
mlt_e = ds_orb_desc["MLT"].dropna(dim="Timestamp").isel(Timestamp=-1).data
axes[i, 1].text(
0.99, 0.95, f"{_format_utc(time_e)}\n{_format_mlt(mlt_e)}",
transform=axes[i, 1].transAxes, verticalalignment="top", horizontalalignment="right"
)
except Exception:
pass
# Extra config of axes and figure text
axes[0, 0].set_ylim(-max_ylim, max_ylim)
if hemisphere == "North":
axes[0, 0].set_xlim(50, 90)
axes[0, 1].set_xlim(90, 50)
elif hemisphere == "South":
axes[0, 0].set_xlim(-50, -90)
axes[0, 1].set_xlim(-90, -50)
for ax in axes.flatten():
ax.grid()
axes[-1, 0].set_xlabel(x_axis)
axes[-1, 0].set_ylabel("Horizontal currents\n[ A.km$^{-1}$ ]")
time = pd.to_datetime(ds["Timestamp"].isel(Timestamp=0).data)
spacecraft = ds["Spacecraft"].dropna(dim="Timestamp").isel(Timestamp=0).data
AEBS_type_name = "LC" if AEBS_type == "L" else "SECS"
fig.text(
0.5, 0.9, f"{time.strftime('%Y-%m-%d')}\nSwarm {spacecraft}\n{hemisphere}\nAEBS: {AEBS_type_name}",
transform=fig.transFigure, horizontalalignment="center",
)
fig.subplots_adjust(wspace=0, hspace=0)
return fig, axes
```
### Fetching and plotting function
```
def quicklook(day="2015-01-01", hemisphere="North", spacecraft="A", AEBS_type="L", xaxis="Latitude"):
start_time = dt.datetime.fromisoformat(day)
end_time = start_time + dt.timedelta(days=1)
ds, ds_lpq = fetch_data(start_time, end_time, spacecraft, AEBS_type)
fig, axes = plot_stack(ds, ds_lpq, hemisphere, xaxis, AEBS_type)
return ds, fig, axes
```
Consecutive orbits are shown in consecutive rows, centered over the pole. The starting and ending times (UTC and MLT) of the orbital section are shown at the left and right. Westward (WEJ) and Eastward (EEJ) electrojet extents and peak intensities are indicated:
- Blue dots: Estimated current density in Eastward direction, J_NE (E)
- Grey dots: Estimated current density in Northward direction, J_NE (N)
- Red/Purple shaded region: WEJ/EEJ extent (boundaries marked by black triangles)
- Red/Purple triangles: Locations of peak WEJ/EEJ intensity
Select `AEBS_type="S"` to get SECS results, or `AEBS_type="L"` to get LC results:
- SECS = spherical elementary current systems method
- LC = line current method
Notes:
- The code is currently quite fragile, so it is broken on some days.
- Sometimes the electrojet regions are not shaded correctly.
- Only the horizontal currents are currently shown.
```
quicklook(day="2016-01-01", hemisphere="North", spacecraft="A", AEBS_type="S", xaxis="Latitude");
quicklook(day="2016-01-01", hemisphere="North", spacecraft="A", AEBS_type="L", xaxis="Latitude");
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
from allennlp.commands.evaluate import *
from kb.include_all import *
from allennlp.nn import util as nn_util
from allennlp.common.tqdm import Tqdm
import torch
import warnings
warnings.filterwarnings("ignore")
archive_file = "knowbert_wiki_wordnet_model"
cuda_device = -1
# line = "banana\tcolor\tyellow"
# import logging
# logger = logging.getLogger() # no name given, so this configures the root logger
# logger.setLevel(logging.INFO)
# formatter = logging.Formatter(
# '%(asctime)s - %(name)s - %(levelname)s: - %(message)s',
# datefmt='%Y-%m-%d %H:%M:%S')
# # use a FileHandler to write log output to a file
# fh = logging.FileHandler('log.txt')
# fh.setLevel(logging.DEBUG)
# fh.setFormatter(formatter)
# # use a StreamHandler to write log output to the screen
# ch = logging.StreamHandler()
# ch.setLevel(logging.DEBUG)
# ch.setFormatter(formatter)
# # register both handlers
# logger.addHandler(ch)
# logger.addHandler(fh)
archive = load_archive(archive_file, cuda_device)
config = archive.config
prepare_environment(config)
config = Params.from_file("/nas/home/gujiashe/kb/knowbert_wiki_wordnet_model/config.json")
reader_params = config.pop('dataset_reader')
if reader_params['type'] == 'multitask_reader':
reader_params = reader_params['dataset_readers']['language_modeling']
# reader_params['num_workers'] = 0
validation_reader_params = {
"type": "kg_probe",
"tokenizer_and_candidate_generator": reader_params['base_reader']['tokenizer_and_candidate_generator'].as_dict()
}
dataset_reader = DatasetReader.from_params(Params(validation_reader_params))
vocab = dataset_reader._tokenizer_and_candidate_generator.bert_tokenizer.vocab
token2word = {}
for k, v in vocab.items():
token2word[v] = k
dataset_path = "/nas/home/gujiashe/trans/knowbert_ppl_top10.tsv"
birth_property = dataset_path.split('_')[1].split('.')[0]  # used below to name the output files
instances = dataset_reader.read(dataset_path)
print(instances[0])
import pandas as pd
# birth_df = pd.read_csv('/nas/home/gujiashe/kb/sentences_birth.tsv', sep='\t', header = None)
birth_df = pd.read_csv(dataset_path, sep='\t', header = None)
birth = birth_df.values
print(birth[0])
instances[0]["lm_label_ids"][2]
import csv
model = archive.model
model.eval()
print("start")
# metrics = evaluate(model, instances, iterator, cuda_device, "")
data_iterator = DataIterator.from_params(Params(
{"type": "basic", "batch_size": 1}
))
data_iterator.index_with(model.vocab)
iterator = data_iterator(instances,
num_epochs=1,
shuffle=False)
logger.info("Iterating over dataset")
generator_tqdm = Tqdm.tqdm(iterator, total=data_iterator.get_num_batches(instances))
rows_id = 0
# with open('birth3.txt', 'wt') as f:
birth_spreadsheet = open(birth_property+"_spreadsheet_knowbert.tsv", "w")
tsv_writer = csv.writer(birth_spreadsheet, delimiter='\t')
total_ranks = []
for instance in generator_tqdm:
rows_id+=1
batch = nn_util.move_to_device(instance, cuda_device)
output_dict = model(**batch)
pooled_output = output_dict.get("pooled_output")
contextual_embeddings = output_dict.get("contextual_embeddings")
prediction_scores, seq_relationship_score = model.pretraining_heads(
contextual_embeddings, pooled_output
)
prediction_scores = prediction_scores.view(-1, prediction_scores.shape[-1])
ranks = torch.argsort(prediction_scores, dim = 1, descending=True)
ranks = torch.argsort(ranks, dim = 1)
vals, idxs = torch.topk(prediction_scores, k = 5, dim = 1)
idxs = idxs.cpu().numpy()
lines = []
# print("row: ", rows_id, file=f)
# print("================", file=f)
# print("source: ", birth[rows_id - 1, 1], file = f)
masked_tokens = []
for id in range(len(idxs)):
masked_tokens += [token2word[instance["tokens"]["tokens"][0][id].item()]]
masked_tokens = " ".join(masked_tokens[1: -1])
masked_ranks = []
for k in range(1):
source = []
line = []
for i, idx in enumerate(idxs):
if instance["tokens"]["tokens"][0][i] != 103:
line += [token2word[instance["tokens"]["tokens"][0][i].item()]]
else:
line += [token2word[idx[k]]]
text_id = instance["lm_label_ids"]["lm_labels"][0][i].item()
masked_ranks += [ranks[i][text_id].item()]
line = " ".join(line[1: -1])
line = line.split(" ##")
line = "".join(line)
total_ranks += masked_ranks
masked_ranks = list(map(str, masked_ranks))
masked_ranks = ",".join(masked_ranks)
# print("{} : ".format(k) + line, file=f)
# print("masked_ranks: ", masked_ranks , file = f)
# print("masked_tokens: ", masked_tokens, file = f)
# print("================", file=f)
row = [birth[rows_id - 1, 1], masked_tokens, line, masked_ranks]
tsv_writer.writerow(row)
# if rows_id>100:
# break
birth_spreadsheet.close()
%matplotlib notebook
import matplotlib.pyplot as plt
plt.hist(total_ranks, bins = 100, range = [0, 100])
plt.savefig(birth_property+'_knowbert.jpg')
```
| github_jupyter |
```
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, Dataset
from torchvision import datasets, transforms
from torchvision.utils import make_grid
import matplotlib
from matplotlib import pyplot as plt
import seaborn as sns
from IPython import display
import torchsummary as ts
import numpy as np
sns.set()
display.set_matplotlib_formats("svg")
plt.rcParams['font.sans-serif'] = "Liberation Sans"
device = torch.device("cuda")
torch.cuda.is_available()
trans = transforms.Compose([
transforms.Resize((32, 32)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
dataset = datasets.ImageFolder("dataset/faces/", transform=trans)
data_loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4,
drop_last=True)
images = make_grid(next(iter(data_loader))[0], normalize=True, padding=5, pad_value=1)
plt.imshow(images.permute(1, 2, 0))
plt.axis("off")
plt.grid(False)
def imshow(data):
images = make_grid(data.detach().cpu() , normalize=True, padding=5, pad_value=1)
plt.imshow(images.permute(1, 2, 0))
plt.axis("off")
plt.grid(False)
plt.pause(0.0001)
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.main = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=4, stride=2,
padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(in_channels=6, out_channels=12, kernel_size=4, stride=2,
padding=1),
nn.BatchNorm2d(12),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(in_channels=12, out_channels=24, kernel_size=4, stride=2,
padding=1),
nn.BatchNorm2d(24),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(24, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, x):
x = self.main(x)
x = x.reshape(-1)
return x
class Generator(nn.Module):
def __init__(self, init_size=100):
super().__init__()
self.expand_dim = nn.Linear(init_size, 1024)
self.init_size = init_size
self.main = nn.Sequential(
nn.ConvTranspose2d(64, 32, kernel_size=4,
stride=2, padding=1, bias=False),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.ConvTranspose2d(32, 12, kernel_size=4,
stride=2, padding=1, bias=False),
nn.BatchNorm2d(12),
nn.ReLU(),
nn.ConvTranspose2d(12, 3, kernel_size=4,
stride=2, padding=1, bias=False),
nn.Tanh()
)
def forward(self, x):
x = self.expand_dim(x).reshape(-1, 64, 4, 4)
x = self.main(x)
return x
netD = Discriminator()
netD(torch.randn(16, 3, 32, 32))
netG = Generator()
netG(torch.randn(16, 100)).shape
BATCH_SIZE = 64
ININT_SIZE = 100
data_loader = DataLoader(dataset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=4, drop_last=True)
Epoch = 1000
D_losses = []
G_losses = []
generator = Generator(ININT_SIZE).to(device)
discirminator = Discriminator().to(device)
generator.apply(weights_init)
discirminator.apply(weights_init)
criterion = nn.BCELoss()
OPTIMIZER_G = optim.Adam(generator.parameters(), lr=4e-4, betas=(0.5, 0.999))
OPTIMIZER_D = optim.Adam(discirminator.parameters(), lr=1e-4, betas=(0.5, 0.999))
pdr, pdf, pg = None, None, None
for epoch in range(1, 1 + Epoch):
dis_temp_loss = []
gen_temp_loss = []
for idx, (d, l) in enumerate(data_loader):
d = d.to(device)
l = l.float().to(device)
out = discirminator(d)
pdr = out.mean().item()
real_loss = criterion(out, torch.ones_like(l))
noise = torch.randn(BATCH_SIZE, ININT_SIZE).to(device)
images = generator(noise)
out = discirminator(images.detach().to(device))
pdf = out.mean().item()
fake_loss = criterion(out, torch.zeros_like(l))
OPTIMIZER_D.zero_grad()
real_loss.backward()
fake_loss.backward()
OPTIMIZER_D.step()
noise = torch.randn(BATCH_SIZE, ININT_SIZE).to(device)
images = generator(noise)
out = discirminator(images)
pg = out.mean().item()
loss = criterion(out, torch.ones_like(l))
OPTIMIZER_G.zero_grad()
loss.backward()
OPTIMIZER_G.step()
d_loss = fake_loss + real_loss
print("Epoch = {:<2} Step[{:3}/{:3}] Dis-Loss = {:.5f} Gen-Loss = {:.5f} acc = {} {} {}"\
.format(epoch, idx + 1, len(data_loader), d_loss.item(),
loss.item(), pdr, pdf, pg))
dis_temp_loss.append(d_loss.item())
gen_temp_loss.append(loss.item())
D_losses.append(np.mean(dis_temp_loss))
G_losses.append(np.mean(gen_temp_loss))
if epoch > 1:
fig, ax = plt.subplots()
ax.plot(np.arange(len(D_losses)) + 1,
D_losses, label="Discriminator", ls="-.")
ax.plot(np.arange(len(G_losses)) + 1,
G_losses, label="Generator", ls="--")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax.set_title("GAN Training process")
ax.legend(bbox_to_anchor=[1, 1.02])
plt.pause(0.0001)
imshow(images[:16])
imshow(d[:16])
if epoch % 10 == 0:
display.clear_output()
```
| github_jupyter |
```
# this is a little trick to make sure the the notebook takes up most of the screen:
from IPython.display import HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# Recommendation to leave the logging config like this, otherwise you'll be flooded with unnecessary info
import logging
logging.basicConfig(level=logging.WARNING, format='%(levelname)s:%(message)s')
# Recommendation: logging config like this, otherwise you'll be flooded with unnecessary information
import logging
logging.basicConfig(level=logging.ERROR)
import sys
sys.path.append('../')
# import all modeling concepts
from crestdsl.model import *
# import the simulator
from crestdsl.simulation import Simulator
# import the plotting libraries that can visualise the CREST systems
from crestdsl.ui import elk
# we will create tests for each Entity
import unittest
class TestClass(unittest.TestCase):
@classmethod
def runall(cls):
tests = unittest.TestLoader().loadTestsFromTestCase(cls)
return unittest.TextTestRunner().run(tests)
class Resources(object):
electricity = Resource("Watt", REAL)
switch = Resource("switch", ["on", "off"])
pourcent = Resource("%", REAL)
light = Resource("Lumen", INTEGER)
time = Resource("minutes", REAL)
water = Resource("litre", REAL)
celsius = Resource("Celsius", REAL)
counter = Resource("Count", INTEGER)
fahrenheit = Resource("Fahrenheit", REAL)
boolean = Resource("bool", BOOL)
presence = Resource("presence", ["detected", "no presence"])
onOffAuto = Resource("onOffAutoSwitch", ["on", "off", "auto"])
integer = Resource("integer", INTEGER)
weight = Resource("kg", REAL)
lenght = Resource("m", REAL)
area = Resource("m²", REAL)
class ElectricalDevice(object):
electricity_in = Input(Resources.electricity, value=0)
req_electricity_out = Output(Resources.electricity, value=0)
class WaterDevice(object):
water_in = Input(Resources.water, value=0)
req_water_out = Output(Resources.water, value=0)
class LightElement(Entity):
"""This is a definition of a new Entity type. It derives from CREST's Entity base class."""
"""we define ports - each has a resource and an initial value"""
electricity = Input(resource=Resources.electricity, value=0)
light = Output(resource=Resources.light, value=0)
"""automaton states - don't forget to specify one as the current state"""
on = State()
off = current = State()
"""transitions and guards (as lambdas)"""
off_to_on = Transition(source=off, target=on, guard=(lambda self: self.electricity.value >= 100))
on_to_off = Transition(source=on, target=off, guard=(lambda self: self.electricity.value < 100))
"""
update functions. They are related to a state, define the port to be updated and return the port's new value
Remember that updates need two parameters: self and dt.
"""
@update(state=on, target=light)
def set_light_on(self, dt=0):
return 800
@update(state=off, target=light)
def set_light_off(self, dt=0):
return 0
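# (Illustrative sketch, not from the original notebook.) The TestClass helper
# defined above could be used to unit-test LightElement like this. The values
# 100 and 800 come from LightElement's guards and updates; the Simulator calls
# mirror the usage further below and are assumed to behave the same way here.
class LightElementTest(TestClass):
    def test_on_with_enough_electricity(self):
        lamp = LightElement()
        lamp.electricity.value = 100
        Simulator(lamp).stabilize()
        self.assertEqual(lamp.light.value, 800)
    def test_off_without_enough_electricity(self):
        lamp = LightElement()
        lamp.electricity.value = 50
        Simulator(lamp).stabilize()
        self.assertEqual(lamp.light.value, 0)
# LightElementTest.runall()  # uncomment to run the tests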
class HeatElement(Entity):
""" Ports """
electricity = Input(resource=Resources.electricity, value=0)
switch = Input(resource=Resources.switch, value="off") # the heatelement has its own switch
heat = Output(resource=Resources.celsius, value=0) # and produces a celsius value (i.e. the temperature increase underneath the lamp)
""" Automaton (States) """
state = current = State() # the only state of this entity
"""Update"""
@update(state=state, target=heat)
def heat_output(self, dt):
# When the lamp is on, then we convert electricity to temperature at a rate of 100Watt = 1Celsius
if self.switch.value == "on":
return self.electricity.value / 100
else:
return 0
# show us what it looks like
elk.plot(HeatElement())
# a logical entity (this one sums two values)
class Adder(LogicalEntity):
heat_in = Input(resource=Resources.celsius, value=0)
room_temp_in = Input(resource=Resources.celsius, value=22)
temperature = Output(resource=Resources.celsius, value=22)
state = current = State()
@update(state=state, target=temperature)
def add(self, dt):
return self.heat_in.value + self.room_temp_in.value
elk.plot(Adder()) # try adding the display option 'show_update_ports=True' and see what happens!
# Entity composed by the 3 subentities above, heats and lights the plant to an optimal value
class GrowLamp(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
electricity = Input(resource=Resources.electricity, value=0)
switch = Input(resource=Resources.switch, value="off")
heat_switch = Input(resource=Resources.switch, value="on")
room_temperature = Input(resource=Resources.fahrenheit, value=71.6)
light = Output(resource=Resources.light, value=3.1415*1000) # note that these are bogus values for now
temperature = Output(resource=Resources.celsius, value=42) # yes, nonsense..., they are updated when simulated
on_time = Local(resource=Resources.time, value=0)
on_count = Local(resource=Resources.counter, value=0)
""" - - - - - - - SUBENTITIES - - - - - - - - - - """
lightelement = LightElement()
heatelement = HeatElement()
adder = Adder()
""" - - - - - - - INFLUENCES - - - - - - - - - - """
"""
Influences specify a source port and a target port.
They are always executed, independent of the automaton's state.
Since they are called directly with the source-port's value, a self-parameter is not necessary.
"""
@influence(source=room_temperature, target=adder.room_temp_in)
def fahrenheit_to_celsius(value):
return (value - 32) * 5 / 9
# we can also define updates and influences with lambda functions...
heat_to_add = Influence(source=heatelement.heat, target=adder.heat_in, function=(lambda val: val))
# if the lambda function doesn't do anything (like the one above) we can omit it entirely...
add_to_temp = Influence(source=adder.temperature, target=temperature)
light_to_light = Influence(source=lightelement.light, target=light)
heat_switch_influence = Influence(source=heat_switch, target=heatelement.switch)
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
on = State()
off = current = State()
error = State()
off_to_on = Transition(source=off, target=on, guard=(lambda self: self.switch.value == "on" and self.electricity.value >= 100))
on_to_off = Transition(source=on, target=off, guard=(lambda self: self.switch.value == "off" or self.electricity.value < 100))
# transition to error state if the lamp ran for more than 1000.5 time units
@transition(source=on, target=error)
def to_error(self):
"""More complex transitions can be defined as a function. We can use variables and calculations"""
timeout = self.on_time.value >= 1000.5
heat_is_on = self.heatelement.switch.value == "on"
return timeout and heat_is_on
""" - - - - - - - UPDATES - - - - - - - - - - """
# LAMP is OFF or ERROR
@update(state=[off, error], target=lightelement.electricity)
def update_light_elec_off(self, dt):
# no electricity
return 0
@update(state=[off, error], target=heatelement.electricity)
def update_heat_elec_off(self, dt):
# no electricity
return 0
# LAMP is ON
@update(state=on, target=lightelement.electricity)
def update_light_elec_on(self, dt):
# the lightelement gets the first 100Watt
return 100
@update(state=on, target=heatelement.electricity)
def update_heat_elec_on(self, dt):
# the heatelement gets the rest
return self.electricity.value - 100
@update(state=on, target=on_time)
def update_time(self, dt):
# also update the on_time so we know whether we overheat
return self.on_time.value + dt
""" - - - - - - - ACTIONS - - - - - - - - - - """
# let's add an action that counts the number of times we switch to state "on"
@action(transition=off_to_on, target=on_count)
def count_switching_on(self):
"""
Actions are functions that are executed when the related transition is fired.
Note that actions do not have a dt.
"""
return self.on_count.value + 1
# create an instance!
elk.plot(GrowLamp())
# Really bad model of the light sent by the sun, taken from the "SmartHome" file
class Sun(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
time_In = Input(Resources.time, 0)
time_Local = Local(Resources.time, 0)
light_Out = Output(Resources.light, 0)
""" - - - - - - - SUBENTITIES - - - - - - - - - - """
#None
""" - - - - - - - INFLUENCES - - - - - - - - - - """
#None
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
state = current = State()
time_propagation = Influence(source = time_In, target = time_Local)
""" - - - - - - - UPDATES - - - - - - - - - - """
# Here I assume that dt = 1 minute
# So there are 60*24 = 1440 minutes per day
# TODO: create an influence between the time variables
@update(state = state, target = light_Out)
def update_light_out(self, dt):
t = self.time_Local.value % 1440 # minute within the current day
if(t > 21*60):
light = 0
elif(t > 16*60):
light = 20000*(21*60 - t)//60 # we want 20000 * a number between 0 and 5
elif(t > 12*60):
light = 100000
elif(t > 7*60):
light = 20000*(abs(t - 7*60))//60 # we want 20000 * a number between 0 and 5
else:
light = 0
return light
#elk.plot(Sun())
sun=Sun()
sun.time_In.value = 691
ssim=Simulator(sun)
ssim.stabilize()
ssim.plot()
# Takes water from the grid and sends it to the plant reservoir
class Pump(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
water_in = Input(Resources.water, 100)
size_pipe = Local(Resources.water, 2)
water_send = Output(Resources.water, 2)
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
state = current = State()
""" - - - - - - - - - - UPDATES - - - - - - - - - - - - """
@update(state = state, target = water_send)
def update_water_send(self, dt):
return min(self.water_in.value, self.size_pipe.value)
elk.plot(Pump())
# Contains the water for the plants' humidity; fills itself using the pump
@dependency(source="water_in", target="water_send")
class PlantReservoir(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
water_in = Input(Resources.water, 2)
water_needed = Input(Resources.water, 2)
water_send = Output(Resources.water, 2)
max_cap = Local(Resources.water, 10)
actual_cap = Local(Resources.water, 1)
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
off = current = State()
draining = State()
filling = State()
off_to_draining = Transition(source=off, target=draining, guard=(lambda self: self.water_needed.value > 0 and self.actual_cap.value >= 2))
off_to_filling = Transition(source=off, target=filling, guard=(lambda self: self.water_needed.value > 0 and self.actual_cap.value < 2))
filling_to_draining = Transition(source=filling, target=draining, guard=(lambda self: self.actual_cap.value >= self.max_cap.value and self.water_needed.value > 0))
filling_to_off = Transition(source=filling, target=off, guard=(lambda self: (self.actual_cap.value >= self.max_cap.value and self.water_needed.value == 0) ))# or self.water_in.value == 0))
draining_to_filling = Transition(source=draining, target=filling, guard=(lambda self: self.actual_cap.value <= 2 or (self.water_needed.value == 0 and self.actual_cap.value < self.max_cap.value)))
draining_to_off = Transition(source = draining, target = off, guard=(lambda self: self.actual_cap.value >= self.max_cap.value and self.water_needed.value == 0))
""" - - - - - - - - - - INFLUENCES - - - - - - - - - - - - """
@update(state = draining, target = actual_cap)
def update_actual_cap_drain(self, dt):
return max(self.actual_cap.value - self.water_needed.value*dt,0)
@update(state = filling, target = actual_cap)
def update_actual_cap_fill(self, dt):
return min(self.actual_cap.value + self.water_in.value*dt, self.max_cap.value)
influ_water_pump = Influence(target = water_send, source = water_needed)
elk.plot(PlantReservoir())
# Entity representing the plants' humidity, adapting the water consumption for the reservoir
class Plants(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
water_in = Input(Resources.water, 0)
water_cons = Local(Resources.water, 2)
actual_humidity = Output(Resources.pourcent, 1)
water_needed = Output(Resources.water,2)
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
watering = current = State()
off = State()
off_to_watering = Transition(source=off, target=watering, guard=(lambda self: self.actual_humidity.value < 50))# and self.water_in.value >= self.water_cons.value))
watering_to_off = Transition(source=watering, target=off, guard=(lambda self: self.actual_humidity.value > 70))# or self.water_in.value < self.water_cons.value))
""" - - - - - - - - - - INFLUENCES - - - - - - - - - - - - """
@update(state = watering, target = actual_humidity)
def update_actual_humidity_watering(self, dt):
return max(self.actual_humidity.value + 5*dt,0)
@update(state = off, target = actual_humidity)
def update_actual_humidity_off(self, dt):
return max(self.actual_humidity.value - dt,0)
@update(state = watering, target = water_needed)
def update_water_needed_watering(self, dt):
return self.water_cons.value
@update(state = off, target = water_needed)
def update_water_needed_off(self, dt):
return 0
elk.plot(Plants())
# Entity composed of all the subentities above; it takes into account the humidity, the room temperature and the time (for the sun).
# The output "plant_is_ok" is a boolean, true if all three are within their optimal ranges
class GrowingPlant(Entity):
""" - - - - - - - PORTS - - - - - - - - - - """
electricity_in = Input(Resources.electricity, 200)
room_temp = Input(Resources.fahrenheit, 0)
time = Local(Resources.time, 0)
plant_is_ok = Output(Resources.boolean, False)
def __init__(self, starting_time=0):
self.time.value = starting_time
""" - - - - - - - SUBENTITIES - - - - - - - - - - """
gl = GrowLamp()
s = Sun()
pump = Pump()
pr = PlantReservoir()
pl = Plants()
""" - - - - - - - INFLUENCES - - - - - - - - - - """
time_propagation = Influence(source=time, target = s.time_In)
water_propagation = Influence(source = pump.water_send, target = pr.water_in)
water_propagation2 = Influence(source = pr.water_send, target = pl.water_in)
elec_propagation = Influence(source = electricity_in, target = gl.electricity)
#needed_water_propagation = Influence(source = pr.water_send, target = pump.water_needed)
needed_water_propagation2 = Influence(source = pl.water_needed, target = pr.water_needed)
""" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - """
state = current = State()
""" - - - - - - - UPDATES - - - - - - - - - - """
@update(state=state, target=gl.room_temperature)
def set_gl_room_temp(self,dt):
return self.room_temp.value + (self.s.light_Out.value/(1000000)-0.1)*dt
@update(state=state, target=plant_is_ok)
def set_plant_is_ok(self,dt):
return ((self.pl.actual_humidity.value>=50)and(self.pl.actual_humidity.value<=70)
and(self.gl.temperature.value>=20)and(self.gl.temperature.value<=25))
#elk.plot(GrowingPlant())
gPlant = GrowingPlant()
gPlant.room_temp.value = 68
gPlant.gl.switch.value = "on"
gPlant.pump.water_in.value = 100
gPlant.pl.water_in.value = 100
simuLamp = Simulator(gPlant)
simuLamp.stabilize()
#simuLamp.advance(3)
for i in range(5):
simuLamp.advance(1)
simuLamp.plot()
simuLamp.traces.plot(traces=[gPlant.gl.temperature])
simuLamp.traces.plot(traces=[gPlant.pl.actual_humidity])
simuLamp.traces.plot(traces=[gPlant.plant_is_ok])
```
| github_jupyter |
# <center>Dataset Analysis</center>
```
%%html
<style>
body {
font-family: "Apple Script", cursive, sans-serif;
}
</style>
```
_importing the necessary data science libraries_
```
import numpy as np
import pandas as pd
import cv2
from matplotlib import pyplot as plt
import os
```
_defining a function to display an image with OpenCV_
```
def imshow(img):
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
_________
_reading one sample from the dataset and viewing the image_
```
# reading data using opencv
img = cv2.imread(r"PatientSpiral\sp1-P1.jpg")
grey_img = cv2.imread(r"PatientSpiral\sp1-P1.jpg", cv2.IMREAD_GRAYSCALE)
# viewing data using matplotlib
plt.imshow(img)
# plt.imshow(grey_img)
```
_As we can see from the above image, there are two different patterns: one is the question and one is the answer, and they overlap each other. We need to decompose them into __question__ and __answer__ to get an accurate comparison across the dataset._
```
Q_img = np.zeros(np.shape(grey_img))
A_img = np.zeros(np.shape(grey_img))
```
#### extracting question
Setting the bias (grey-level threshold) to 30 for black colour
```
for i,e in enumerate(grey_img):
for j,f in enumerate(e):
if f > 30:
Q_img[i][j] = 255
else:
Q_img[i][j] = 0
```
If we consider that the darkness of the shading is also a factor in diagnosing
Parkinson's disease, as it may reflect the person's muscle strength, then
don't use the else statement in the above code (see the sketch after this list).<br>
__Points to consider:__
- This may increase ambiguity, as lighting conditions do not remain constant during image capture
- Muscle strength also depends on the person irrespective of the disease (not just the age and weight factors)
- Variance in darkness due to age, height and weight factors will be handled later by the model
- Camera quality may also increase ambiguity
- Pen and paper quality also contribute to ambiguity, but have nothing to do with the disease itself
- Not considering the darkness of ink may leave out variables that represent the deterioration of muscle control, which is a significant part of diagnosis
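A minimal, hypothetical sketch of that variant (not part of the original pipeline): instead of forcing the ink pixels to 0, keep their original grey values so stroke darkness survives as a feature. It assumes `grey_img` and `numpy` from the cells above.

```
# Hypothetical darkness-preserving variant of the question extraction:
# background pixels (grey value above the threshold) become 255, while ink
# pixels keep their original grey value instead of being forced to 0.
Q_img_darkness = np.where(grey_img > 30, 255, grey_img).astype(np.float64)
```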
```
imshow(Q_img)
```
#### extracting answer
<font TimesNewRoman>Setting +ve bias as __60__ for _removing black colour_ and -ve bias as __160__ for _removing white noise_ </font>
```
for i,e in enumerate(grey_img):
for j,f in enumerate(e):
if f < 60 or f > 160:
A_img[i][j] = 255
else:
A_img[i][j] = 0
```
Again, don't use the else statement in the above code if you are treating the darkness of ink as a variable (a parallel sketch follows).
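A parallel, hypothetical sketch for the answer extraction (again assuming `grey_img` and `numpy` from above):

```
# Hypothetical darkness-preserving variant of the answer extraction: removed
# pixels (printed question and white noise) become 255, while the drawn
# strokes keep their original grey values.
A_img_darkness = np.where((grey_img < 60) | (grey_img > 160), 255, grey_img).astype(np.float64)
```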
```
imshow(A_img)
plt.imshow(img)
plt.imshow(Q_img)
plt.imshow(A_img)
```
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import torch
from torch import nn as nn
from math import factorial
import random
import torch.nn.functional as F
import numpy as np
import seaborn as sn
import pandas as pd
import os
from os.path import join
import glob
from math import factorial
ttype = torch.cuda.DoubleTensor if torch.cuda.is_available() else torch.DoubleTensor
print(ttype)
from sith import DeepSITH
from tqdm.notebook import tqdm
import pickle
sn.set_context("poster")
sig_lets = ["A","B","C","D","E","F","G","H",]
signals = ttype([[0,1,1,1,0,1,1,1,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,0,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0]]
).view(7, 1, 1, -1)
key2id = {k:i for i, k in enumerate(sig_lets)}
print(key2id)
target = ttype([[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0]]).view(7, -1)
print(target.shape)
signals.shape
def train_model(model,
signals,
target,
optimizer,
loss_func,
train_dur=2.0,
test_durs=[1.5, 2.0, 2.5],
epochs=1500,
loss_buffer_size=50,
testing_every=30):
loss_track = {"loss":[],
"epoch":[],
"acc":[],
"perf":[]}
losses = []
progress_bar = tqdm(range(int(epochs)), ncols=800)
for e in progress_bar:
for i in range(target.shape[1]):
# use target one by one
perm = target[:,i].type(torch.LongTensor)
#print(perm.shape)
# Zero the gradient between each batch
model.zero_grad()
# Present an entire batch to the model
# indexing using -1 at the time dimension,
# only use the latest value
out = model(signals)[:, -1,:]
#print(out.shape)
# Measure loss via CrossEntropyLoss
loss = loss_func(out,
perm)
# Adjust Weights
loss.backward()
optimizer.step()
losses.append(loss.detach().cpu().numpy())
if len(losses) > loss_buffer_size:
losses = losses[1:]
# Record loss, epoch number, batch number in epoch,
# last accuracy measure, etc
loss_track['loss'].append(np.mean(losses))
loss_track['epoch'].append(e)
# calculate model accuracy:
if ((e)%testing_every == 0) & (e != 0):
model.eval()
perf = test_model(model, signals, target)
model.train()
loss_track['perf'].append(perf)
if e > testing_every:
# Update progress_bar
s = "{}: Loss: {:.6f}, Acc:{:.4f}"
format_list = [e, loss_track['loss'][-1]] + [perf]
s = s.format(*format_list)
progress_bar.set_description(s)
if loss_track['perf'][-1] == 1.0:
break
return loss_track
def test_model(model, signals,target):
# Test the Model
out = model(signals)[:, -1, :]
print(out)
pred = torch.argmax(out, dim=-1)
print(pred)
groundTruth = target
# NOTE: the accuracy computation is left as a placeholder; perf is always 0 here,
# so the perf == 1.0 early-stop check in train_model never triggers.
perf = 0
return perf
```
# Setup Classifier type model
```
class DeepSITH_Classifier(nn.Module):
def __init__(self, out_features, layer_params, dropout=.5):
super(DeepSITH_Classifier, self).__init__()
last_hidden = layer_params[-1]['hidden_size']
self.hs = DeepSITH(layer_params=layer_params, dropout=dropout)
self.to_out = nn.Linear(last_hidden, out_features)
def forward(self, inp):
x = self.hs(inp)
x = self.to_out(x)
return x
```
# TEST layers for correct taustars/parameters/cvalues
These dictionaries will not be used later.
```
sith_params2 = {"in_features":1,
"tau_min":.1, "tau_max":20.0, 'buff_max':40,
"k":50,
"ntau":5, 'g':0,
"ttype":ttype,
"hidden_size":10, "act_func":nn.ReLU()}
sith_params3 = {"in_features":sith_params2['hidden_size'],
"tau_min":.1, "tau_max":200.0, 'buff_max':240,
"k":50,
"ntau":5, 'g':0,
"ttype":ttype,
"hidden_size":20, "act_func":nn.ReLU()}
layer_params = [sith_params2, sith_params3]
model = DeepSITH_Classifier(out_features=2,
layer_params=layer_params, dropout=.0).double()
print(model)
for i, l in enumerate(model.hs.layers):
print("Layer {}".format(i), l.sith.tau_star)
tot_weights = 0
for p in model.parameters():
tot_weights += p.numel()
print("Total Weights:", tot_weights)
```
# Visualize the taustar buffers
They must all decay completely to zero or there will be edge effects.
```
plt.plot(model.hs.layers[0].sith.filters[:, 0, 0, :].detach().cpu().T);
```
# Training and testing
```
# You likely don't need this to be this long, but just in case.
epochs = 500
# Just for visualizing average loss through time.
loss_buffer_size = 100
loss_func = torch.nn.CrossEntropyLoss()
sith_params2 = {"in_features":1,
"tau_min":.1, "tau_max":20.0, 'buff_max':40,
"k":50,
"ntau":10, 'g':0,
"ttype":ttype,
"hidden_size":10, "act_func":nn.ReLU()}
sith_params3 = {"in_features":sith_params2['hidden_size'],
"tau_min":.1, "tau_max":200.0, 'buff_max':240,
"k":50,
"ntau":10, 'g':0,
"ttype":ttype,
"hidden_size":20, "act_func":nn.ReLU()}
layer_params = [sith_params2, sith_params3]
model = DeepSITH_Classifier(out_features=5,
layer_params=layer_params,
dropout=0.).double()
optimizer = torch.optim.Adam(model.parameters())
perf = train_model(model, signals, target,optimizer, loss_func,
epochs=epochs,
loss_buffer_size=loss_buffer_size)
#perfs.append(perf)
with open('filename.dill', 'wb') as handle:
pickle.dump(perf, handle, protocol=pickle.HIGHEST_PROTOCOL)
fig = plt.figure(figsize=(8,10))
ax = fig.add_subplot(2,1,1)
# use the dict returned by train_model (the perfs list was never populated)
ax.plot(perf['loss'])
ax.set_ylabel("Loss")
#ax.set_xlabel("Presentation Number")
ax = fig.add_subplot(2,1,2)
dat = pd.DataFrame(perf['perf'])
ax.plot(np.arange(dat.shape[0])*30, dat)
ax.set_ylabel("Classification Acc")
ax.set_xlabel("Presentation Number")
plt.savefig(join("figs","DeepSith_training_H8"))
```
| github_jupyter |
<a id="ndvi_std_top"></a>
# NDVI STD
Deviations from an established monthly average, expressed as z-scores.
<hr>
# Notebook Summary
* A baseline for each month is determined by measuring NDVI over a set time
* The data cube is used to visualize NDVI anomalies over time.
* Anomalous times are further explored and visualization solutions are proposed.
<hr>
# Index
* [Import Dependencies and Connect to the Data Cube](#ndvi_std_import)
* [Choose Platform and Product](#ndvi_std_plat_prod)
* [Get the Extents of the Cube](#ndvi_std_extents)
* [Define the Extents of the Analysis](#ndvi_std_define_extents)
* [Load Data from the Data Cube](#ndvi_std_load_data)
* [Create and Use a Clean Mask](#ndvi_std_clean_mask)
* [Calculate the NDVI](#ndvi_std_calculate)
* [Convert the Xarray to a Dataframe](#ndvi_std_pandas)
* [Define a Function to Visualize Values Over the Region](#ndvi_std_visualization_function)
* [Visualize the Baseline Average NDVI by Month](#ndvi_std_baseline_mean_ndvi)
* [Visualize the Baseline Distributions Binned by Month](#ndvi_std_boxplot_analysis)
* [Visualize the Baseline Kernel Distributions Binned by Month](#ndvi_std_violinplot_analysis)
* [Plot Z-Scores by Month and Year](#ndvi_std_pixelplot_analysis)
* [Further Examine Times Of Interest](#ndvi_std_heatmap_analysis)
<hr>
# How It Works
To detect changes in plant life, we use a measure called NDVI.
* <font color=green>NDVI</font> is the ratio of the difference between amount of near infrared light <font color=red>(NIR)</font> and red light <font color=red>(RED)</font> divided by their sum.
<br>
$$ NDVI = \frac{(NIR - RED)}{(NIR + RED)}$$
<br>
<div class="alert-info">
The idea is to observe how much red light is being absorbed versus reflected. Photosynthetic plants absorb most of the visible spectrum's wavelengths when they are healthy. When they aren't healthy, more of that light will get reflected. This makes the difference between <font color=red>NIR</font> and <font color=red>RED</font> much smaller which will lower the <font color=green>NDVI</font>. The resulting values from doing this over several pixels can be used to create visualizations for the changes in the amount of photosynthetic vegetation in large areas.
</div>
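A quick worked example of the formula with illustrative reflectance values (not data from this notebook):

```
# Healthy vegetation reflects far more NIR than red light
nir, red = 0.45, 0.10
print((nir - red) / (nir + red))   # ~0.64 -> high NDVI
# Stressed or sparse vegetation reflects nearly as much red as NIR
nir, red = 0.25, 0.20
print((nir - red) / (nir + red))   # ~0.11 -> low NDVI
```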
## <span id="ndvi_std_import">Import Dependencies and Connect to the Data Cube [▴](#ndvi_std_top) </span>
```
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.ticker import FuncFormatter
import seaborn as sns
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
from utils.data_cube_utilities.data_access_api import DataAccessApi
api = DataAccessApi()
dc = api.dc
```
## <span id="ndvi_std_plat_prod">Choose Platform and Product [▴](#ndvi_std_top)</span>
```
# Change the data platform and data cube here
product = 'ls7_usgs_sr_scene'
platform = 'LANDSAT_7'
collection = 'c1'
level = 'l2'
# product = 'ls8_usgs_sr_scene'
# platform = 'LANDSAT_8'
# collection = 'c1'
# level = 'l2'
```
## <span id="ndvi_std_extents">Get the Extents of the Cube [▴](#ndvi_std_top)</span>
```
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
print("{}:".format(platform))
print("Lat bounds:", full_lat)
print("Lon bounds:", full_lon)
print("Time bounds:", min_max_dates)
```
## <span id="ndvi_std_define_extents">Define the Extents of the Analysis [▴](#ndvi_std_top)</span>
```
display_map(full_lat, full_lon)
params = {'latitude': (0.55, 0.6),
'longitude': (35.55, 35.5),
'time': ('2008-01-01', '2010-12-31')}
display_map(params["latitude"], params["longitude"])
```
## <span id="ndvi_std_load_data">Load Data from the Data Cube [▴](#ndvi_std_top)</span>
```
dataset = dc.load(**params,
platform = platform,
product = product,
measurements = ['red', 'green', 'blue', 'swir1', 'swir2', 'nir', 'pixel_qa'],
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000}).persist()
```
## <span id="ndvi_std_clean_mask">Create and Use a Clean Mask [▴](#ndvi_std_top)</span>
```
# Make a clean mask to remove clouds and scanlines.
clean_mask = landsat_clean_mask_full(dc, dataset, product=product, platform=platform,
collection=collection, level=level)
# Filter the scenes with that clean mask
dataset = dataset.where(clean_mask)
```
## <span id="ndvi_std_calculate">Calculate the NDVI [▴](#ndvi_std_top)</span>
```
#Calculate NDVI
ndvi = (dataset.nir - dataset.red)/(dataset.nir + dataset.red)
```
## <span id="ndvi_std_pandas">Convert the Xarray to a Dataframe [▴](#ndvi_std_top)</span>
```
#Cast to pandas dataframe
df = ndvi.to_dataframe("NDVI")
#flatten the dimensions since it is a compound hierarchical dataframe
df = df.stack().reset_index()
#Drop the junk column that was generated for NDVI
df = df.drop(["level_3"], axis=1)
#Preview first 5 rows to make sure everything looks as it should
df.head()
#Rename the NDVI column to the appropriate name
df = df.rename(index=str, columns={0: "ndvi"})
#clamp NDVI from below at 0 (its upper bound is already 1)
df.ndvi = df.ndvi.clip(lower=0)
#Add columns for Month and Year for convenience
df["Month"] = df.time.dt.month
df["Year"] = df.time.dt.year
#Preview changes
df.head()
```
## <span id="ndvi_std_visualization_function">Define a Function to Visualize Values Over the Region [▴](#ndvi_std_top)</span>
```
#Create a function for formatting our axes
def format_axis(axis, digits = None, suffix = ""):
#Get Labels
labels = axis.get_majorticklabels()
#Exit if empty
if len(labels) == 0: return
#Create formatting function
format_func = lambda x, pos: "{0}{1}".format(labels[pos]._text[:digits],suffix)
#Use formatting function
axis.set_major_formatter(FuncFormatter(format_func))
#Create a function for examining the z-score and NDVI of the region graphically
def examine(month = list(df["time"].dt.month.unique()), year = list(df["time"].dt.year.unique()), value_name = "z_score"):
#This allows the user to pass single floats as values as well
if type(month) is not list: month = [month]
if type(year) is not list: year = [year]
#pivoting the table to the appropriate layout
piv = pd.pivot_table(df[df["time"].dt.year.isin(year) & df["time"].dt.month.isin(month)],
values=value_name,index=["latitude"], columns=["longitude"])
#Sizing
plt.rcParams["figure.figsize"] = [11,11]
#Plot pivot table as heatmap using seaborn
val_range = (-1.96,1.96) if value_name == "z_score" else (df[value_name].unique().min(),df[value_name].unique().max())
ax = sns.heatmap(piv, square=False, cmap="RdYlGn",vmin=val_range[0],vmax=val_range[1], center=0)
#Formatting
format_axis(ax.yaxis, 6)
format_axis(ax.xaxis, 7)
plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )
plt.gca().invert_yaxis()
```
Let's examine the average <font color=green>NDVI</font> across all months and years to get a look at the region.
```
#It defaults to binning the entire range of months and years so we can just leave those parameters out
examine(value_name="ndvi")
```
This gives us an idea of the healthier areas of the region before we start looking at specific months and years.
## <span id="ndvi_std_baseline_mean_ndvi">Visualize the Baseline Average NDVI by Month [▴](#ndvi_std_top)</span>
```
#Make labels for convenience
labels = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
#Initialize an empty pandas Series
df["z_score"] = pd.Series()
#declare list for population
binned_data = list()
#Calculate monthly binned z-scores from the composited monthly NDVI mean and store them
for i in range(12):
#grab z_score and NDVI for the appropriate month
temp = df[["z_score", "ndvi"]][df["Month"] == i+1]
#populate z_score
df.loc[df["Month"] == i+1,"z_score"] = (temp["ndvi"] - temp["ndvi"].mean())/temp["ndvi"].std(ddof=0)
#print the month next to its mean NDVI and standard deviation
binned_data.append((labels[i], temp["ndvi"].mean(), temp["ndvi"].std()))
#Create dataframe for binned values
binned_data = pd.DataFrame.from_records(binned_data, columns=["Month","Mean", "Std_Dev"])
#print description for clarification
print("Monthly Average NDVI over Baseline Period")
#display binned data
binned_data
```
## <span id="ndvi_std_boxplot_analysis">Visualize the Baseline Distributions Binned by Month [▴](#ndvi_std_top)</span>
```
#Set figure size to a larger size
plt.rcParams["figure.figsize"] = [16,9]
#Create the boxplot
df.boxplot(by="Month",column="ndvi")
#Create the mean line
plt.plot(binned_data.index+1, binned_data.Mean, 'r-')
#Create the one standard deviation away lines
plt.plot(binned_data.index+1, binned_data.Mean-binned_data.Std_Dev, 'b--')
plt.plot(binned_data.index+1, binned_data.Mean+binned_data.Std_Dev, 'b--')
#Create the two standard deviations away lines
plt.plot(binned_data.index+1, binned_data.Mean-(2*binned_data.Std_Dev), 'g-.', alpha=.3)
plt.plot(binned_data.index+1, binned_data.Mean+(2*binned_data.Std_Dev), 'g-.', alpha=.3)
```
The plot above shows the distributions for each individual month over the baseline period.
<br>
- The <b><font color=red>red</font></b> line is the mean line which connects the <b><em>mean values</em></b> for each month.
<br>
- The dotted <b><font color=blue>blue</font></b> lines are exactly <b><em>one standard deviation away</em></b> from the mean and show where the NDVI values fall within 68% of the time, according to the Empirical Rule.
<br>
- The <b><font color=green>green</font></b> dotted lines are <b><em>two standard deviations away</em></b> from the mean and show where an estimated 95% of the NDVI values are contained for that month.
<br>
<div class="alert-info"><font color=black> <em><b>NOTE: </b>You will notice a seasonal trend in the plot above. If we had averaged the NDVI without binning, this trend data would be lost and we would end up comparing specific months to the average derived from all the months combined, instead of individually.</em></font>
</div>
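A minimal sketch of that comparison (assuming the `df` and its `z_score` column computed above): a single unbinned z-score keeps the seasonal signal, while the monthly-binned scores are centred near zero within each month.

```
# Global (unbinned) z-score versus the monthly-binned z-score computed earlier
global_z = (df["ndvi"] - df["ndvi"].mean()) / df["ndvi"].std(ddof=0)
print(global_z.groupby(df["Month"]).mean())    # seasonal trend leaks into the "anomaly"
print(df.groupby("Month")["z_score"].mean())   # binned scores average out near zero per month
```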
## <span id="ndvi_std_violinplot_analysis">Visualize the Baseline Kernel Distributions Binned by Month [▴](#ndvi_std_top)</span>
The violinplot has the advantage of allowing us to visualize kernel distributions but comes at a higher computational cost.
```
sns.violinplot(x=df.Month, y="ndvi", data=df)
```
<hr>
## <span id="ndvi_std_pixelplot_analysis">Plot Z-Scores by Month and Year [▴](#ndvi_std_top)</span>
### Pixel Plot Visualization
```
#Create heatmap layout from dataframe
img = pd.pivot_table(df, values="z_score",index=["Month"], columns=["Year"], fill_value=None)
#pass the layout to seaborn heatmap
ax = sns.heatmap(img, cmap="RdYlGn", annot=True, fmt="f", center = 0)
#set the title for Aesthetics
ax.set_title('Z-Score\n Regional Selection Averages by Month and Year')
```
Each block in the visualization above is representative of the deviation from the average for the region selected in a specific month and year. The omitted blocks are times when there was no satellite imagery available. Their values must either be inferred, ignored, or interpolated.
You may notice long vertical strips of red. These are strong indications of drought since they deviate from the baseline consistently over a long period of time.
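As a minimal sketch (assuming the pivot table `img` built in the pixel-plot cell above), the omitted blocks could be filled by linear interpolation along the month axis purely for display purposes:

```
# Interpolate the missing month/year cells for visualisation only
img_filled = img.interpolate(axis=0, limit_direction='both')
ax = sns.heatmap(img_filled, cmap="RdYlGn", annot=True, fmt="f", center=0)
ax.set_title('Z-Score (gaps interpolated)\n Regional Selection Averages by Month and Year')
```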
## <span id="ndvi_std_heatmap_analysis">Further Examine Times Of Interest [▴](#ndvi_std_top)</span>
### Use the function we created to examine times of interest
```
#Let's look at that drought in 2009 during August (month 8)
#This will generate a composite of the z-scores for the months and years selected
examine(month = [8], year = 2009, value_name="z_score")
```
Note:
This graphical representation of the region shows the amount of deviation from the mean for each pixel that was binned by month.
### Grid Layout of Selected Times
```
#Restrict input to a maximum of about 12 grids (months*year) for memory
def grid_examine(month = None, year = None, value_name = "z_score"):
#default to all months then cast to list, if not already
if month is None: month = list(df["Month"].unique())
elif type(month) is int: month = [month]
#default to all years then cast to list, if not already
if year is None: year = list(df["Year"].unique())
elif type(year) is int: year = [year]
#get data within the bounds specified
data = df[np.logical_and(df["Month"].isin(month) , df["Year"].isin(year))]
#Set the val_range to be used as the vertical limit (vmin and vmax)
val_range = (-1.96,1.96) if value_name == "z_score" else (df[value_name].unique().min(),df[value_name].unique().max())
#create colorbar to export and use on grid
Z = [[val_range[0],0],[0,val_range[1]]]
CS3 = plt.contourf(Z, 200, cmap="RdYlGn")
plt.clf()
#Define facet function to use for each tile in grid
def heatmap_facet(*args, **kwargs):
data = kwargs.pop('data')
img = pd.pivot_table(data, values=value_name,index=["latitude"], columns=["longitude"], fill_value=None)
ax = sns.heatmap(img, cmap="RdYlGn",vmin=val_range[0],vmax=val_range[1],
center = 0, square=True, cbar=False, mask = img.isnull())
plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )
plt.gca().invert_yaxis()
#Create grid using the face function above
with sns.plotting_context(font_scale=5.5):
g = sns.FacetGrid(data, col="Year", row="Month", height=5,sharey=True, sharex=True)
mega_g = g.map_dataframe(heatmap_facet, "longitude", "latitude")
g.set_titles(col_template="Yr= {col_name}", fontweight='bold', fontsize=18)
#Truncate axis tick labels using the format_axis function defined in block 13
for ax in g.axes:
format_axis(ax[0]._axes.yaxis, 6)
format_axis(ax[0]._axes.xaxis, 7)
#create a colorbox and apply the exported colorbar
cbar_ax = g.fig.add_axes([1.015,0.09, 0.015, 0.90])
cbar = plt.colorbar(cax=cbar_ax, mappable=CS3)
grid_examine(month=[8,9,10], year=[2008,2009,2010])
```
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# This is a custom matplotlib style that I use for most of my charts
country_data = pd.read_csv('C:/Users/user/Desktop/test.csv')
country_data
```
The chart shows the number of arrivals in each country in 2016 and 2017.
Most countries saw slight growth in 2017.
```
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(111)
for (i, row) in country_data.iterrows():
plt.bar([i - 0.2, i + 0.2], [row['2016'], row['2017']],
color=['#CC6699', '#008AB8'], width=0.4, align='center', edgecolor='none')
plt.xlim(-1, 13)
plt.xticks(range(0, 13), country_data['country'], fontsize=11)
plt.grid(False, axis='x')
plt.yticks(np.arange(0, 5e6, 1e6),
['{}m'.format(int(x / 1e6)) if x > 0 else 0 for x in np.arange(0, 5e6, 1e6)])
plt.xlabel('Country')
plt.ylabel('Number of people (millions)')
plt.savefig('pop_pyramid_grouped.pdf')
;
```
DIFFERENCE
Comparison of the number of arrivals for each country between 2016 and 2017.
Most countries show positive growth, while a few show negative growth.
```
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(111)
for (i, row) in country_data.iterrows():
plt.bar([i], [row['difference']],
color=['#CC6699'], width=0.6, align='center', edgecolor='none')
plt.xlim(-1, 13)
plt.xticks(range(0, 13), country_data['country'], fontsize=11)
plt.grid(False, axis='x')
plt.yticks(np.arange(0, 4e5, 1e5),
['{}k'.format(int(x / 1e3)) if x > 0 else 0 for x in np.arange(0, 4e5, 1e5)])
plt.xlabel('Country')
plt.ylabel('Number of people')
;
```
GDP
GDP (gross domestic product) is the core indicator of national economic accounting and is important for measuring a country's or region's economic condition and level of development; the figure also includes the wages of migrant workers.
We can see that the countries with higher arrival numbers have very high GDP in both 2016 and 2017.
```
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(111)
for (i, row) in country_data.iterrows():
plt.bar([i - 0.2, i + 0.2], [row['2016GDP'], row['2017GDP']],
color=['#CC6699', '#008AB8'], width=0.4, align='center', edgecolor='none')
plt.xlim(-1, 13)
plt.xticks(range(0, 13), country_data['country'], fontsize=11)
plt.grid(False, axis='x')
plt.yticks(np.arange(0, 13000000, 1000000),
['{}m'.format(int(x / 1000000)) if x > 0 else 0 for x in np.arange(0, 13000000, 1000000)])
plt.xlabel('Country')
plt.ylabel('GDP')
plt.savefig('pop_pyramid_grouped.pdf')
;
m = country_data['2017']
m
n = country_data['2017GDP']
n
```
RELATION
The x-axis is the number of arrivals for each country in 2017.
The y-axis is that country's total GDP in 2017.
As the number of arrivals increases, the country's GDP for that year also rises.
This suggests that tourism accounts for a considerable share of gross product in each of these countries.
We can see that the two are positively correlated (a small sketch quantifying this follows).
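A minimal sketch quantifying that relationship (assuming the `country_data` columns used above):

```
# Pearson correlation between 2017 arrivals and 2017 GDP
print(country_data['2017'].corr(country_data['2017GDP']))
```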
```
plt.plot(m,n, 'bo')
x = country_data['growth']
x
y = country_data['GDPgrowth']
y
```
Many different factors influence GDP growth.
Take Japan and Hong Kong as a comparison.
From the chart we can see that arrivals to Japan grew between 2016 and 2017,
yet Japan's GDP declined slightly over those two years.
Conversely, arrivals to Hong Kong fell over the same period while its GDP rose.
This shows that although tourism makes up a fairly large share of national GDP, it is not the most important component in every country.
```
plt.plot(x,y, 'bo')
```
| github_jupyter |
# Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN & Pix2Pix in PyTorch, Jun-Yan Zhu](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)
* [A list of generative models](https://github.com/wiseodd/generative-models)
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts if the data it's received is real or fake.
> * The generator is trained to fool the discriminator, it wants to output data that looks _as close as possible_ to real, training data.
* The discriminator is a classifier that is trained to figure out which data is real and which is fake.
What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
<img src='assets/gan_pipeline.png' width=70% />
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images. This is often called a **latent vector** and that vector space is called **latent space**. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator.
If you're interested in generating only new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
```
%matplotlib inline
%config Completer.use_jedi = True
import numpy as np
import torch
import matplotlib.pyplot as plt
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# get the training datasets
train_data = datasets.MNIST(root='gan-mnist/data', train=True,
download=False, transform=transform)
# prepare data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
```
### Visualize the data
```
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (3,3))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
The discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute:
> All hidden layers will have a [Leaky ReLu](https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU) activation function applied to their outputs.
<img src='assets/gan_network.png' width=70% />
#### Leaky ReLu
We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
<img src='assets/leaky_relu.png' width=40% />
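A quick, standalone illustration of that behaviour (a sketch, independent of the models defined below):

```
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.0])
print(F.leaky_relu(x, negative_slope=0.2))  # negative inputs are scaled by 0.2, not zeroed
```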
#### Sigmoid Output
We'll also take the approach of using a more numerically stable loss function on the outputs. Recall that we want the discriminator to output a value 0-1 indicating whether an image is _real or fake_.
> We will ultimately use [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), which combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
So, our final output layer should not have any activation function applied to it.
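A small, standalone sanity-check sketch of that equivalence (not part of the model code):

```
import torch
import torch.nn as nn

logits = torch.tensor([1.5, -0.3, 0.0])
targets = torch.tensor([1.0, 0.0, 1.0])
# BCEWithLogitsLoss on raw logits matches BCELoss applied after an explicit sigmoid
print(nn.BCEWithLogitsLoss()(logits, targets).item())
print(nn.BCELoss()(torch.sigmoid(logits), targets).item())
```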
```
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Discriminator, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_dim*4)
self.fc2 = nn.Linear(hidden_dim*4, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim)
self.fc4 = nn.Linear(hidden_dim, output_size)
self.dropout = nn.Dropout(0.3)
def forward(self, x):
# flatten image
x = x.view(-1, 28*28)
# pass x through all layers
x = F.leaky_relu(self.fc1(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
return self.fc4(x)
```
## Generator
The generator network will be almost exactly the same as the discriminator network, except that we're applying a [tanh activation function](https://pytorch.org/docs/stable/nn.html#tanh) to our output layer.
#### tanh Output
The generator has been found to perform the best with $tanh$ for the generator output, which scales the output to be between -1 and 1, instead of 0 and 1.
<img src='assets/tanh_fn.png' width=40% />
Recall that we also want these outputs to be comparable to the *real* input pixel values, which are read in as normalized values between 0 and 1.
> So, we'll also have to **scale our real input images to have pixel values between -1 and 1** when we train the discriminator.
I'll do this in the training loop, later on.
```
class Generator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Generator, self).__init__()
# define all layers
self.fc1 = nn.Linear(input_size, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim*4)
self.fc4 = nn.Linear(hidden_dim*4, output_size)
self.dropout = nn.Dropout(0.3)
def forward(self, x):
# pass x through all layers
x = F.leaky_relu(self.fc1(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
# final layer should have tanh applied
x = F.tanh(self.fc4(x))
return x
```
## Model hyperparameters
```
# Discriminator hyperparams
# Size of input image to discriminator (28*28)
input_size = 28*28
# Size of discriminator output (real or fake)
d_output_size = 1
# Size of *last* hidden layer in the discriminator
d_hidden_size = 32
# Generator hyperparams
# Size of latent vector to give to generator
z_size = 100
# Size of discriminator output (generated image)
g_output_size = 28*28
# Size of *first* hidden layer in the generator
g_hidden_size = 32
```
## Build complete network
Now we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
# instantiate discriminator and generator
D = Discriminator(input_size, d_hidden_size, d_output_size)
G = Generator(z_size, g_hidden_size, g_output_size)
# check that they are as you expect
print(D)
print()
print(G)
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
<img src='assets/gan_pipeline.png' width=70% />
The losses will be binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are **reduced a bit from 1.0 to 0.9**. For this, we'll use the parameter `smooth`; if True, then we should smooth our labels. In PyTorch, this looks like `labels = torch.ones(size) * 0.9`
The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`.
### Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
```
# Calculate losses
def real_loss(D_out, smooth=False):
# compare logits to real labels
# smooth labels if smooth=True
labels = torch.ones(D_out.size(0))*0.9 if smooth else torch.ones(D_out.size(0))
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
# compare logits to fake labels
labels = torch.zeros(D_out.size(0))
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
```
## Optimizers
We want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.
```
import torch.optim as optim
# learning rate for optimizers
lr = 0.002
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr)
g_optimizer = optim.Adam(G.parameters(), lr)
```
---
## Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.
### Discriminator training
1. Compute the discriminator loss on real, training images
2. Generate fake images
3. Compute the discriminator loss on fake, generated images
4. Add up real and fake loss
5. Perform backpropagation + an optimization step to update the discriminator's weights
### Generator training
1. Generate fake images
2. Compute the discriminator loss on fake images, using **flipped** labels!
3. Perform backpropagation + an optimization step to update the generator's weights
#### Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
```
import pickle as pkl
# training hyperparams
num_epochs = 100
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 400
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
D.train()
G.train()
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
## Important rescaling step ##
real_images = real_images*2 - 1 # rescale input images from [0,1) to [-1, 1)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
d_optimizer.zero_grad()
# 1. Train with real images
# Compute the discriminator losses on real images
# use smoothed labels
d_real = D(real_images)
d_real_loss = real_loss(d_real, True)
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
fake_images = G(z)
# Compute the discriminator losses on fake images
d_fake = D(fake_images)
d_fake_loss = fake_loss(d_fake)
# add up real and fake losses and perform backprop
d_loss = d_fake_loss + d_real_loss
d_loss.backward()
d_optimizer.step()
# =========================================
# TRAIN THE GENERATOR
# =========================================
# 1. Train with fake images and flipped labels
g_optimizer.zero_grad()
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
d_fake = D(fake_images)
g_loss = real_loss(d_fake)
# perform backprop
g_loss.backward()
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# generate and save sample, fake images
G.eval() # eval mode for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to train mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
```
## Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. First we'll look at the images we saved during training.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```
# -1 indicates final epoch's samples (the last in the list)
view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs.
```
rows = 10 # split epochs into 10, so 100/10 = every 10 epochs
cols = 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
img = img.detach()
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
## Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. **We just need to pass in a new latent vector $z$ and we'll get new samples**!
```
# randomly generated, new latent vectors
sample_size=16
rand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
rand_z = torch.from_numpy(rand_z).float()
G.eval() # eval mode
# generated samples
rand_images = G(rand_z)
# 0 indicates the first set of samples in the passed in list
# and we only have one batch of samples, here
view_samples(0, [rand_images])
```
| github_jupyter |
# DAPA Tutorial #3: Timeseries - Sentinel-2
## Load environment variables
Please make sure that the environment variable "DAPA_URL" is set in the `custom.env` file. You can check this by executing the following block.
If DAPA_URL is not set, please create a text file named `custom.env` in your home directory with the following input:
>DAPA_URL=YOUR-PERSONAL-DAPA-APP-URL
```
from edc import setup_environment_variables
setup_environment_variables()
```
## Check notebook compatibility
**Please note:** If you conduct this notebook again at a later time, the base image of this Jupyter Hub service can include newer versions of the libraries installed. Thus, the notebook execution can fail. This compatibility check is only necessary when something is broken.
```
from edc import check_compatibility
check_compatibility("user-0.24.5", dependencies=[])
```
## Load libraries
Python libraries used in this tutorial will be loaded.
```
import os
import xarray as xr
import pandas as pd
import requests
import matplotlib
from ipyleaflet import Map, Rectangle, Marker, DrawControl, basemaps, basemap_to_tiles
%matplotlib inline
```
## Set DAPA endpoint
Execute the following code to check if the DAPA_URL is available in the environment variable and to set the `/dapa` endpoint.
```
service_url = None
dapa_url = None
if 'DAPA_URL' not in os.environ:
print('!! DAPA_URL does not exist as environment variable. Please make sure this is the case - see first block of this notebook! !!')
else:
service_url = os.environ['DAPA_URL']
dapa_url = '{}/{}'.format(service_url, 'oapi')
print('DAPA path: {}'.format(dapa_url.replace(service_url, '')))
```
## Get collections supported by this endpoint
This request provides a list of collections. The path of each collection is used as starting path of this service.
```
collections_url = '{}/{}'.format(dapa_url, 'collections')
collections = requests.get(collections_url, headers={'Accept': 'application/json'})
print('DAPA path: {}'.format(collections.url.replace(service_url, '')))
collections.json()
```
## Get fields of collection Sentinel-2 L2A
The fields (or variables in other DAPA endpoints - these are the bands of the raster data) can be retrieved in all requests to the DAPA endpoint. In addition to the fixed set of fields, "virtual" fields can be used to conduct math operations (e.g., the calculation of indices).
```
collection = 'S2L2A'
fields_url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/fields')
fields = requests.get(fields_url, headers={'Accept': 'application/json'})
print('DAPA path: {}'.format(fields.url.replace(service_url, '')))
fields.json()
```
## Retrieve NDVI as 1d time-series extraced for a single point
### Set DAPA URL and parameters
The output of this request is a time-series requested from a point of interest (`timeseries/position` endpoint). As the input collection (S2L2A) is a multi-temporal raster and the requested geometry is a point, no aggregation is conducted.
To retrieve a time-series of a point, the parameter `point` needs to be provided. The `time` parameter allows to extract data only within a specific time span. The band (`field`) from which the point is being extracted needs to be specified as well.
```
# DAPA URL
url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/timeseries/position')
# Parameters for this request
params = {
'point': '11.49,48.05',
'time': '2018-04-01T00:00:00Z/2018-05-01T00:00:00Z',
'fields': 'NDVI=(B08-B04)/(B08%2BB04)' # Please note: + signs need to be URL encoded -> %2B
}
# show point in the map
location = list(reversed([float(coord) for coord in params['point'].split(',')]))
m = Map(
basemap=basemap_to_tiles(basemaps.OpenStreetMap.Mapnik),
center=location,
zoom=10
)
marker = Marker(location=location, draggable=False)
m.add_layer(marker)
m
```
### Build request URL and conduct request
```
params_str = "&".join("%s=%s" % (k, v) for k,v in params.items())
r = requests.get(url, params=params_str)
print('DAPA path: {}'.format(r.url.replace(service_url, '')))
print('Status code: {}'.format(r.status_code))
```
### Write timeseries dataset to CSV file
The response of this request returns data as CSV including headers, split by commas. Additional output formats (e.g., CSV with headers included) will be integrated within the testbed activity.
You can either write the response to file or use it as string (`r.content` variable).
```
# write time-series data to CSV file
with open('timeseries_s2.csv', 'wb') as filew:
filew.write(r.content)
```
### Open timeseries dataset with pandas
Time-series data can be opened, processed, and plotted easily with the `Pandas` library. You only need to specify the `datetime` column to automatically convert dates from string to a datetime object.
```
# read data into Pandas dataframe
ds = pd.read_csv('timeseries_s2.csv', parse_dates=['datetime'])
# set index to datetime column
ds.set_index('datetime', inplace=True)
# show dataframe
ds
```
### Plot NDVI data
```
ds.plot()
```
### Output CSV file
```
!cat timeseries_s2.csv
```
## Time-series aggregated over area
```
# DAPA URL
url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/timeseries/area')
# Parameters for this request
params = {
#'point': '11.49,48.05',
'bbox': '11.49,48.05,11.66,48.22',
'aggregate': 'min,max,avg',
'time': '2018-04-01T00:00:00Z/2018-05-01T00:00:00Z',
'fields': 'NDVI=(B08-B04)/(B08%2BB04)' # Please note: + signs need to be URL encoded -> %2B
}
params_str = "&".join("%s=%s" % (k, v) for k,v in params.items())
r = requests.get(url, params=params_str)
print('DAPA path: {}'.format(r.url.replace(service_url, '')))
print('Status code: {}'.format(r.status_code))
# read data into Pandas dataframe
from io import StringIO
ds = pd.read_csv(StringIO(r.text), parse_dates=['datetime'])
# set index to datetime column
ds.set_index('datetime', inplace=True)
# show dataframe
ds
ds.plot()
```
| github_jupyter |
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import gc
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pal = sns.color_palette()
df_train = pd.read_csv('train.csv')
df_train.head()
print('Total number of question pairs for training: {}'.format(len(df_train)))
print('Duplicate pairs: {}%'.format(round(df_train['is_duplicate'].mean()*100, 2)))
qids = pd.Series(df_train['qid1'].tolist() + df_train['qid2'].tolist())
print('Total number of questions in the training data: {}'.format(len(
np.unique(qids))))
print('Number of questions that appear multiple times: {}'.format(np.sum(qids.value_counts() > 1)))
plt.figure(figsize=(12, 5))
plt.hist(qids.value_counts(), bins=50)
plt.yscale('log', nonposy='clip')
plt.title('Log-Histogram of question appearance counts')
plt.xlabel('Number of occurences of question')
plt.ylabel('Number of questions')
print()
df_test = pd.read_csv('test.csv')
df_test.head()
print('Total number of question pairs for testing: {}'.format(len(df_test)))
train_qs = pd.Series(df_train['question1'].tolist() + df_train['question2'].tolist()).astype(str)
test_qs = pd.Series(df_test['question1'].tolist() + df_test['question2'].tolist()).astype(str)
dist_train = train_qs.apply(len)
dist_test = test_qs.apply(len)
plt.figure(figsize=(15, 10))
plt.hist(dist_train, bins=200, range=[0, 200], color=pal[2], normed=True, label='train')
plt.hist(dist_test, bins=200, range=[0, 200], color=pal[1], normed=True, alpha=0.5, label='test')
plt.title('Normalised histogram of character count in questions', fontsize=15)
plt.legend()
plt.xlabel('Number of characters', fontsize=15)
plt.ylabel('Probability', fontsize=15)
print('mean-train {:.2f} std-train {:.2f} mean-test {:.2f} std-test {:.2f} max-train {:.2f} max-test {:.2f}'.format(dist_train.mean(),
dist_train.std(), dist_test.mean(), dist_test.std(), dist_train.max(), dist_test.max()))
```
We can see that most questions have anywhere from 15 to 150 characters in them. It seems that the test distribution is a little different from the train one, but not too much so.
```
dist_train = train_qs.apply(lambda x: len(x.split(' ')))
dist_test = test_qs.apply(lambda x: len(x.split(' ')))
plt.figure(figsize=(15, 10))
plt.hist(dist_train, bins=50, range=[0, 50], color=pal[2], normed=True, label='train')
plt.hist(dist_test, bins=50, range=[0, 50], color=pal[1], normed=True, alpha=0.5, label='test')
plt.title('Normalised histogram of word count in questions', fontsize=15)
plt.legend()
plt.xlabel('Number of words', fontsize=15)
plt.ylabel('Probability', fontsize=15)
print('mean-train {:.2f} std-train {:.2f} mean-test {:.2f} std-test {:.2f} max-train {:.2f} max-test {:.2f}'.format(dist_train.mean(),
dist_train.std(), dist_test.mean(), dist_test.std(), dist_train.max(), dist_test.max()))
```
### WordCloud
```
from wordcloud import WordCloud
cloud = WordCloud(width=1440, height=1080).generate(" ".join(train_qs.astype(str)))
plt.figure(figsize=(20, 15))
plt.imshow(cloud)
plt.axis('off')
```
## Semantic Analysis
```
qmarks = np.mean(train_qs.apply(lambda x: '?' in x))
math = np.mean(train_qs.apply(lambda x: '[math]' in x))
fullstop = np.mean(train_qs.apply(lambda x: '.' in x))
capital_first = np.mean(train_qs.apply(lambda x: x[0].isupper()))
capitals = np.mean(train_qs.apply(lambda x: max([y.isupper() for y in x])))
numbers = np.mean(train_qs.apply(lambda x: max([y.isdigit() for y in x])))
print('Questions with question marks: {:.2f}%'.format(qmarks * 100))
print('Questions with [math] tags: {:.2f}%'.format(math * 100))
print('Questions with full stops: {:.2f}%'.format(fullstop * 100))
print('Questions with capitalised first letters: {:.2f}%'.format(capital_first * 100))
print('Questions with capital letters: {:.2f}%'.format(capitals * 100))
print('Questions with numbers: {:.2f}%'.format(numbers * 100))
```
# Initial Feature Analysis
Before we create a model, we should take a look at how powerful some features are. I will start off with the word share feature from the benchmark model.
```
from nltk.corpus import stopwords
stops = set(stopwords.words("english"))
def word_match_share(row):
q1words = {}
q2words = {}
for word in str(row['question1']).lower().split():
if word not in stops:
q1words[word] = 1
for word in str(row['question2']).lower().split():
if word not in stops:
q2words[word] = 1
if len(q1words) == 0 or len(q2words) == 0:
# The computer-generated chaff includes a few questions that are nothing but stopwords
return 0
shared_words_in_q1 = [w for w in q1words.keys() if w in q2words]
shared_words_in_q2 = [w for w in q2words.keys() if w in q1words]
R = (len(shared_words_in_q1) + len(shared_words_in_q2))/(len(q1words) + len(q2words))
return R
plt.figure(figsize=(15, 5))
train_word_match = df_train.apply(word_match_share, axis=1, raw=True)
plt.hist(train_word_match[df_train['is_duplicate'] == 0], bins=20, normed=True, label='Not Duplicate')
plt.hist(train_word_match[df_train['is_duplicate'] == 1], bins=20, normed=True, alpha=0.7, label='Duplicate')
plt.legend()
plt.title('Label distribution over word_match_share', fontsize=15)
plt.xlabel('word_match_share', fontsize=15)
from collections import Counter
# If a word appears only once, we ignore it completely (likely a typo)
# Epsilon defines a smoothing constant, which makes the effect of extremely rare words smaller
def get_weight(count, eps=10000, min_count=2):
if count < min_count:
return 0
else:
return 1 / (count + eps)
eps = 5000
words = (" ".join(train_qs)).lower().split()
counts = Counter(words)
weights = {word: get_weight(count) for word, count in counts.items()}
print('Most common words and weights: \n')
print(sorted(weights.items(), key=lambda x: x[1] if x[1] > 0 else 9999)[:10])
print('\nLeast common words and weights: ')
(sorted(weights.items(), key=lambda x: x[1], reverse=True)[:10])
def tfidf_word_match_share(row):
q1words = {}
q2words = {}
for word in str(row['question1']).lower().split():
if word not in stops:
q1words[word] = 1
for word in str(row['question2']).lower().split():
if word not in stops:
q2words[word] = 1
if len(q1words) == 0 or len(q2words) == 0:
# The computer-generated chaff includes a few questions that are nothing but stopwords
return 0
shared_weights = [weights.get(w, 0) for w in q1words.keys() if w in q2words] + [weights.get(w, 0) for w in q2words.keys() if w in q1words]
total_weights = [weights.get(w, 0) for w in q1words] + [weights.get(w, 0) for w in q2words]
R = np.sum(shared_weights) / np.sum(total_weights)
return R
plt.figure(figsize=(15, 5))
tfidf_train_word_match = df_train.apply(tfidf_word_match_share, axis=1, raw=True)
plt.hist(tfidf_train_word_match[df_train['is_duplicate'] == 0].fillna(0), bins=20, normed=True, label='Not Duplicate')
plt.hist(tfidf_train_word_match[df_train['is_duplicate'] == 1].fillna(0), bins=20, normed=True, alpha=0.7, label='Duplicate')
plt.legend()
plt.title('Label distribution over tfidf_word_match_share', fontsize=15)
plt.xlabel('word_match_share', fontsize=15)
from sklearn.metrics import roc_auc_score
print('Original AUC:', roc_auc_score(df_train['is_duplicate'], train_word_match))
print(' TFIDF AUC:', roc_auc_score(df_train['is_duplicate'], tfidf_train_word_match.fillna(0)))
```
## Rebalancing the Data
However, before I do this, I would like to rebalance the data that XGBoost receives, since we have 37% positive class in our training data, and only 17% in the test data. By re-balancing the data so our training set has 17% positives, we can ensure that XGBoost outputs probabilities that will better match the data, and should get a better score (since LogLoss looks at the probabilities themselves and not just the order of the predictions like AUC)
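A rough sketch of the arithmetic behind the oversampling in the next cell (0.37 is the approximate positive rate quoted above; 0.165 matches the `p` used below):

```
p_train, p_target = 0.37, 0.165
scale = (p_train / p_target) - 1    # same formula as `scale` in the next cell, ~1.24
print(scale)
# The loop below doubles the negatives once (while scale > 1) and then appends a
# further ~24% of them, pulling the positive rate down towards the target.
```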
```
# First we create our training and testing data
x_train = pd.DataFrame()
x_test = pd.DataFrame()
x_train['word_match'] = train_word_match
x_train['tfidf_word_match'] = tfidf_train_word_match
x_test['word_match'] = df_test.apply(word_match_share, axis=1, raw=True)
x_test['tfidf_word_match'] = df_test.apply(tfidf_word_match_share, axis=1, raw=True)
y_train = df_train['is_duplicate'].values
pos_train = x_train[y_train == 1]
neg_train = x_train[y_train == 0]
# Now we oversample the negative class
# There is likely a much more elegant way to do this...
p = 0.165
scale = ((len(pos_train) / (len(pos_train) + len(neg_train))) / p) - 1
while scale > 1:
neg_train = pd.concat([neg_train, neg_train])
scale -=1
neg_train = pd.concat([neg_train, neg_train[:int(scale * len(neg_train))]])
print(len(pos_train) / (len(pos_train) + len(neg_train)))
x_train = pd.concat([pos_train, neg_train])
y_train = (np.zeros(len(pos_train)) + 1).tolist() + np.zeros(len(neg_train)).tolist()
del pos_train, neg_train
# Finally, we split some of the data off for validation
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.2, random_state=4242)
```
## XGBoost
```
import xgboost as xgb
# Set our parameters for xgboost
params = {}
params['objective'] = 'binary:logistic'
params['eval_metric'] = 'logloss'
params['eta'] = 0.02
params['max_depth'] = 4
d_train = xgb.DMatrix(x_train, label=y_train)
d_valid = xgb.DMatrix(x_valid, label=y_valid)
watchlist = [(d_train, 'train'), (d_valid, 'valid')]
bst = xgb.train(params, d_train, 400, watchlist, early_stopping_rounds=50, verbose_eval=10)
d_test = xgb.DMatrix(x_test)
p_test = bst.predict(d_test)
sub = pd.DataFrame()
sub['test_id'] = df_test['test_id']
sub['is_duplicate'] = p_test
sub.to_csv('simple_xgb.csv', index=False)
sub.head()
def logloss(ptest):
    # Note: the test set has no labels here, so this is simply -sum(log(p)),
    # i.e. the log loss under the assumption that every true label is 1.
    # It is only a rough sanity check, not the competition metric.
    s = 0
    for res in ptest:
        s += np.log(res)
    return -s
print(logloss(p_test)/len(p_test))
```
| github_jupyter |
```
# default_exp models.OmniScaleCNN
```
# OmniScaleCNN
> This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on:
* Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536.
* Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py
```
#export
from tsai.imports import *
from tsai.models.layers import *
from tsai.models.utils import *
#export
#This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on:
# Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536.
# Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py
class SampaddingConv1D_BN(Module):
def __init__(self, in_channels, out_channels, kernel_size):
self.padding = nn.ConstantPad1d((int((kernel_size - 1) / 2), int(kernel_size / 2)), 0)
self.conv1d = torch.nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size)
self.bn = nn.BatchNorm1d(num_features=out_channels)
def forward(self, x):
x = self.padding(x)
x = self.conv1d(x)
x = self.bn(x)
return x
class build_layer_with_layer_parameter(Module):
"""
formerly build_layer_with_layer_parameter
"""
def __init__(self, layer_parameters):
"""
layer_parameters format
[in_channels, out_channels, kernel_size,
in_channels, out_channels, kernel_size,
..., nlayers
]
"""
self.conv_list = nn.ModuleList()
for i in layer_parameters:
# in_channels, out_channels, kernel_size
conv = SampaddingConv1D_BN(i[0], i[1], i[2])
self.conv_list.append(conv)
def forward(self, x):
conv_result_list = []
for conv in self.conv_list:
conv_result = conv(x)
conv_result_list.append(conv_result)
result = F.relu(torch.cat(tuple(conv_result_list), 1))
return result
class OmniScaleCNN(Module):
def __init__(self, c_in, c_out, seq_len, layers=[8 * 128, 5 * 128 * 256 + 2 * 256 * 128], few_shot=False):
receptive_field_shape = seq_len//4
layer_parameter_list = generate_layer_parameter_list(1,receptive_field_shape, layers, in_channel=c_in)
self.few_shot = few_shot
self.layer_parameter_list = layer_parameter_list
self.layer_list = []
for i in range(len(layer_parameter_list)):
layer = build_layer_with_layer_parameter(layer_parameter_list[i])
self.layer_list.append(layer)
self.net = nn.Sequential(*self.layer_list)
self.gap = GAP1d(1)
out_put_channel_number = 0
for final_layer_parameters in layer_parameter_list[-1]:
out_put_channel_number = out_put_channel_number + final_layer_parameters[1]
self.hidden = nn.Linear(out_put_channel_number, c_out)
def forward(self, x):
x = self.net(x)
x = self.gap(x)
if not self.few_shot: x = self.hidden(x)
return x
def get_Prime_number_in_a_range(start, end):
Prime_list = []
for val in range(start, end + 1):
prime_or_not = True
for n in range(2, val):
if (val % n) == 0:
prime_or_not = False
break
if prime_or_not:
Prime_list.append(val)
return Prime_list
def get_out_channel_number(paramenter_layer, in_channel, prime_list):
out_channel_expect = max(1, int(paramenter_layer / (in_channel * sum(prime_list))))
return out_channel_expect
def generate_layer_parameter_list(start, end, layers, in_channel=1):
prime_list = get_Prime_number_in_a_range(start, end)
layer_parameter_list = []
for paramenter_number_of_layer in layers:
out_channel = get_out_channel_number(paramenter_number_of_layer, in_channel, prime_list)
tuples_in_layer = []
for prime in prime_list:
tuples_in_layer.append((in_channel, out_channel, prime))
in_channel = len(prime_list) * out_channel
layer_parameter_list.append(tuples_in_layer)
tuples_in_layer_last = []
first_out_channel = len(prime_list) * get_out_channel_number(layers[0], 1, prime_list)
tuples_in_layer_last.append((in_channel, first_out_channel, 1))
tuples_in_layer_last.append((in_channel, first_out_channel, 2))
layer_parameter_list.append(tuples_in_layer_last)
return layer_parameter_list
bs = 16
c_in = 3
seq_len = 12
c_out = 2
xb = torch.rand(bs, c_in, seq_len)
m = create_model(OmniScaleCNN, c_in, c_out, seq_len)
test_eq(OmniScaleCNN(c_in, c_out, seq_len)(xb).shape, [bs, c_out])
m
#hide
from tsai.imports import *
from tsai.export import *
nb_name = get_nb_name()
# nb_name = "109_models.OmniScaleCNN.ipynb"
create_scripts(nb_name);
```
| github_jupyter |
# Natural Language Processing - Unsupervised Topic Modeling with Reddit Posts
###### This project dives into multiple techniques used for NLP and subtopics such as dimensionality reduction, topic modeling, and clustering.
1. [Google BigQuery](#Google-BigQuery)
1. [Exploratory Data Analysis (EDA) & Preprocessing](#Exploratory-Data-Analysis-&-Preprocessing)
1. [Singular Value Decomposition (SVD)](#Singular-Value-Decomposition-(SVD))
1. [Latent Semantic Analysis (LSA - applied SVD)](#Latent-Semantic-Analysis-(LSA))
1. [Similarity Scoring Metrics](#sim)
1. [KMeans Clustering](#km)
1. [Latent Dirichlet Allocation (LDA)](#lda)
1. [pyLDAvis - interactive d3 for LDA](#py)
- This was separated out into a new notebook so the visualization can be viewed quickly (load the files and see the visualization)
```
# Easter Egg to start your imports
#import this
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import logging
import pickle
import sys
import os
from google.cloud import bigquery
import warnings
def warn(*args, **kwargs):
pass
warnings.warn = warn
# Logging is the verbose for Gensim
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
#plt.style.available # Style options
plt.style.use('fivethirtyeight')
sns.set_context("talk")
%matplotlib inline
pd.options.display.max_rows = 99
pd.options.display.max_columns = 99
pd.options.display.max_colwidth = 99
#pd.describe_option('display') # Option settings
float_formatter = lambda x: "%.3f" % x if x>0 else "%.0f" % x
np.set_printoptions(formatter={'float_kind':float_formatter})
pd.set_option('display.float_format', float_formatter)
```
## Google BigQuery
```
%%time
path = "data/posts.pkl"
key = 'fakeKey38i7-4259.json'
if not os.path.isdir('data/'):
os.makedirs('data/')
# Set GOOGLE_APPLICATION_CREDENTIALS before querying
def bigQuery(QUERY, key=key):
"""
Instantiates a client using a key,
Requests a SQL query from the Big Query API,
Returns the queried table as a DataFrame
"""
client = bigquery.Client.from_service_account_json(key)
job_config = bigquery.QueryJobConfig()
job_config.use_legacy_sql = False
query_job = client.query(QUERY, job_config=job_config)
return query_job.result().to_dataframe()
# SQL query for Google BigQuery
QUERY = (
"""
SELECT created_utc, subreddit, author, domain, url, num_comments,
score, title, selftext, id, gilded, retrieved_on, over_18
FROM `fh-bigquery.reddit_posts.*`
WHERE _table_suffix IN ( '2016_06' )
AND LENGTH(selftext) > 550
AND LENGTH(title) > 15
AND LENGTH(title) < 345
AND score > 8
AND is_self = true
AND NOT subreddit IS NULL
AND NOT subreddit = 'de'
AND NOT subreddit = 'test'
AND NOT subreddit = 'tr'
AND NOT subreddit = 'A6XHE'
AND NOT subreddit = 'es'
AND NOT subreddit = 'removalbot'
AND NOT subreddit = 'tldr'
AND NOT selftext LIKE '[removed]'
AND NOT selftext LIKE '[deleted]'
;""")
#df = bigQuery(QUERY)
#df.to_pickle(path)
df = pd.read_pickle(path)
df.info(memory_usage='deep')
```
## Exploratory Data Analysis & Preprocessing
```
# Exploring data by length of .title or .selftext
df[[ True if 500 < len(x) < 800 else False for x in df.selftext ]].sample(1, replace=False)
%%time
run = False
path = '/home/User/data/gif'
# Run through various selftext lengths and save the plots of the distribution of the metric
# Gif visual after piecing all the frames together
if run==True:  # flag gates this expensive cell ('while' would loop forever since run is never changed)
for i in range(500,20000,769):
tempath = os.path.join(path, f"textlen{i}.png") # PEP498 requires python 3.6
print(tempath)
# Look at histogram of posts with len<i
cuts = [len(x) for x in df.selftext if len(x)<i]
# Save plot
plt.figure()
plt.hist(cuts, bins=30) #can change bins based on function of i
plt.savefig(tempath, dpi=120, format='png', bbox_inches='tight', pad_inches=0.1)
plt.close()
# Bin Settings
def binSize(lower, upper, buffer=.05):
bins = upper - lower
buffer = int(buffer*bins)
bins -= buffer
print('Lower Bound:', lower)
print('Upper Bound:', upper)
return bins, lower, upper
# Plotting
def plotHist(tmp, bins, title, xlabel, ylabel, l, u):
plt.figure(figsize=(10,6))
plt.hist(tmp, bins=bins)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.xlim(lower + l, upper + u)
print('\nLocal Max %s:' % xlabel, max(tmp))
print('Local Average %s:' % xlabel, int(np.mean(tmp)))
print('Local Median %s:' % xlabel, int(np.median(tmp)))
# Create the correct bin size
bins, lower, upper = binSize(lower=0, upper=175)
# Plot distribution of lower scores
tmp = df[[ True if lower <= x <= upper else False for x in df['score'] ]]['score']
plotHist(tmp=tmp, bins=bins, title='Lower Post Scores', xlabel='Scoring', ylabel='Frequency', l=5, u=5);
# Titles should be less than 300 characters
# Outliers are due to unicode translation
# Plot lengths of titles
tmp = [ len(x) for x in df.title ]
bins, lower, upper = binSize(lower=0, upper=300, buffer=-.09)
plotHist(tmp=tmp, bins=bins, title='Lengths of Titles', xlabel='Length', ylabel='Frequency', l=10, u=0);
# Slice lengths of texts and plot histogram
bins, lower, upper = binSize(lower=500, upper=5000, buffer=.011)
tmp = [len(x) for x in df.selftext if lower <= len(x) <= upper]
plotHist(tmp=tmp, bins=bins, title='Length of Self Posts Under 5k', xlabel='Length', ylabel='Frequency', l=10, u=0)
plt.ylim(0, 200);
# Anomalies could be attributed to bots or duplicate reposts
# Posts per Subreddit
tmp = df.groupby('subreddit')['id'].nunique().sort_values(ascending=False)
top = 100
s = sum(tmp)
print('Subreddits:', len(tmp))
print('Total Posts:', s)
print('Total Posts from Top %s:' % top, sum(tmp[:top]), ', %.3f of Total' % (sum(tmp[:top])/s))
print('Total Posts from Top 10:', sum(tmp[:10]), ', %.3f of Total' % (sum(tmp[:10])/s))
print('\nTop 10 Contributors:', tmp[:10])
plt.figure(figsize=(10,6))
plt.plot(tmp, 'go')
plt.xticks('')
plt.title('Top %s Subreddit Post Counts' % top)
plt.xlabel('Subreddits, Ranked')
plt.ylabel('Post Count')
plt.xlim(-2, top+1)
plt.ylim(0, 2650);
path1 = 'data/origin.pkl'
#path2 = 'data/grouped.pkl'
# Save important data
origin_df = df.loc[:,['created_utc', 'subreddit', 'author', 'title', 'selftext', 'id']] \
.copy().reset_index().rename(columns={"index": "position"})
print(origin_df.info())
origin_df.to_pickle(path1)
posts_df = origin_df.loc[:,['title', 'selftext']]
posts_df['text'] = posts_df.title + ' ' + posts_df.selftext
#del origin_df
# To group the results later
def groupUserPosts(x):
''' Group users' id's by post '''
return pd.Series(dict(ids = ", ".join(x['id']),
text = ", ".join(x['text'])))
###df = posts_df.groupby('author').apply(groupUserPosts)
#df.to_pickle(path2)
df = posts_df.text.to_frame()
origin_df.sample(2).drop('author', axis=1)
%%time
def clean_text(df, text_field):
'''
Clean all the text data within a certain text column of the dataFrame.
'''
df[text_field] = df[text_field].str.replace(r"http\S+", " ")
df[text_field] = df[text_field].str.replace(r"&[a-z]{2,4};", "")
df[text_field] = df[text_field].str.replace("\\n", " ")
df[text_field] = df[text_field].str.replace(r"#f", "")
df[text_field] = df[text_field].str.replace(r"[\’\'\`\":]", "")
df[text_field] = df[text_field].str.replace(r"[^A-Za-z0-9]", " ")
df[text_field] = df[text_field].str.replace(r" +", " ")
df[text_field] = df[text_field].str.lower()
clean_text(df, 'text')
df.sample(3)
# For exploration of users
df[origin_df.author == '<Redacted>'][:3]
# User is a post summarizer and aggregator, added /r/tldr to the blocked list!
# Slice lengths of texts and plot histogram
bins, lower, upper = binSize(lower=500, upper=5000, buffer=.015)
tmp = [len(x) for x in df.text if lower <= len(x) <= upper]
plotHist(tmp=tmp, bins=bins, title='Cleaned - Length of Self Posts Under 5k',
xlabel='Lengths', ylabel='Frequency', l=0, u=0)
plt.ylim(0, 185);
# Download everything for nltk! ('all')
import nltk
nltk.download('all') # (Change config save path)
nltk.data.path.append('/home/User/data/')
from nltk.corpus import stopwords
# "stopeng" is our extended list of stopwords for use in the CountVectorizer
# I could spend days extending this list for fine tuning results
stopeng = stopwords.words('english')
stopeng.extend([x.replace("\'", "") for x in stopeng])
stopeng.extend(['nbsp', 'also', 'really', 'ive', 'even', 'jon', 'lot', 'could', 'many'])
stopeng = list(set(stopeng))
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# Count vectorization for LDA
cv = CountVectorizer(token_pattern='\\w{3,}', max_df=.30, min_df=.0001,
stop_words=stopeng, ngram_range=(1,1), lowercase=False,
dtype='uint8')
# Vectorizer object to generate term frequency-inverse document frequency matrix
tfidf = TfidfVectorizer(token_pattern='\\w{3,}', max_df=.30, min_df=.0001,
stop_words=stopeng, ngram_range=(1,1), lowercase=False,
sublinear_tf=True, smooth_idf=False, dtype='float32')
```
###### Tokenization is one of the most important steps in NLP; I will explain some of my parameter choices in the README. CountVectorizer was my preferred choice, and I used these definitions to help me in the iterative process of building an unsupervised model.
###### The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.
###### Smooth = False: The effect of adding “1” to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored.
###### sublinear_tf = True: “l” (logarithmic), replaces tf with 1 + log(tf)
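###### As a toy illustration of that sublinear scaling (a standalone sketch, not part of the pipeline), raw counts are damped like this:
```
# Sublinear tf replaces a raw count tf with 1 + log(tf), shrinking the gap
# between moderately frequent and extremely frequent terms.
import numpy as np

raw_tf = np.array([1, 2, 10, 100])
print(np.round(1 + np.log(raw_tf), 2))  # [1.   1.69 3.3  5.61]
```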
```
%%time
# Count & tf-idf vectorizer fits the tokenizer and transforms data into new matrix
cv_vecs = cv.fit_transform(df.text).transpose()
tf_vecs = tfidf.fit_transform(df.text).transpose()
pickle.dump(cv_vecs, open('data/cv_vecs.pkl', 'wb'))
# Checking the shape and size of the count vectorizer transformed matrix
# 47,317 terms
# 146996 documents
print("Sparse Shape:", cv_vecs.shape)
print('CV:', sys.getsizeof(cv_vecs))
print('Tf-Idf:', sys.getsizeof(tf_vecs))
# Only if using a small subset can you store these as dense Pandas DataFrames:
#tfidf_df = pd.DataFrame(tf_vecs.transpose().todense(), columns=[tfidf.get_feature_names()]).astype('float32')
#cv_df = pd.DataFrame(cv_vecs.transpose().todense(), columns=[cv.get_feature_names()]).astype('uint8')
#print(cv_df.info())
#print(tfidf_df.info())
#cv_description = cv_df.describe().T
#tfidf_description = tfidf_df.describe().T
#tfidf_df.sum().sort_values(ascending=False)
# Explore the document-term vectors
#cv_description.sort_values(by='max', ascending=False)
#tfidf_description.sort_values(by='mean', ascending=False)
```
## Singular Value Decomposition (SVD)
```
#from sklearn.utils.extmath import randomized_svd
# Randomized SVD for extracting the full decomposition
#U, Sigma, VT = randomized_svd(tf_vecs, n_components=8, random_state=42)
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
def Trunc_SVD(vectorized, n_components=300, iterations=1, normalize=False, random_state=42):
"""
Performs LSA/LSI on a sparse document term matrix, returns a fitted, transformed, (normalized) LSA object
"""
# Already own the vectorized data for LSA, just transpose it back to normal
vecs_lsa = vectorized.T
# Initialize SVD object as LSA
lsa = TruncatedSVD(n_components=n_components, n_iter=iterations, algorithm='randomized', random_state=random_state)
dtm_lsa = lsa.fit(vecs_lsa)
print("Explained Variance - LSA {}:".format(n_components), dtm_lsa.explained_variance_ratio_.sum())
if normalize:
dtm_lsa_t = lsa.fit_transform(vecs_lsa)
dtm_lsa_t = Normalizer(copy=False).fit_transform(dtm_lsa_t)
return dtm_lsa, dtm_lsa_t
return dtm_lsa
def plot_SVD(lsa, title, level=None):
"""
Plots the singular values of an LSA object
"""
plt.figure(num=1, figsize=(15,10))
plt.suptitle(title, fontsize=22, x=.55, y=.45, horizontalalignment='left')
plt.subplot(221)
plt.title('Explained Variance by each Singular Value')
plt.plot(lsa.explained_variance_[:level])
plt.subplot(222)
plt.title('Explained Variance Ratio by each Singular Value')
plt.plot(lsa.explained_variance_ratio_[:level])
plt.subplot(223)
plt.title("Singular Values ('Components')")
plt.plot(lsa.singular_values_[:level])
plt.show()
%%time
components = 350
cv_dtm_lsa = Trunc_SVD(cv_vecs, n_components=components, iterations=5, normalize=False)
plot_SVD(cv_dtm_lsa, title='Count Vectorizer', level=25)
tf_dtm_lsa = Trunc_SVD(tf_vecs, n_components=components, iterations=5, normalize=False)
plot_SVD(tf_dtm_lsa, title='Term Frequency - \nInverse Document Frequency', level=25)
# Numerically confirming the elbow in the above plot
print('SVD Value| CV | TFIDF')
for k in (2, 3, 4, 5, 6, 7, 8, 16, 32, 64, 128, 256, 350):
    print(f'Top {k}:\t',
          round(cv_dtm_lsa.explained_variance_ratio_[:k].sum(), 3),
          round(tf_dtm_lsa.explained_variance_ratio_[:k].sum(), 3))
# Close look at the elbow plots
def elbow(dtm_lsa):
evr = dtm_lsa.explained_variance_ratio_[:20]
print("Explained Variance Ratio (EVR):\n", evr)
print("Difference in EVR (start 3):\n", np.diff(evr[2:]))
plt.figure()
plt.plot(-np.diff(evr[2:]))
    plt.xticks(range(len(evr) - 3), range(3, len(evr)))  # tick positions and labels must be the same length
plt.suptitle('Difference in Explained Variance Ratio', fontsize=15);
plt.title('Start from 3, moves up to 20');
# Count Vectorizer
elbow(cv_dtm_lsa)
# Tf-Idf
elbow(tf_dtm_lsa)
```
###### The count vectorizer seems like it will be more foolproof, so I will use cv for my study. 8 might be a good cutoff value for the number of components kept in dimensionality reduction; I will try to confirm this later with KMeans clustering. The intuition is that the slope after the 8th element is significantly different from that of the first elements. Keeping just 2 components would not be sufficient for clustering, because we want to retain as much information as we can while still cutting down the dimensions to find some kind of human-readable latent concept space.
###### I am going to try out 2 quick methods before clustering and moving on to my main goal of topic modeling with LDA.
## Latent Semantic Analysis (LSA)
```
%%time
from gensim import corpora, matutils, models
# Convert sparse matrix of term-doc counts to a gensim corpus
cv_corpus = matutils.Sparse2Corpus(cv_vecs)
pickle.dump(cv_corpus, open('data/cv_corpus.pkl', 'wb'))
# Maps index to term
id2word = dict((v, k) for k, v in cv.vocabulary_.items())
# Build a gensim Dictionary from the corpus and the index-to-term mapping (needed later by pyLDAvis)
id2word = corpora.Dictionary.from_corpus(cv_corpus, id2word=id2word)
pickle.dump(id2word, open('data/id2word.pkl', 'wb'))
# Fitting an LSI model
lsi = models.LsiModel(corpus=cv_corpus, id2word=id2word, num_topics=10)
%%time
# Retrieve vectors for the original cv corpus in the LS space ("transform" in sklearn)
lsi_corpus = lsi[cv_corpus]
# Dump the resulting document vectors into a list
doc_vecs = [doc for doc in lsi_corpus]
doc_vecs[0][:5]
# Sum of a document's topic weights (LSI values are not probabilities, so they need not sum to 1)
for i in range(5):
    print(sum(value for _, value in doc_vecs[i]))
```
## <a id='sim'></a> Similarity Scoring
```
from gensim import similarities
# Create an index transformer that calculates similarity based on our space
index = similarities.MatrixSimilarity(doc_vecs, num_features=300)
# Return the sorted list of cosine similarities to the document with index docu
docu = 5 # Change docu as needed
sims = sorted(enumerate(index[doc_vecs[docu]]), key=lambda item: -item[1])
np.r_[sims[:10] , sims[-10:]]
# Viewing similarity of top documents
top = 1
for sim_doc_id, sim_score in sims[:top + 1]:
print("\nScore:", sim_score)
print("Document Text:\n", df.text[sim_doc_id])
```
###### The metrics look artificially high and do not match well for each document. The similarity method could be used to optimize keyword search if we were trying to expand the reach of a certain demographic using these rankings. The next step would be to improve on this method with word2vec or a better LSI model; a rough sketch of the word2vec idea follows.
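###### A rough sketch of that word2vec idea (hypothetical and untested here; note that older gensim versions use `size` instead of `vector_size`):
```
# Hypothetical word2vec-based similarity: average the word vectors of each
# document, then compare documents with cosine similarity.
from gensim.models import Word2Vec
import numpy as np

tokenized = [text.split() for text in df.text]  # reuse the cleaned text from above
w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=5, workers=4)

def doc_vector(tokens, model):
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

v1, v2 = doc_vector(tokenized[0], w2v), doc_vector(tokenized[5], w2v)
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))
```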
# <a id='km'></a>KMeans Clustering
```
lsi_red = matutils.corpus2dense(lsi_corpus, num_terms=300).transpose()
print('Reduced LS space shape:', lsi_red.shape)
print('Reduced LS space size in bytes:', sys.getsizeof(lsi_red))
# Taking a subset for Kmeans due to memory dropout
lsi_red_sub = lsi_red.copy()
np.random.shuffle(lsi_red_sub)
lsi_red_sub = lsi_red_sub[:30000]
lsi_red_sub = Normalizer(copy=False).fit_transform(lsi_red_sub) # Normalized for the Euclidean metric
print('Reduced LS space subset shape:', lsi_red_sub.shape)
print('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red_sub))
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# Calculating Silhouette coefficients and Sum of Squared Errors
def silhouette_co(start, stop, lsi_red_sub, random_state=42, n_jobs=-2, verbose=4):
"""
Input a normalized subset of a reduced dense latent semantic matrix
Returns list of scores for plotting
"""
SSEs = []
Sil_coefs = []
try_clusters = range(start, stop)
for k in try_clusters:
km = KMeans(n_clusters=k, random_state=random_state, n_jobs=n_jobs)
km.fit(lsi_red_sub)
labels = km.labels_
Sil_coefs.append(silhouette_score(lsi_red_sub, labels, metric='euclidean'))
SSEs.append(km.inertia_)
if k%verbose==0:
print(k)
return SSEs, Sil_coefs, try_clusters
%%time
SSEs, Sil_coefs, try_clusters = silhouette_co(start=2, stop=40, lsi_red_sub=lsi_red_sub)
def plot_sil(try_clusters, Sil_coefs, SSEs):
""" Function for visualizing/ finding the best clustering point """
# Plot Silhouette scores
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,5), sharex=True, dpi=200)
ax1.plot(try_clusters, Sil_coefs)
    ax1.set_title('Silhouette of Clusters')
ax1.set_xlabel('Number of Clusters')
ax1.set_ylabel('Silhouette Coefficient')
# Plot errors
ax2.plot(try_clusters, SSEs)
    ax2.set_title("Cluster's Error")
ax2.set_xlabel('Number of Clusters')
ax2.set_ylabel('SSE');
plot_sil(try_clusters=try_clusters, Sil_coefs=Sil_coefs, SSEs=SSEs)
```
###### This suggests that there aren't meaningful clusters in the normalized LS300 space. To test whether 300 dimensions is too large, I will try clustering again with a reduced input.
```
# Re-project with num_terms=10 to match the 10-topic LSI model (fixes IndexError: index 10)
lsi_red5 = matutils.corpus2dense(lsi_corpus, num_terms=10).transpose()
print('Reduced LSI space shape:', lsi_red5.shape)
print('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red5))
# Taking a subset for Kmeans due to memory dropout
lsi_red_sub5 = lsi_red5.copy()
np.random.shuffle(lsi_red_sub5)
lsi_red_sub5 = lsi_red_sub5[:5000]
lsi_red_sub5 = Normalizer(copy=False).fit_transform(lsi_red_sub5) # Normalized for the Euclidean metric
print('Reduced LSI space subset shape:', lsi_red_sub5.shape)
print('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red_sub5))
%%time
SSEs, Sil_coefs, try_clusters = silhouette_co(start=2, stop=40, lsi_red_sub=lsi_red_sub5)
plot_sil(try_clusters=try_clusters, Sil_coefs=Sil_coefs, SSEs=SSEs)
```
###### Due to project deadlines, I was not able to complete this method but I wanted to preserve the effort and document the process for later use. I will move on to LDA.
```
# Cluster with the best results (uncomment these lines first -- lsi_clusters is used below)
#kmeans = KMeans(n_clusters=20, n_jobs=-2)
#lsi_clusters = kmeans.fit_predict(lsi_red)
# Take a look at the first few cluster assignments
print(lsi_clusters[0:15])
df.text[0:2]
from sklearn.metrics import silhouette_samples, silhouette_score
# Validating cluster performance
# Select a range around the best result and plot the silhouette distributions for each cluster
X = lsi_red_sub  # assumed: the normalized, reduced LS subset from above (X was otherwise undefined here)
for k in range(14,17):
plt.figure(dpi=120, figsize=(8,6))
ax1 = plt.gca()
km = KMeans(n_clusters=k, random_state=1)
km.fit(X)
labels = km.labels_
silhouette_avg = silhouette_score(X, labels)
print("For n_clusters =", k,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, labels)
y_lower = 100
for i in range(k):
# Aggregate the silhouette scores for samples belonging to cluster i
ith_cluster_silhouette_values = sample_silhouette_values[labels == i]
#Sort
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
        color = plt.cm.nipy_spectral(float(i) / k)  # 'spectral' was removed from newer matplotlib
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
```
## <a id='lda'></a> Latent Dirichlet Allocation (LDA)
```
%%time
run = False
passes = 85
if run==True:
lda = models.LdaMulticore(corpus=cv_corpus, num_topics=15, id2word=id2word, passes=passes,
workers=13, random_state=42, eval_every=None, chunksize=6000)
# Save model after your last run, or continue to update LDA
#pickle.dump(lda, open('data/lda_gensim.pkl', 'wb'))
# Gensim save
#lda.save('data/gensim_lda.model')
lda = models.LdaModel.load('data/gensim_lda.model')
%%time
# Transform the docs from the word space to the topic space (like "transform" in sklearn)
lda_corpus = lda[cv_corpus]
# Store the documents' topic vectors in a list so we can take a peak
lda_docs = [doc for doc in lda_corpus]
# Review Dirichlet distribution for documents
lda_docs[25000]
# Manually review the document to see if it makes sense!
# Look back at the topics that it matches with to confirm the result!
df.iloc[25000]
#bow = df.iloc[1,0].split()
# Print topic probability distribution for a document
#print(lda[bow]) #Values unpack error
# Given a chunk of sparse document vectors, estimate gamma:
# (parameters controlling the topic weights) for each document in the chunk.
#lda.inference(bow) #Not enough values
# Makeup of each topic! Interpretable!
# The d3 visualization below is far better for looking at the interpretations.
lda.print_topics(num_words=10, num_topics=1)
```
## <a id='py'></a> pyLDAvis
```
# For quickstart, we can just jump straight to results
import pickle
from gensim import models
def loadingPickles():
id2word = pickle.load(open('data/id2word.pkl','rb'))
cv_vecs = pickle.load(open('data/cv_vecs.pkl','rb'))
cv_corpus = pickle.load(open('data/cv_corpus.pkl','rb'))
lda = models.LdaModel.load('data/gensim_lda.model')
return id2word, cv_vecs, cv_corpus, lda
import pyLDAvis.gensim
import gensim
# Enables visualization in jupyter notebook
pyLDAvis.enable_notebook()
# Prepare the visualization
# Change multidimensional scaling function via mds parameter
# Options are tsne, mmds, pcoa
# cv_corpus or cv_vecs work equally
id2word, _, cv_corpus, lda = loadingPickles()
viz = pyLDAvis.gensim.prepare(topic_model=lda, corpus=cv_corpus, dictionary=id2word, mds='mmds')
# Save the html for sharing!
pyLDAvis.save_html(viz,'data/viz.html')
# Interact! Saliency is the most important metric that changes the story of each topic.
pyLDAvis.display(viz)
```
# There you have it. There is a ton of great information here that I will summarize in the README and the slides on my GitHub.
###### In it I will discuss what I could do with this information. I did not end up using groupUserPosts, but I could create user profiles based on the aggregate of each user's document topic distributions (a rough sketch of that idea follows). I believe this is a great start to understanding NLP and how it can be used, and I would consider working on this again with the additional technologies needed for big data.
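###### A rough sketch of that user-profile idea (hypothetical; it assumes the `lda_docs` list, the fitted `lda` model, and `origin_df` from the cells above, and was not run as part of this project):
```
# Hypothetical sketch: average each author's document-topic vectors into a profile
import numpy as np
import pandas as pd

n_topics = lda.num_topics

def to_dense(doc_topics, n_topics):
    """Expand a sparse gensim (topic_id, weight) list into a dense vector."""
    vec = np.zeros(n_topics)
    for topic_id, weight in doc_topics:
        vec[topic_id] = weight
    return vec

doc_matrix = np.vstack([to_dense(d, n_topics) for d in lda_docs])
profiles = (pd.DataFrame(doc_matrix)
              .assign(author=origin_df['author'].values)
              .groupby('author')
              .mean())  # average topic distribution per author
print(profiles.head())
```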
| github_jupyter |
# Forecasting in Statsmodels
This notebook describes forecasting using time series models in Statsmodels.
**Note**: this notebook applies only to the state space model classes, which are:
- `sm.tsa.SARIMAX`
- `sm.tsa.UnobservedComponents`
- `sm.tsa.VARMAX`
- `sm.tsa.DynamicFactor`
```
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
macrodata = sm.datasets.macrodata.load_pandas().data
macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')
```
## Basic example
A simple example is to use an AR(1) model to forecast inflation. Before forecasting, let's take a look at the series:
```
endog = macrodata['infl']
endog.plot(figsize=(15, 5))
```
### Constructing and estimating the model
The next step is to formulate the econometric model that we want to use for forecasting. In this case, we will use an AR(1) model via the `SARIMAX` class in Statsmodels.
After constructing the model, we need to estimate its parameters. This is done using the `fit` method. The `summary` method produces several convenient tables showing the results.
```
# Construct the model
mod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')
# Estimate the parameters
res = mod.fit()
print(res.summary())
```
### Forecasting
Out-of-sample forecasts are produced using the `forecast` or `get_forecast` methods from the results object.
The `forecast` method gives only point forecasts.
```
# The default is to get a one-step-ahead forecast:
print(res.forecast())
```
The `get_forecast` method is more general, and also allows constructing confidence intervals.
```
# Here we construct a more complete results object.
fcast_res1 = res.get_forecast()
# Most results are collected in the `summary_frame` attribute.
# Here we specify that we want a confidence level of 90%
print(fcast_res1.summary_frame(alpha=0.10))
```
The default confidence level is 95%, but this can be controlled by setting the `alpha` parameter, where the confidence level is defined as $(1 - \alpha) \times 100\%$. In the example above, we specified a confidence level of 90%, using `alpha=0.10`.
### Specifying the number of forecasts
Both of the functions `forecast` and `get_forecast` accept a single argument indicating how many forecasting steps are desired. One option for this argument is always to provide an integer describing the number of steps ahead you want.
```
print(res.forecast(steps=2))
fcast_res2 = res.get_forecast(steps=2)
# Note: since we did not specify the alpha parameter, the
# confidence level is at the default, 95%
print(fcast_res2.summary_frame())
```
However, **if your data included a Pandas index with a defined frequency** (see the section at the end on Indexes for more information), then you can alternatively specify the date through which you want forecasts to be produced:
```
print(res.forecast('2010Q2'))
fcast_res3 = res.get_forecast('2010Q2')
print(fcast_res3.summary_frame())
```
### Plotting the data, forecasts, and confidence intervals
Often it is useful to plot the data, the forecasts, and the confidence intervals. There are many ways to do this, but here's one example
```
fig, ax = plt.subplots(figsize=(15, 5))
# Plot the data (here we are subsetting it to get a better look at the forecasts)
endog.loc['1999':].plot(ax=ax)
# Construct the forecasts
fcast = res.get_forecast('2011Q4').summary_frame()
fcast['mean'].plot(ax=ax, style='k--')
ax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);
```
### Note on what to expect from forecasts
The forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive.
## Prediction vs Forecasting
The results objects also contain two methods that allow for both in-sample fitted values and out-of-sample forecasting. They are `predict` and `get_prediction`. The `predict` method only returns point predictions (similar to `forecast`), while the `get_prediction` method also returns additional results (similar to `get_forecast`).
In general, if your interest is out-of-sample forecasting, it is easier to stick to the `forecast` and `get_forecast` methods.
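For example (a minimal sketch reusing the `res` results object fitted above), both methods accept `start` and `end` arguments, so in-sample predictions and out-of-sample forecasts can even be combined in a single call:
```
# In-sample one-step-ahead predictions plus out-of-sample forecasts, in one call
pred = res.get_prediction(start='2005Q1', end='2010Q4')
print(pred.summary_frame().tail())

# `predict` returns just the point predictions for the same span
print(res.predict(start='2005Q1', end='2010Q4').tail())
```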
## Cross validation
**Note**: some of the functions used in this section were first introduced in Statsmodels v0.11.0.
A common use case is to cross-validate forecasting methods by performing h-step-ahead forecasts recursively using the following process:
1. Fit model parameters on a training sample
2. Produce h-step-ahead forecasts from the end of that sample
3. Compare forecasts against test dataset to compute error rate
4. Expand the sample to include the next observation, and repeat
Economists sometimes call this a pseudo-out-of-sample forecast evaluation exercise, or time-series cross-validation.
### Basic example
We will conduct a very simple exercise of this sort using the inflation dataset above. The full dataset contains 203 observations, and for expositional purposes we'll use the first 80% as our training sample and only consider one-step-ahead forecasts.
A single iteration of the above procedure looks like the following:
```
# Step 1: fit model parameters w/ training sample
training_obs = int(len(endog) * 0.8)
training_endog = endog[:training_obs]
training_mod = sm.tsa.SARIMAX(
training_endog, order=(1, 0, 0), trend='c')
training_res = training_mod.fit()
# Print the estimated parameters
print(training_res.params)
# Step 2: produce one-step-ahead forecasts
fcast = training_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
```
To add on another observation, we can use the `append` or `extend` results methods. Either method can produce the same forecasts, but they differ in the other results that are available:
- `append` is the more complete method. It always stores results for all training observations, and it optionally allows refitting the model parameters given the new observations (note that the default is *not* to refit the parameters).
- `extend` is a faster method that may be useful if the training sample is very large. It *only* stores results for the new observations, and it does not allow refitting the model parameters (i.e. you have to use the parameters estimated on the previous sample).
If your training sample is relatively small (less than a few thousand observations, for example) or if you want to compute the best possible forecasts, then you should use the `append` method. However, if that method is infeasible (for example, because you have a very large training sample) or if you are okay with slightly suboptimal forecasts (because the parameter estimates will be slightly stale), then you can consider the `extend` method.
A second iteration, using the `append` method and refitting the parameters, would go as follows (note again that the default for `append` does not refit the parameters, but we have overridden that with the `refit=True` argument):
```
# Step 1: append a new observation to the sample and refit the parameters
append_res = training_res.append(endog[training_obs:training_obs + 1], refit=True)
# Print the re-estimated parameters
print(append_res.params)
```
Notice that these estimated parameters are slightly different than those we originally estimated. With the new results object, `append_res`, we can compute forecasts starting from one observation further than the previous call:
```
# Step 2: produce one-step-ahead forecasts
fcast = append_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
```
Putting it altogether, we can perform the recursive forecast evaluation exercise as follows:
```
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.append(updated_endog, refit=False)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
```
We now have a set of three forecasts made at each point in time from 1999Q2 through 2009Q3. We can construct the forecast errors by subtracting each forecast from the actual value of `endog` at that point.
```
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
```
To evaluate our forecasts, we often want to look at a summary value like the root mean square error. Here we can compute that for each horizon by first flattening the forecast errors so that they are indexed by horizon and then computing the root mean square error for each horizon.
```
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
```
#### Using `extend`
We can check that we get similar forecasts if we instead use the `extend` method, but that they are not exactly the same as when we use `append` with the `refit=True` argument. This is because `extend` does not re-estimate the parameters given the new observation.
```
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.extend(updated_endog)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
```
By not re-estimating the parameters, our forecasts are slightly worse (the root mean square error is higher at each horizon). However, the process is faster, even with only 200 datapoints. Using the `%%timeit` cell magic on the cells above, we found a runtime of 570ms using `extend` versus 1.7s using `append` with `refit=True`. (Note that using `extend` is also faster than using `append` with `refit=False`).
## Indexes
Throughout this notebook, we have been making use of Pandas date indexes with an associated frequency. As you can see, this index marks our data as at a quarterly frequency, between 1959Q1 and 2009Q3.
```
print(endog.index)
```
In most cases, if your data has an associated data/time index with a defined frequency (like quarterly, monthly, etc.), then it is best to make sure your data is a Pandas series with the appropriate index. Here are three examples of this:
```
# Annual frequency, using a PeriodIndex
index = pd.period_range(start='2000', periods=4, freq='A')
endog1 = pd.Series([1, 2, 3, 4], index=index)
print(endog1.index)
# Quarterly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='QS')
endog2 = pd.Series([1, 2, 3, 4], index=index)
print(endog2.index)
# Monthly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='M')
endog3 = pd.Series([1, 2, 3, 4], index=index)
print(endog3.index)
```
In fact, if your data has an associated date/time index, it is best to use that even if it does not have a defined frequency. An example of that kind of index is as follows - notice that it has `freq=None`:
```
index = pd.DatetimeIndex([
'2000-01-01 10:08am', '2000-01-01 11:32am',
'2000-01-01 5:32pm', '2000-01-02 6:15am'])
endog4 = pd.Series([0.2, 0.5, -0.1, 0.1], index=index)
print(endog4.index)
```
You can still pass this data to Statsmodels' model classes, but you will get the following warning, that no frequency data was found:
```
mod = sm.tsa.SARIMAX(endog4)
res = mod.fit()
```
What this means is that you cannot specify forecasting steps by dates, and the output of the `forecast` and `get_forecast` methods will not have associated dates. The reason is that without a given frequency, there is no way to determine what date each forecast should be assigned to. In the example above, there is no pattern to the date/time stamps of the index, so there is no way to determine what the next date/time should be (should it be in the morning of 2000-01-02? the afternoon? or maybe not until 2000-01-03?).
For example, if we forecast one-step-ahead:
```
res.forecast(1)
```
The index associated with the new forecast is `4`, because if the given data had an integer index, that would be the next value. A warning is given letting the user know that the index is not a date/time index.
If we try to specify the steps of the forecast using a date, we will get the following exception:
KeyError: 'The `end` argument could not be matched to a location related to the index of the data.'
```
# Here we'll catch the exception to prevent printing too much of
# the exception trace output in this notebook
try:
res.forecast('2000-01-03')
except KeyError as e:
print(e)
```
Ultimately there is nothing wrong with using data that does not have an associated date/time frequency, or even using data that has no index at all, like a Numpy array. However, if you can use a Pandas series with an associated frequency, you'll have more options for specifying your forecasts and get back results with a more useful index.
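As a final hedged sketch (an assumption-laden example rather than part of the discussion above): if the observations really are regular but the index simply lost its frequency, you can re-attach one so that date-based forecasting works again.
```
import pandas as pd

# A quarterly series whose DatetimeIndex was built without a frequency
idx = pd.DatetimeIndex(['2000-03-31', '2000-06-30', '2000-09-30', '2000-12-31'])
y = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)
print(y.index.freq)  # None

# Rebuild the index as a quarterly PeriodIndex so forecasts get proper dates
y_q = y.copy()
y_q.index = pd.period_range(start='2000Q1', periods=len(y_q), freq='Q')
print(y_q.index)
```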
| github_jupyter |
```
import pandas as pd
import numpy as np
from bayes_opt import BayesianOptimization
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from Data_Processing import DataProcessing
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
pop = pd.read_csv('../Data/population.csv')
train = pd.read_csv('../Data/train.csv')
test = pd.read_csv('../Data/test.csv')
X, y, test = DataProcessing(train, test, pop)
pca = PCA(n_components=20)
X = pca.fit_transform(X)
y = y.ravel()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
#Bayesian optimization
def bayesian_optimization(dataset, function, parameters):
X_train, y_train, X_test, y_test = dataset
n_iterations = 5
gp_params = {"alpha": 1e-4}
BO = BayesianOptimization(function, parameters)
BO.maximize(n_iter=n_iterations, **gp_params)
return BO.max
def rfc_optimization(cv_splits):
def function(n_estimators, max_depth, min_samples_split):
return cross_val_score(
RandomForestRegressor(
n_estimators=int(max(n_estimators,0)),
max_depth=int(max(max_depth,1)),
min_samples_split=int(max(min_samples_split,2)),
n_jobs=-1,
random_state=42),
X=X_train,
y=y_train,
cv=cv_splits,
scoring="neg_mean_squared_error",
n_jobs=-1).mean()
parameters = {"n_estimators": (10, 1000),
"max_depth": (1, 150),
"min_samples_split": (2, 10)
# "criterion": ('squared_error', 'absolute_error', 'poisson'),
# 'bootstrap': (True, False)
}
return function, parameters
# Note: this helper is not used below and would additionally require `import xgboost as xgb`
def xgb_optimization(cv_splits, eval_set):
def function(eta, gamma, max_depth):
return cross_val_score(
xgb.XGBClassifier(
objective="binary:logistic",
learning_rate=max(eta, 0),
gamma=max(gamma, 0),
max_depth=int(max_depth),
seed=42,
nthread=-1,
scale_pos_weight = len(y_train[y_train == 0])/
len(y_train[y_train == 1])),
X=X_train,
y=y_train,
cv=cv_splits,
scoring="roc_auc",
fit_params={
"early_stopping_rounds": 10,
"eval_metric": "auc",
"eval_set": eval_set},
n_jobs=-1).mean()
parameters = {"eta": (0.001, 0.4),
"gamma": (0, 20),
"max_depth": (1, 2000),
"criterion": ('squared_error', 'absolute_error', 'poisson'),
'bootstrap': (True, False),
}
return function, parameters
#Train model
def train(X_train, y_train, X_test, y_test, function, parameters):
dataset = (X_train, y_train, X_test, y_test)
cv_splits = 4
best_solution = bayesian_optimization(dataset, function, parameters)
params = best_solution["params"]
model = RandomForestRegressor(
n_estimators=int(max(params["n_estimators"], 0)),
max_depth=int(max(params["max_depth"], 1)),
min_samples_split=int(max(params["min_samples_split"], 2)),
n_jobs=-1,
random_state=42)
model.fit(X_train, y_train)
return model
rfc, params = rfc_optimization(4)
model = train(X_train, y_train, X_test, y_test, rfc, params)
y_true = y_test
y_pred = model.predict(X_test)
mean_squared_error(y_true, y_pred)
```
| github_jupyter |
```
from keras.layers import Input, Dense  # dropped the unused 'merge' import (removed in Keras 2)
from keras.models import Model
from keras.layers import Convolution2D, MaxPooling2D, Reshape, BatchNormalization
from keras.layers import Activation, Dropout, Flatten, Dense
def default_categorical():
img_in = Input(shape=(120, 160, 3), name='img_in') # First layer, input layer, Shape comes from camera.py resolution, RGB
x = img_in
    x = Convolution2D(24, (5,5), strides=(2,2), activation='relu', name = 'conv1')(x)       # 24 features, 5 pixel x 5 pixel kernel (convolution, feature) window, 2wx2h stride, relu activation
    x = Convolution2D(32, (5,5), strides=(2,2), activation='relu', name = 'conv2')(x)       # 32 features, 5px5p kernel window, 2wx2h stride, relu activation
    x = Convolution2D(64, (5,5), strides=(2,2), activation='relu', name = 'conv3')(x)       # 64 features, 5px5p kernel window, 2wx2h stride, relu
    x = Convolution2D(64, (3,3), strides=(2,2), activation='relu', name = 'conv4')(x)       # 64 features, 3px3p kernel window, 2wx2h stride, relu
    x = Convolution2D(64, (3,3), strides=(1,1), activation='relu', name = 'conv5')(x)       # 64 features, 3px3p kernel window, 1wx1h stride, relu
    # Possibly add MaxPooling (will make it less sensitive to position in image). Camera angle is fixed, so it may not be needed
x = Flatten(name='flattened')(x) # Flatten to 1D (Fully connected)
x = Dense(100, activation='relu', name = 'dense1')(x) # Classify the data into 100 features, make all negatives 0
x = Dropout(.1)(x) # Randomly drop out (turn off) 10% of the neurons (Prevent overfitting)
x = Dense(50, activation='relu', name = 'dense2')(x) # Classify the data into 50 features, make all negatives 0
x = Dropout(.1)(x) # Randomly drop out 10% of the neurons (Prevent overfitting)
#categorical output of the angle
angle_out = Dense(15, activation='softmax', name='angle_out')(x) # Connect every input with every output and output 15 hidden units. Use Softmax to give percentage. 15 categories and find best one based off percentage 0.0-1.0
#continous output of throttle
throttle_out = Dense(1, activation='relu', name='throttle_out')(x) # Reduce to 1 number, Positive number only
model = Model(inputs=[img_in], outputs=[angle_out, throttle_out])
return model
model = default_categorical()
model.load_weights('weights.h5')
img_in = Input(shape=(120, 160, 3), name='img_in')
x = img_in
x = Convolution2D(24, (5,5), strides=(2,2), activation='relu', name='conv1')(x)
x = Convolution2D(32, (5,5), strides=(2,2), activation='relu', name='conv2')(x)
x = Convolution2D(64, (5,5), strides=(2,2), activation='relu', name='conv3')(x)
x = Convolution2D(64, (3,3), strides=(2,2), activation='relu', name='conv4')(x)
conv_5 = Convolution2D(64, (3,3), strides=(1,1), activation='relu', name='conv5')(x)
convolution_part = Model(inputs=[img_in], outputs=[conv_5])
for layer_num in ('1', '2', '3', '4', '5'):
convolution_part.get_layer('conv' + layer_num).set_weights(model.get_layer('conv' + layer_num).get_weights())
from keras import backend as K
inp = convolution_part.input # input placeholder
outputs = [layer.output for layer in convolution_part.layers[1:]] # all layer outputs
functor = K.function([inp], outputs)
import tensorflow as tf
import numpy as np
import pdb
kernel_3x3 = tf.constant(np.array([
[[[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]]]
]), tf.float32)
kernel_5x5 = tf.constant(np.array([
[[[1]], [[1]], [[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]], [[1]], [[1]]],
[[[1]], [[1]], [[1]], [[1]], [[1]]]
]), tf.float32)
layers_kernels = {5: kernel_3x3, 4: kernel_3x3, 3: kernel_5x5, 2: kernel_5x5, 1: kernel_5x5}
layers_strides = {5: [1, 1, 1, 1], 4: [1, 2, 2, 1], 3: [1, 2, 2, 1], 2: [1, 2, 2, 1], 1: [1, 2, 2, 1]}
def compute_visualisation_mask(img):
# pdb.set_trace()
activations = functor([np.array([img])])
activations = [np.reshape(img, (1, img.shape[0], img.shape[1], img.shape[2]))] + activations
upscaled_activation = np.ones((3, 6))
for layer in [5, 4, 3, 2, 1]:
averaged_activation = np.mean(activations[layer], axis=3).squeeze(axis=0) * upscaled_activation
output_shape = (activations[layer - 1].shape[1], activations[layer - 1].shape[2])
x = tf.constant(
np.reshape(averaged_activation, (1,averaged_activation.shape[0],averaged_activation.shape[1],1)),
tf.float32
)
conv = tf.nn.conv2d_transpose(
x, layers_kernels[layer],
output_shape=(1,output_shape[0],output_shape[1], 1),
strides=layers_strides[layer],
padding='VALID'
)
with tf.Session() as session:
result = session.run(conv)
upscaled_activation = np.reshape(result, output_shape)
final_visualisation_mask = upscaled_activation
return (final_visualisation_mask - np.min(final_visualisation_mask))/(np.max(final_visualisation_mask) - np.min(final_visualisation_mask))
import cv2
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import display, HTML
def plot_movie_mp4(image_array):
dpi = 72.0
xpixels, ypixels = image_array[0].shape[0], image_array[0].shape[1]
fig = plt.figure(figsize=(ypixels/dpi, xpixels/dpi), dpi=dpi)
im = plt.figimage(image_array[0])
def animate(i):
im.set_array(image_array[i])
return (im,)
anim = animation.FuncAnimation(fig, animate, frames=len(image_array))
display(HTML(anim.to_html5_video()))
from glob import iglob
imgs = []
alpha = 0.004
beta = 1.0 - alpha
counter = 0
for path in sorted(iglob('imgs/*.jpg')):
img = cv2.imread(path)
salient_mask = compute_visualisation_mask(img)
salient_mask_stacked = np.dstack((salient_mask,salient_mask))
salient_mask_stacked = np.dstack((salient_mask_stacked,salient_mask))
blend = cv2.addWeighted(img.astype('float32'), alpha, salient_mask_stacked, beta, 0.0)
imgs.append(blend)
counter += 1
if counter >= 400:
break
plot_movie_mp4(imgs)
```
| github_jupyter |
# Notebook to visualize location data
```
import csv
# count the number of Starbucks in DC
with open('starbucks.csv') as file:
csvinput = csv.reader(file)
acc = 0
for record in csvinput:
if 'DC' in record[3]:
acc += 1
print( acc )
def parse_locations(csv_iterator,state=''):
""" strip out long/lat and convert to a list of floating point 2-tuples --
optionally, filter by a specified state """
return [ ( float(row[0]), float(row[1])) for row in csv_iterator
if state in row[3]]
def get_locations(filename, state=''):
""" read a list of longitude/latitude pairs from a csv file,
optionally, filter by a specified state """
with open(filename, 'r') as input_file:
csvinput = csv.reader(input_file)
location_data = parse_locations(csvinput,state)
return location_data
# get the data from all starbucks locations
starbucks_locations = get_locations('starbucks.csv')
# get the data from burger locations
burger_locations = get_locations('burgerking.csv') + \
get_locations('mcdonalds.csv') + \
get_locations('wendys.csv')
# look at the first few (10) data points of each
for n in range(10):
print( starbucks_locations[n] )
print()
for n in range(10):
print( burger_locations[n] )
# a common, powerful plotting library
import matplotlib.pyplot as plt
# set figure size
plt.figure(figsize=(12, 9))
# get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
# plot the data
plt.scatter(*zip(*starbucks_locations), s=1)
plt.legend(["Starbucks"])
# jupyter automatically plots this inline. On the console, you need to invoke plt.show()
# FYI: In that case, execution halts until you close the window it opens.
# set figure size
plt.figure(figsize=(12, 9))
# get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
# plot the data
plt.scatter(*zip(*burger_locations), color='green', s=1)
plt.legend(["Burgers"])
# compute a shared bounding box across all four chains
min_lat = min_lon = float('inf')
max_lat = max_lon = float('-inf')
for filename in ('burgerking.csv', 'mcdonalds.csv', 'wendys.csv', 'pizzahut.csv'):
    lat, lon = zip(*get_locations(filename))
    min_lat = min(min_lat, min(lat))
    max_lat = max(max_lat, max(lat))
    min_lon = min(min_lon, min(lon))
    max_lon = max(max_lon, max(lon))
# set figure size
fig = plt.figure(figsize=(12, 9))
#fig = plt.figure()
plt.subplot(2,2,1)
plt.scatter(*zip(*get_locations('burgerking.csv')), color='black', s=1, alpha=0.2)
plt.xlim(min_lat-5,max_lat+5)
plt.ylim(min_lon-5,max_lon+5)
plt.gca().set_aspect('equal')
plt.subplot(2,2,2)
plt.scatter(*zip(*get_locations('mcdonalds.csv')), color='black', s=1, alpha=0.2)
plt.xlim(min_lat-5,max_lat+5)
plt.ylim(min_lon-5,max_lon+5)
plt.gca().set_aspect('equal')
plt.subplot(2,2,3)
plt.scatter(*zip(*get_locations('wendys.csv')), color='black', s=1, alpha=0.2)
plt.xlim(min_lat-5,max_lat+5)
plt.ylim(min_lon-5,max_lon+5)
plt.gca().set_aspect('equal')
plt.subplot(2,2,4)
plt.scatter(*zip(*get_locations('pizzahut.csv')), color='black', s=1, alpha=0.2)
plt.xlim(min_lat-5,max_lat+5)
plt.ylim(min_lon-5,max_lon+5)
plt.gca().set_aspect('equal')
#plt.scatter(*zip(*get_locations('dollar-tree.csv')), color='black', s=1, alpha=0.2)
# get the starbucks in DC
starbucks_dc_locations = get_locations('starbucks.csv', state='DC')
burger_dc_locations = get_locations('burgerking.csv', state='DC') + \
get_locations('mcdonalds.csv', state='DC') + \
get_locations('wendys.csv', state='DC')
# show the first 10 locations of each:
for n in range(10):
print( starbucks_dc_locations[n] )
print()
for n in range(min(10,len(burger_dc_locations))):
print( burger_dc_locations[n] )
# set figure size
plt.figure(figsize=(12, 9))
# get the axes of the plot and set them to be equal-aspect and limited by data
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
# plot the data
plt.scatter(*zip(*starbucks_dc_locations))
plt.scatter(*zip(*burger_dc_locations), color='green')
# We also want to plot the DC boundaries, so we have a better idea where these things are
# the data is contained in DC.txt
# let's inspect it. Observe the format
with open('DC.txt') as file:
for line in file:
print(line,end='') # lines already end with a newline so don't print another
with open('DC.txt') as file:
# get the lower left and upper right coords for the bounding box
ll_long, ll_lat = map(float, next(file).split())
ur_long, ur_lat = map(float, next(file).split())
# get the number of regions
num_records = int(next(file))
# there better just be one
assert num_records == 1
# then a blank line
next(file)
# Title of "county"
county_name = next(file).rstrip() # removes newline at end
# "State" county resides in
state_name = next(file).rstrip()
# this is supposed to be DC
assert state_name == "DC"
# number of points to expect
num_pairs = int(next(file))
dc_boundary = [ tuple(map(float,next(file).split())) for n in range(num_pairs)]
dc_boundary
# add the beginning to the end so that it closes up
dc_boundary.append(dc_boundary[0])
# draw it!
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
plt.plot(*zip(*dc_boundary))
# draw both the starbucks location and DC boundary together
plt.figure(figsize=(12, 9))
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
plt.scatter(*zip(*starbucks_dc_locations))
plt.scatter(*zip(*burger_dc_locations), color='green')
plt.plot(*zip(*dc_boundary))
# draw both the starbucks location and DC boundary together
plt.figure(figsize=(12, 9))
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
plt.scatter(*zip(*get_locations('burgerking.csv', state='DC')), color='red')
plt.scatter(*zip(*get_locations('mcdonalds.csv', state='DC')), color='green')
plt.scatter(*zip(*get_locations('wendys.csv', state='DC')), color='blue')
plt.scatter(*zip(*get_locations('pizzahut.csv', state='DC')), color='yellow')
plt.scatter(*zip(*get_locations('dollar-tree.csv', state='DC')), color='black')
plt.plot(*zip(*dc_boundary))
```
### But where's AU?
```
# draw both the starbucks location and DC boundary together
plt.figure(figsize=(12, 9))
ax = plt.axes()
ax.set_aspect('equal', 'datalim')
plt.scatter(*zip(*starbucks_dc_locations))
plt.scatter(*zip(*burger_dc_locations), color='green')
plt.plot(*zip(*dc_boundary))
# add a red dot right over Anderson
plt.scatter([-77.0897511],[38.9363019],color='red')
from ipyleaflet import Map, basemaps, basemap_to_tiles, Marker, CircleMarker
m = Map(layers=(basemap_to_tiles(basemaps.OpenStreetMap.HOT), ),
center=(38.898082, -77.036696),
zoom=11)
# marker for AU (ipyleaflet's Marker does not take radius/color; the CircleMarkers below do)
marker = Marker(location=(38.937831, -77.088852))
m.add_layer(marker)
for (long,lat) in starbucks_dc_locations:
marker = CircleMarker(location=(lat,long), radius=1, color='steelblue')
m.add_layer(marker);
for (long,lat) in burger_dc_locations:
marker = CircleMarker(location=(lat,long), radius=1, color='green')
m.add_layer(marker);
m
```
| github_jupyter |
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
    L2_regularization_cost = (1 / m) * (lambd / 2) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
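To see where the name "weight decay" comes from, write out a single gradient descent step for a weight matrix $W^{[l]}$ once the regularization gradient $\frac{\lambda}{m} W^{[l]}$ from the backpropagation exercise above is included (writing $dW^{[l]}_{\text{cross-entropy}}$ for the gradient of the unregularized cross-entropy cost, and $\alpha$ for the learning rate):
$$W^{[l]} := W^{[l]} - \alpha \left( dW^{[l]}_{\text{cross-entropy}} + \frac{\lambda}{m} W^{[l]} \right) = \left(1 - \frac{\alpha \lambda}{m}\right) W^{[l]} - \alpha \, dW^{[l]}_{\text{cross-entropy}}$$
The factor $\left(1 - \frac{\alpha \lambda}{m}\right)$ is slightly smaller than 1, so every update shrinks ("decays") the weights a little before applying the usual gradient step.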
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`) and 0 with probability (`1-keep_prob`), by thresholding the values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (if the entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(*A1.shape)   # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < keep_prob)            # Step 2: entries of D1 are 1 (keep) with probability keep_prob, 0 otherwise
    A1 = A1 * D1                     # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob              # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(*A2.shape)   # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < keep_prob)            # Step 2: entries of D2 are 1 (keep) with probability keep_prob, 0 otherwise
    A2 = A2 * D2                     # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob              # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with a 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works for values of keep_prob other than 0.5 as well.
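As a quick numerical illustration of that last point (a small standalone sketch, not part of the graded assignment), you can check that inverted dropout approximately preserves the mean of the activations:
```
import numpy as np

np.random.seed(0)
keep_prob = 0.86
A = np.random.rand(20, 10000)                 # stand-in activations
D = np.random.rand(*A.shape) < keep_prob      # keep mask: 1 with probability keep_prob
A_dropout = (A * D) / keep_prob               # inverted dropout: mask, then rescale
print(A.mean(), A_dropout.mean())             # the two means should be close
```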
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
| github_jupyter |
### Calculating intensity in an inclined direction $\cos\theta \neq 1$
For this first part we are going to use our good old FALC model and calculate intensity in direction other than $\mu = 1$. This is also an essential part of scattering problems!
### We will assume that we are dealing with continuum everywhere!
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({
"text.usetex": False,
"font.size" : 16,
"font.family": "sans-serif",
"font.sans-serif": ["Helvetica"]})
# Let's load the data:
atmos = np.loadtxt("falc_71.dat",unpack=True,skiprows=1)
atmos.shape
# 0th parameter - log optical depth in continuum (lambda = 500 nm)
# 2nd parameter - Temperature
plt.figure(figsize=[9,5])
plt.plot(atmos[0],atmos[2])
llambda = 500E-9 # wavelength: 500 nm, expressed in meters
k = 1.38E-23     # Boltzmann constant [J/K]
c = 3E8          # speed of light [m/s]
h_p = 6.626E-34  # Planck constant [J s]
T = np.copy(atmos[2])
logtau = np.copy(atmos[0])
tau = 10.** logtau
# We will separate now Planck function and the source function:
B = 2*h_p*c**2.0 / llambda**5.0 / (np.exp(h_p*c/(llambda*k*T))-1)
S = np.copy(B)
plt.figure(figsize=[9,5])
plt.plot(logtau,S)
```
#### We want to use a very simple formal solution:
### $$I = I_{inc} e^{-\Delta \tau} + S(1-e^{-\Delta \tau}) $$

```
# Let's define a function that is doing that:
def synthesis_whole(S,tau):
ND = len(S)
intensity = np.zeros(ND)
# At the bottom, intensity is equal to the source function (similar to Blackbody)
intensity[ND-1] = S[ND-1]
for d in range(ND-2,-1,-1):
# optical thickness of the layer:
delta_tau = tau[d+1] - tau[d]
# Mean source function:
S_mean = (S[d+1] + S[d])*0.5
intensity[d] = intensity[d+1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))
return intensity
intensity = synthesis_whole(S,tau)
plt.figure(figsize=[9,5])
plt.plot(logtau,S,label='Source Function')
plt.plot(logtau,intensity,label='Intensity in the direction theta =0 ')
plt.legend()
plt.xlabel("$\\log\\tau$")
```
### Discuss this 3-4 mins.
Write the conclusions here:
Intensity is not the same as the source function. It is "nonlocal" (ok Ivan, we get it, you said million times!)
### Now how about using a grid of $mu$ angles?
### We are solving
### $$\mu \frac{dI}{d\tau} = I-S $$
### $$\frac{dI}{d\tau / \mu} = I-S $$
### $\mu$ goes from 0 to 1
```
NM = 10
ND = len(S)
# create mu grid
mu = np.linspace(0.1,1.0,NM)
intensity_grid = np.zeros([NM,ND])
# Calculate the intensity in each direction:
for m in range(0,10):
intensity_grid[m] = synthesis_whole(S,tau/mu[m])
plt.figure(figsize=[9,5])
plt.plot(mu,intensity_grid[:,0])
plt.ylabel("$I^+(\mu)$")
plt.xlabel("$\mu = \cos\\theta$")
```
### Limb darkening!

### Limb darkening explanation:

# Now the scattering problem!
### Solve, iteratively:
### $$S = \epsilon B + (1-\epsilon) J $$
### $$ J = \frac{1}{4\pi} \int_0^{\pi} \int_0^{2\pi} I(\theta,\phi) \sin \theta d\theta d\phi = \frac{1}{2} \int_{-1}^{1} I(\mu) d\mu $$
and for this second equation we need to formally solve:
### $$ \frac{dI}{d\tau/\mu} = I-S $$
### How to calculate J at the top of the atmosphere from the intensity we just calculated?
Would you all agree that, at the top:
$$J = \frac{1}{2} \int_{-1}^{1} I(\mu) d\mu = \frac{1}{2} \int_{0}^{1} I(\mu) d\mu = \frac{1}{2}\sum_m (I_m + I_{m+1}) \frac{\mu_{m+1}-\mu_{m}}{2}$$
```
# simplified trapezoidal rule:
# Starting value:
J = 0.
for m in range(0,NM-1):
J += (intensity_grid[m,0] + intensity_grid[m+1,0]) * (mu[m+1] - mu[m])/2.0
J *= 0.5
print ("ratio of the mean intensity at the surface to the Planck function at the surface is: ", J/B[0])
```
### Now the next step would be to say:
### $$S = \epsilon B + (1-\epsilon) J $$
There are, however, few hurdles before that:
- We don't know $\epsilon$. Let's assume there is a lot of scattering and set $\epsilon=10^{-2}$
- We don't know J everywhere; we have only calculated it at the top. This one is harder to fix.
To really calculate J everywhere properly we need to solve radiative transfer equation going inward.
### Convince yourself that that is really what we have to do.
How do we solve the RTE inward? Identically to upward except we start from the top and go down. Let's sketch a scheme for that.
```
# Old RTE solver looks like this. Renamed it to out so that we know it is the outgoing intensity
def synthesis_out(S,tau):
ND = len(S)
intensity = np.zeros(ND)
# At the bottom, intensity is equal to the source function (similar to Blackbody)
intensity[ND-1] = S[ND-1]
for d in range(ND-2,-1,-1):
# optical thickness of the layer:
delta_tau = tau[d+1] - tau[d]
# Mean source function:
S_mean = (S[d+1] + S[d])*0.5
intensity[d] = intensity[d+1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))
return intensity
def synthesis_in(S,tau):
ND = len(S)
intensity = np.zeros(ND)
# At the top, intensity is equal to ZERO
intensity[0] = 0.0
# The main difference now is that this "previous" or "upwind" point is the point before (d-1)
for d in range(1,ND,1):
# optical thickness of the layer:
delta_tau = tau[d] - tau[d-1] # Note that I am using previous point now
# Mean source function:
S_mean = (S[d] + S[d-1])*0.5
intensity[d] = intensity[d-1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))
return intensity
# Now we can solve and we will have two different intensities, in and out one
int_out = np.zeros([NM,ND])
int_in = np.zeros([NM,ND])
# We solve in multiple directions:
for m in range(0,10):
int_out[m] = synthesis_out(S,tau/mu[m])
int_in[m] = synthesis_in (S,tau/mu[m])
# It is interesting now to visualize the outgoing intensity, and,
# for example, the intensity at the bottom. How about that:
plt.figure(figsize=[9,5])
plt.plot(mu,int_out[:,0],label='Outgoing intensity')
plt.plot(mu,int_in[:,0], label='Ingoing intensity')
plt.legend()
plt.xlabel("$\\mu$")
plt.ylabel("Intensity")
plt.title("Intensity distribution at the top of the atmosphere")
# Here is the bottom intensity
# Don't be confused by the expression outgoing / ingoing. We mean "at that point"
plt.figure(figsize=[9,5])
plt.plot(mu,int_out[:,ND-1],label='Outgoing intensity')
plt.plot(mu,int_in[:,ND-1], label='Ingoing intensity')
plt.legend()
plt.xlabel("$\\mu$")
plt.ylabel("Intensity")
plt.ylim([0,1.5E14])
plt.title("Intensity distribution at the bottom of the atmosphere")
```
### Take a moment here and try to think about these two things:
- The intensity at the bottom is much more close to each other and much less strongly depends on $\mu$. Would you call that situation "isotropic"?
- Why is this so?
```
# Now we can calculate mean intensity similarly to above. We don't even have to use super strict
# trapezoidal integration. Just add all the intensities and divide by 20
J = np.sum(int_in+int_out,axis=0)/20.
# Then define the epsilon
epsilon = 1E-2
# And then we can update the source function:
S = epsilon * B + (1-epsilon) * J
# And we can now plot B (Planck function, our "old value for the source function") vs our "new"
# source function
plt.figure(figsize=[9,5])
plt.plot(logtau,B,label="Planck Function")
plt.plot(logtau,S,label="Source Function")
plt.xlabel("$\\log \\tau$")
plt.legend()
```
Now: This is new value of the source function! We should now recalculate the new intensity.
But wait, this new intensity now results in a new Source function.
Which then results in the new intensity....
This leads to an iterative process, known as "Lambda" iteration. We repeat the process until (a very slow) convergence. Let's try!
```
# I will define an additional function to make things clearer:
def update_S(int_in, int_out, epsilon, B):
J = np.sum(int_in+int_out,axis=0)/20.
S = epsilon * B + (1-epsilon) * J
return S
# And then we devise an iterative scheme:
max_iter = 100
# Start from the source function equal to the Planck function:
S = np.copy(B)
for i in range(0,max_iter):
# solve RTE:
for m in range(0,10):
int_out[m] = synthesis_out(S,tau/mu[m])
int_in[m] = synthesis_in (S,tau/mu[m])
# update the source function:
S_new = update_S(int_in,int_out,epsilon,B)
# find where did the source function change the most
relative_change = np.amax(np.abs((S_new-S)/S))
print("Relative change in this iteration is: ",relative_change)
# And assign new source function to the old one:
S = np.copy(S_new)
```
### Spend some time figuring this out. This is a common technique for solving various kinds of coupled equations in astrophysics!
To finish the story let's visualize the final solution of this, and then do one more example - constant temperature!
```
plt.figure(figsize=[9,5])
plt.plot(logtau,B,label="Planck Function")
plt.plot(logtau,S,label="Source Function")
plt.xlabel("$\\log \\tau$")
plt.title("Scattering Source function")
plt.legend()
```
### It drops very, very low, even at great depths! This is not exactly the greatest example, since at great depths epsilon would never be this small. Epsilon is generally depth dependent.
## Example - Scattering in an isothermal atmosphere
```
## We will do all the same as before except we will now use a different tau grid, to make atmosphere
# much deeper. And we will use B = const, to reproduce some plots from the literature:
logtau = np.linspace(-4,4,81)
tau = 10.**logtau
ND = len(logtau)
B = np.zeros(ND)
B[:] = 1.0 # Units do not matter, we can say everything is in units of B
S = np.copy(B)
max_iter = 100
# Start from the source function equal to the Planck function:
int_out = np.zeros([NM,ND])
int_in = np.zeros([NM,ND])
for i in range(0,max_iter):
# solve RTE:
for m in range(0,10):
int_out[m] = synthesis_out(S,tau/mu[m])
int_in[m] = synthesis_in (S,tau/mu[m])
# update the source function:
S_new = update_S(int_in,int_out,epsilon,B)
# find where did the source function change the most
relative_change = np.amax(np.abs((S_new-S)/S))
print("Relative change in this iteration is: ",relative_change)
# And assign new source function to the old one:
S = np.copy(S_new)
```
### We have reached some sort of convergence. (Relative change of roughly 1E-3).
Let's have a look at our source function:
```
plt.figure(figsize=[9,5])
plt.plot(logtau,B,label="Planck Function")
plt.plot(logtau,S,label="Source Function")
plt.xlabel("$\\log \\tau$")
plt.title("Scattering (NLTE) Source function")
plt.legend()
```
### This is a famous plot! What it tells us is that even in an isothermal atmosphere the source function, in the presence of scattering, drops below the Planck function. This is an extremely important effect for the formation of strong chromospheric spectral lines.
#### Note that this was a continuum example. The line situation is a tad more complicated and will involve an additional integration over the wavelengths. Numerically, results are different, but the spirit is the same!
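As a quick sanity check (a short sketch that simply reuses the `S`, `B`, and `epsilon` arrays from the cells above), the classical "$\sqrt{\epsilon}$ law" says that at the surface of an isothermal scattering atmosphere the source function should approach $\sqrt{\epsilon}\,B$. With the crude 10-point angular quadrature and the slowly converging lambda iteration used here, the agreement will only be approximate:
```
# compare the surface value of the source function with the sqrt(epsilon) law
print("S(0)/B(0)     =", S[0] / B[0])
print("sqrt(epsilon) =", np.sqrt(epsilon))
```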
| github_jupyter |
# Working with code cells
In this notebook you'll get some experience working with code cells.
First, run the cell below. As I mentioned before, you can run the cell by selecting it, then clicking the "run cell" button above. However, it's easier to run it by pressing **Shift + Enter** so you don't have to take your hands away from the keyboard.
```
# Select the cell, then press Shift + Enter
3**2
```
Shift + Enter runs the cell then selects the next cell or creates a new one if necessary. You can run a cell without changing the selected cell by pressing **Control + Enter**.
The output shows up below the cell. It's printing out the result just like in a normal Python shell. Only the very last result in a cell will be printed though. Otherwise, you'll need to use `print()` to print out any variables.
> **Exercise:** Run the next two cells to test this out. Think about what you expect to happen, then try it.
```
3**2
4**2
print(3**2)
4**2
```
Now try assigning a value to a variable.
```
import string
mindset = 'growth'
codes = [string.ascii_letters.index(char) for char in mindset]
```
There is no output, `'growth'` has been assigned to the variable `mindset`. All variables, functions, and classes created in a cell are available in every other cell in the notebook.
What do you think the output will be when you run the next cell? Feel free to play around with this a bit to get used to how it works.
```
print(mindset[:4])
print(codes)
```
## Code completion
When you're writing code, you'll often use the same variables and functions repeatedly, and you can save time by using code completion. That is, you only need to type part of the name, then press **tab**.
> **Exercise:** Place the cursor at the end of `mind` in the next cell and press **tab**
```
mindset
```
Here, completing `mind` writes out the full variable name `mindset`. If there are multiple names that start the same, you'll get a menu, see below.
```
# Run this cell
mindful = True
# Complete the name here again, choose one from the menu
mindful
```
Remember that variables assigned in one cell are available in all cells. This includes cells that you've previously run and cells that are above where the variable was assigned. Try doing the code completion on the cell third up from here.
Code completion also comes in handy if you're using a module but don't quite remember which function you're looking for or what the available functions are. I'll show you how this works with the [random](https://docs.python.org/3/library/random.html) module. This module provides functions for generating random numbers, often useful for making fake data or picking random items from lists.
```
# Run this
import random
```
> **Exercise:** In the cell below, place the cursor after `random.` then press **tab** to bring up the code completion menu for the module. Choose `random.randint` from the list, you can move through the menu with the up and down arrow keys.
```
random.randint
```
Above you should have seen all the functions available from the random module. Maybe you're looking to draw random numbers from a [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution), also known as the normal distribution or the "bell curve".
## Tooltips
You see there is the function `random.gauss` but how do you use it? You could check out the [documentation](https://docs.python.org/3/library/random.html), or just look up the documentation in the notebook itself.
> **Exercise:** In the cell below, place the cursor after `random.gauss` then press **shift + tab** to bring up the tooltip.
```
random.gauss
```
You should have seen some simple documentation like this:
Signature: random.gauss(mu, sigma)
Docstring:
Gaussian distribution.
The function takes two arguments, `mu` and `sigma`. These are the standard symbols for the mean and the standard deviation, respectively, of the Gaussian distribution. Maybe you're not familiar with this though, and you need to know what the parameters actually mean. This will happen often, you'll find some function, but you need more information. You can show more information by pressing **shift + tab** twice.
> **Exercise:** In the cell below, show the full help documentation by pressing **shift + tab** twice.
```
random.gauss
```
You should see more help text like this:
mu is the mean, and sigma is the standard deviation. This is
slightly faster than the normalvariate() function.
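For example, you could draw a few samples from a normal distribution with a mean of 70 and a standard deviation of 10 (the numbers are arbitrary, chosen only for illustration):
```
import random

random.seed(42)   # seed the generator so the samples are reproducible
samples = [random.gauss(70, 10) for _ in range(5)]
print(samples)
```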
| github_jupyter |