#Plotting Velocities and Tracers on Vertical Planes
This notebook contains discussion, examples, and best practices for plotting velocity field and tracer results from NEMO on vertical planes.
Topics include:
* Plotting colour meshes of velocity on vertical sections through the domain
* Using `nc_tools.timestamp()` to get time stamps from results datasets
* Plotting salinity as a colour mesh on thalweg section
We'll start with the usual imports, and activation of the Matplotlib inline backend:
```
from __future__ import division, print_function
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
from salishsea_tools import (
nc_tools,
viz_tools,
)
%matplotlib inline
```
Let's look at the results from the 17-Dec-2003 to 26-Dec-2003 spin-up run.
We'll also load the bathymetry so that we can plot land masks.
```
u_vel = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/17dec26dec/SalishSea_1d_20031217_20031226_grid_U.nc')
v_vel = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/17dec26dec/SalishSea_1d_20031217_20031226_grid_V.nc')
ugrid = u_vel.variables['vozocrtx']
vgrid = v_vel.variables['vomecrty']
zlevels = v_vel.variables['depthv']
timesteps = v_vel.variables['time_counter']
grid = nc.Dataset('/data/dlatorne/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')
```
##Velocity Component Colour Mesh on a Vertical Plane
There's really not much new involved in plotting on vertical planes
compared to the horizontal plane plots that we've done in the previous notebooks.
Here are plots of the v velocity component crossing a vertical plane
defined by a section line running from just north of Howe Sound
to a little north of Nanaimo,
and the surface current streamlines in the area
with the section line shown for orientation.
Things to note:
* The use of the `invert_yaxis()` method on the vertical plane y-axis
to make the depth scale go from 0 at the surface to positive depths below,
and the resulting reversal of the limit values passed to the `set_ylim()` method
* The use of the `set_axis_bgcolor()` method to make the extension of the axis area
below the maximum depth appear consistent with the rest of the non-water regions
```
fig, (axl, axr) = plt.subplots(1, 2, figsize=(16, 8))
land_colour = 'burlywood'
# Define the v velocity component slice to plot
t, zmax, ylocn = -1, 41, 500
section_slice = np.arange(208, 293)
timestamp = nc_tools.timestamp(v_vel, t)
# Slice and mask the v array
vgrid_tzyx = np.ma.masked_values(vgrid[t, :zmax, ylocn, section_slice], 0)
# Plot the v velocity colour mesh
cmap = plt.get_cmap('bwr')
cmap.set_bad(land_colour)
mesh = axl.pcolormesh(
section_slice[:], zlevels[:zmax], vgrid_tzyx,
cmap=cmap, vmin=-0.1, vmax=0.1,
)
axl.invert_yaxis()
cbar = fig.colorbar(mesh, ax=axl)
cbar.set_label('v Velocity [{.units}]'.format(vgrid))
# Axes labels and title
axl.set_xlabel('x Index')
axl.set_ylabel('{0.long_name} [{0.units}]'.format(zlevels))
axl.set_title(
'24h Average v Velocity at y={y} on {date}'
.format(y=ylocn, date=timestamp.format('DD-MMM-YYYY')))
# Axes limits and grid
axl.set_xlim(section_slice[1], section_slice[-1])
axl.set_ylim(zlevels[zmax - 2] + 10, 0)
axl.set_axis_bgcolor(land_colour)
axl.grid()
# Define surface current magnitude slice
x_slice = np.arange(150, 350)
y_slice = np.arange(425, 575)
# Slice and mask the u and v arrays
ugrid_tzyx = np.ma.masked_values(ugrid[t, 0, y_slice, x_slice], 0)
vgrid_tzyx = np.ma.masked_values(vgrid[t, 0, y_slice, x_slice], 0)
# "Unstagger" the velocity values by interpolating them to the T-grid points
# and calculate the surface current speeds
u_tzyx, v_tzyx = viz_tools.unstagger(ugrid_tzyx, vgrid_tzyx)
speeds = np.sqrt(np.square(u_tzyx) + np.square(v_tzyx))
max_speed = viz_tools.calc_abs_max(speeds)
# Plot section line on surface streamlines map
viz_tools.set_aspect(axr)
axr.streamplot(
x_slice[1:], y_slice[1:], u_tzyx, v_tzyx,
linewidth=7*speeds/max_speed,
)
viz_tools.plot_land_mask(
axr, grid, xslice=x_slice, yslice=y_slice, color=land_colour)
axr.plot(
section_slice, ylocn*np.ones_like(section_slice),
linestyle='solid', linewidth=3, color='black',
label='Section Line',
)
# Axes labels and title
axr.set_xlabel('x Index')
axr.set_ylabel('y Index')
axr.set_title(
'24h Average Surface Streamlines on {date}'
.format(date=timestamp.format('DD-MMM-YYYY')))
legend = axr.legend(loc='best', fancybox=True, framealpha=0.25)
# Axes limits and grid
axr.set_xlim(x_slice[0], x_slice[-1])
axr.set_ylim(y_slice[0], y_slice[-1])
axr.grid()
```
The code above uses the `nc_tools.timestamp()` function
to obtain the time stamp of the plotted results from the dataset
and formats that value as a date in the axes titles.
Documentation for `nc_tools.timestamp()`
(and the other functions in the `nc_tools` module) is available
at http://salishsea-meopar-tools.readthedocs.org/en/latest/SalishSeaTools/salishsea-tools.html#nc_tools.timestamp
and via shift-TAB or the `help()` command in notebooks:
```
help(nc_tools.timestamp)
```
Passing a tuple or list of time indices,
e.g. `[0, 3, 6, 9]`,
to `nc_tools.timestamp()` causes a list of time stamp values to be returned.
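For example, a minimal sketch (assuming the `v_vel` dataset opened above):
```
# Passing several time indices returns a list of Arrow timestamps
timestamps = nc_tools.timestamp(v_vel, [0, 3, 6, 9])
[ts.format('DD-MMM-YYYY') for ts in timestamps]
```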
The time stamp value(s) returned are [Arrow](http://crsmithdev.com/arrow/) instances.
The `format()` method can be used to produce a string representation of
a time stamp,
for example:
```
timestamp.format('YYYY-MM-DD HH:mm:ss')
```
NEMO results are calculated using the UTC time zone
but `Arrow` time stamps can easily be converted to other time zones:
```
timestamp.to('Canada/Pacific')
```
Please see the [Arrow](http://crsmithdev.com/arrow/) docs for other useful methods
and ways of manipulating dates and times in Python.
##Salinity Colour Mesh on Thalweg Section
For this plot we'll look at results from the spin-up run that includes 27-Sep-2003
because it shows deep water renewal in the Strait of Georgia.
```
tracers = nc.Dataset('/ocean/dlatorne/MEOPAR/SalishSea/results/spin-up/18sep27sep/SalishSea_1d_20030918_20030927_grid_T.nc')
```
The salinity netCDF4 variable needs to be converted to a NumPy array.
```
sal = tracers.variables['vosaline']
npsal = sal[:]
zlevels = tracers.variables['deptht']
```
The thalweg is a line that connects the deepest points
of successive cross-sections through the model domain.
The grid indices of the thalweg are calculated in the
[compute_thalweg.ipynb](https://nbviewer.jupyter.org/github/SalishSeaCast/tools/blob/master/analysis_tools/compute_thalweg.ipynb)
notebook and stored as `(j, i)` ordered pairs in the
`tools/analysis_tools/thalweg.txt` file:
```
!head thalweg.txt
```
We use the NumPy `loadtxt()` function to read the thalweg points
into a pair of arrays.
The `unpack` argument causes the result to be transposed from an
array of ordered pairs to arrays of `j` and `i` values.
```
thalweg = np.loadtxt('/data/dlatorne/MEOPAR/tools/bathymetry/thalweg_working.txt', dtype=int, unpack=True)
```
Plotting salinity along the thalweg is an example of plotting
a model result quantity on an arbitrary section through the domain.
```
# Set up the figure and axes
fig, (axl, axcb, axr) = plt.subplots(1, 3, figsize=(16, 8))
land_colour = 'burlywood'
for ax in (axl, axr):
ax.set_axis_bgcolor(land_colour)
axl.set_position((0.125, 0.125, 0.6, 0.775))
axcb.set_position((0.73, 0.125, 0.02, 0.775))
axr.set_position((0.83, 0.125, 0.2, 0.775))
# Plot thalweg points on bathymetry map
viz_tools.set_aspect(axr)
cmap = plt.get_cmap('winter_r')
cmap.set_bad(land_colour)
bathy = grid.variables['Bathymetry']
x_slice = np.arange(bathy.shape[1])
y_slice = np.arange(200, 800)
axr.pcolormesh(x_slice, y_slice, bathy[y_slice, x_slice], cmap=cmap)
axr.plot(
thalweg[1], thalweg[0],
linestyle='-', marker='+', color='red',
label='Thalweg Points',
)
legend = axr.legend(loc='best', fancybox=True, framealpha=0.25)
axr.set_xlabel('x Index')
axr.set_ylabel('y Index')
axr.grid()
# Plot 24h average salinity at all depths along thalweg line
t = -1  # 27-Sep-2003
smin, smax, dels = 26, 34, 0.5
cmap = plt.get_cmap('rainbow')
cmap.set_bad(land_colour)
sal_0 = npsal[t, :, thalweg[0], thalweg[1]]
sal_tzyx = np.ma.masked_values(sal_0, 0)
x, z = np.meshgrid(np.arange(thalweg.shape[1]), zlevels)
mesh = axl.pcolormesh(x, z, sal_tzyx.T, cmap=cmap, vmin=smin, vmax=smax)
cbar = plt.colorbar(mesh, cax=axcb)
cbar.set_label('Practical Salinity')
clines = axl.contour(x, z, sal_tzyx.T, np.arange(smin, smax, dels), colors='black')
axl.clabel(clines, fmt='%1.1f', inline=True)
axl.invert_yaxis()
axl.set_xlim(0, thalweg[0][-1])
axl.set_xlabel('x Index')
axl.set_ylabel('{0.long_name} [{0.units}]'.format(zlevels))
axl.grid()
```
**This notebook is an exercise in the [Natural Language Processing](https://www.kaggle.com/learn/natural-language-processing) course. You can reference the tutorial at [this link](https://www.kaggle.com/matleonard/word-vectors).**
---
# Vectorizing Language
Embeddings are both conceptually clever and practically effective.
So let's try them for the sentiment analysis model you built for the restaurant. Then you can find the most similar review in the data set given some example text. It's a task where you can easily judge for yourself how well the embeddings work.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import spacy
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.nlp.ex3 import *
print("\nSetup complete")
# Load the large model to get the vectors
nlp = spacy.load('en_core_web_lg')
review_data = pd.read_csv('../input/nlp-course/yelp_ratings.csv')
review_data.head()
```
Here's an example of loading some document vectors.
Calculating 44,500 document vectors takes about 20 minutes, so we'll get only the first 100. To save time, we'll load pre-saved document vectors for the hands-on coding exercises.
```
reviews = review_data[:100]
# We just want the vectors so we can turn off other models in the pipeline
with nlp.disable_pipes():
vectors = np.array([nlp(review.text).vector for idx, review in reviews.iterrows()])
vectors.shape
```
The result is a matrix of 100 rows and 300 columns.
Why 100 rows?
Because we have one row for each of the 100 reviews.
Why 300 columns?
This is the same length as word vectors. See if you can figure out why document vectors have the same length as word vectors (some knowledge of linear algebra or vector math would be needed to figure this out).
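One way to investigate this empirically (a sketch, assuming the `nlp` model and `review_data` loaded above) is to compare a document vector with the mean of its token vectors:
```
# Compare a document vector to the average of its token (word) vectors
with nlp.disable_pipes():
    doc = nlp(review_data.iloc[0].text)
token_vectors = np.array([token.vector for token in doc])
np.allclose(doc.vector, token_vectors.mean(axis=0))
```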
Go ahead and run the following cell to load in the rest of the document vectors.
```
# Loading all document vectors from file
vectors = np.load('../input/nlp-course/review_vectors.npy')
```
# 1) Training a Model on Document Vectors
Next you'll train a `LinearSVC` model using the document vectors. It runs quickly and works well in high-dimensional settings like the one you have here.
After running the LinearSVC model, you might try experimenting with other types of models to see whether it improves your results.
```
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(vectors, review_data.sentiment,
test_size=0.1, random_state=1)
# Create the LinearSVC model
model = LinearSVC(random_state=1, dual=False)
# Fit the model
model.fit(X_train, y_train)
# Uncomment and run to see model accuracy
print(f'Model test accuracy: {model.score(X_test, y_test)*100:.3f}%')
# Uncomment to check your work
q_1.check()
# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()
# Scratch space in case you want to experiment with other models
from sklearn.neural_network import MLPClassifier
second_model = MLPClassifier(hidden_layer_sizes=(128,32,),
early_stopping=True, random_state=1)
second_model.fit(X_train, y_train)
print(f'Model test accuracy: {second_model.score(X_test, y_test)*100:.3f}%')
```
# Document Similarity
Using the tea house review from the tutorial (reproduced in part 3 below), find the most similar review in the dataset with cosine similarity.
# 2) Centering the Vectors
Sometimes people center document vectors when calculating similarities. That is, they calculate the mean vector from all documents, and they subtract this from each individual document's vector. Why do you think this could help with similarity metrics?
Run the following line after you've decided your answer.
```
# Check your answer (Run this code cell to receive credit!)
#q_2.solution()
q_2.check()
```
# 3) Find the most similar review
Given an example review below, find the most similar document within the Yelp dataset using the cosine similarity.
```
review = """I absolutely love this place. The 360 degree glass windows with the
Yerba buena garden view, tea pots all around and the smell of fresh tea everywhere
transports you to what feels like a different zen zone within the city. I know
the price is slightly more compared to the normal American size, however the food
is very wholesome, the tea selection is incredible and I know service can be hit
or miss often but it was on point during our most recent visit. Definitely recommend!
I would especially recommend the butternut squash gyoza."""
def cosine_similarity(a, b):
return np.dot(a, b)/np.sqrt(a.dot(a)*b.dot(b))
review_vec = nlp(review).vector
## Center the document vectors
# Calculate the mean for the document vectors, should have shape (300,)
vec_mean = vectors.mean(axis=0)
# Subtract the mean from the vectors
centered = vectors - vec_mean
# Calculate similarities for each document in the dataset
# Make sure to subtract the mean from the review vector
review_centered = review_vec - vec_mean
sims = np.array([cosine_similarity(v, review_centered) for v in centered])
# Get the index for the most similar document
most_similar = sims.argmax()
# Uncomment to check your work
q_3.check()
# Lines below will give you a hint or solution code
#q_3.hint()
#q_3.solution()
print(review_data.iloc[most_similar].text)
```
Even though there are many different sorts of businesses in our Yelp dataset, you should have found another tea shop.
# 4) Looking at similar reviews
If you look at other similar reviews, you'll see many coffee shops. Why do you think reviews for coffee are similar to the example review which mentions only tea?
```
# Check your answer (Run this code cell to receive credit!)
#q_4.solution()
q_4.check()
```
# Congratulations!
You've finished the NLP course. It's an exciting field that will help you make use of vast amounts of data you didn't know how to work with before.
This course should be just your introduction. Try a project **[with text](https://www.kaggle.com/datasets?tags=14104-text+data)**. You'll have fun with it, and your skills will continue growing.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161466) to chat with other Learners.*
<img src="../img/logo_white_bkg_small.png" align="right" />
# Worksheet 3: Detecting Domain Generation Algorithm (DGA) Domains against DNS
This worksheet covers concepts covered in the second half of Module 6 - Hunting with Data Science. It should take no more than 20-30 minutes to complete. Please raise your hand if you get stuck.
Your objective is to reduce a dataset that has thousands of domain names and identify those created by DGA.
## Import the Libraries
For this exercise, we will be using:
* Pandas (http://pandas.pydata.org/pandas-docs/stable/)
* Flare (https://github.com/austin-taylor/flare)
* Json (https://docs.python.org/3/library/json.html)
* WHOIS (https://pypi.python.org/pypi/whois)
Beacon writeup: <a href="http://www.austintaylor.io/detect/beaconing/intrusion/detection/system/command/control/flare/elastic/stack/2017/06/10/detect-beaconing-with-flare-elasticsearch-and-intrusion-detection-systems/">Detect Beaconing</a>
<a href="../answers/Worksheet 10 - Hunting with Data Science - Answers.ipynb"> Answers for this section </a>
```
from flare.data_science.features import entropy
from flare.data_science.features import dga_classifier
from flare.data_science.features import domain_tld_extract
from flare.tools.alexa import Alexa
from pandas.io.json import json_normalize
from whois import whois
import pandas as pd
import json
import warnings
warnings.filterwarnings('ignore')
```
## This is an example of how a domain generation algorithm (DGA) generates domains.
```
def generate_domain(year, month, day):
"""Generates a domain name for the given date."""
domain = ""
for i in range(16):
year = ((year ^ 8 * year) >> 11) ^ ((year & 0xFFFFFFF0) << 17)
month = ((month ^ 4 * month) >> 25) ^ 16 * (month & 0xFFFFFFF8)
day = ((day ^ (day << 13)) >> 19) ^ ((day & 0xFFFFFFFE) << 12)
domain += chr(((year ^ month ^ day) % 25) + 97)
return domain + '.com'
generate_domain(2017, 6, 23)
```
### A large portion of data science is data preparation. In this exercise, we'll take output from Suricata's eve.json file and extract the DNS records so we can find anything using DGA.
First you'll need to **unzip the large_eve_json.zip file** in the data directory and specify the path.
```
eve_json = '../data/large_eve.json'
```
### Next read the data in and build a list
```
all_suricata_data = [json.loads(record) for record in open(eve_json).readlines()]
len(all_suricata_data)
```
### Our output from Suricata has 746,909 records, and we are only interested in DNS records. Let's narrow the data down to records that contain `dns`.
### Read in the Suricata data and keep each record if `dns` is among its keys. This prepares the data for pandas' `json_normalize` feature.
```
# YOUR CODE (hint check if dns is in key)
len(dns_records)
```
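One possible solution is sketched below; it assumes each parsed record is a dict and simply keeps the events whose keys include `dns`, following the hint above.
```
# A sketch of one possible solution: keep only the events that have a 'dns' key
dns_records = [record for record in all_suricata_data if 'dns' in record]
len(dns_records)
```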
### Down to 21484 -- much better.
### Somewhere in our _21484_ records is communication from infected computers. It's up to you to narrow the results down and find the malicious DNS request.
```
dns_records[2]
```
### The data is nested json and has varying lengths, so you will need to use the json_normalize feature
```
suricata_df = json_normalize(dns_records)
suricata_df.shape
suricata_df.head(2)
```
### Next we need to filter the data down to only the A records
```
# YOUR CODE to filter out all A records
a_records.shape
```
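A sketch of one way to do this is shown below; it assumes `json_normalize` produced a `dns.rrtype` column (alongside the `dns.rrname` column used later).
```
# A sketch -- assumes a 'dns.rrtype' column exists after json_normalize
a_records = suricata_df[suricata_df['dns.rrtype'] == 'A']
a_records.shape
```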
### By keeping only the A records, our dataset is down to 2849 rows.
```
a_records['dns.rrname'].value_counts().head()
```
### Next we can figure out how many unique DNS names there are.
```
a_records_unique = pd.DataFrame(a_records['dns.rrname'].unique(), columns=['dns_rrname'])
```
### Should have a much smaller set of domains to process now
```
a_records_unique.head()
```
### Next we need to extract the top-level domains (remove subdomains) using flare so we can feed them to our classifier
```
#Apply extract to the dns_rrname and create a column named domain_tld
a_records_unique.head()
```
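A sketch of the extraction step, assuming the `dns_rrname` column created above and flare's `domain_tld_extract` helper imported earlier:
```
# A sketch: apply domain_tld_extract to each rrname and store the result in a new column
a_records_unique['domain_tld'] = a_records_unique['dns_rrname'].apply(domain_tld_extract)
a_records_unique.head()
```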
### Train DGA Classifier with dictionary words, n-grams and DGA Domains
```
dga_predictor = dga_classifier()
```
You can apply dga prediction to a column by using dga_predictor.predict('baddomain.com')
```
# YOUR CODE
```
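A sketch of applying the classifier to every extracted domain, assuming the `domain_tld` column from the previous step:
```
# A sketch: label each top-level domain with the classifier's prediction
a_records_unique['prediction'] = a_records_unique['domain_tld'].apply(dga_predictor.predict)
```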
### A quick sampling of the data shows our prediction has labelled our data.
```
a_records_unique.sample(10)
```
Create a new dataframe called dga_df and filter it out to only show names predicted as DGA
```
# YOUR CODE
dga_df
```
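A sketch of the filtering step; it assumes the classifier marks DGA domains with the label `'dga'` in the `prediction` column (adjust the label to whatever your classifier actually returns):
```
# A sketch -- assumes the classifier returns the string 'dga' for DGA domains
dga_df = a_records_unique[a_records_unique['prediction'] == 'dga']
dga_df
```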
### Our dataset is down to 5 results! Let's run the domains through Alexa to see if any are in the top 1 million
```
alexa = Alexa()
# Example: dga_df['in_alexa'] = dga_df.dns_rrname.apply(alexa.domain_in_alexa)
def get_creation_date(domain):
try:
lookup = whois(domain)
output = lookup.get('creation_date','No results')
except:
output = 'No Creation Date!'
if output is None:
output = 'No Creation Date!'
return output
get_creation_date('google.com')
```
### It appears none of our domains are in Alexa, but let's check creation dates.
```
# YOUR CODE
```
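A sketch of the final checks, using the `alexa.domain_in_alexa` example given above and the `get_creation_date` helper defined earlier (the `dns_rrname` and `domain_tld` columns are assumed from the earlier steps):
```
# A sketch: check Alexa membership and WHOIS creation dates for the remaining domains
dga_df['in_alexa'] = dga_df['dns_rrname'].apply(alexa.domain_in_alexa)
dga_df['creation_date'] = dga_df['domain_tld'].apply(get_creation_date)
dga_df
```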
### Congrats! If you did this exercise right, you should have 2 domains with no creation date which were generated by DGA! Bonus points if you can figure out the dates for each domain.
# Ungraded Lab: Mask R-CNN Image Segmentation Demo
In this lab, you will see how to use a [Mask R-CNN](https://arxiv.org/abs/1703.06870) model from Tensorflow Hub for object detection and instance segmentation. This means that aside from the bounding boxes, the model is also able to predict segmentation masks for each instance of a class in the image. You have already encountered most of the commands here when you worked with the Object Detection API and you will see how you can use it with instance segmentation models. Let's begin!
*Note: You should use a TPU runtime for this colab because of the processing requirements for this model. We have already enabled it for you but if you'll be using it in another colab, you can change the runtime from `Runtime --> Change runtime type` then select `TPU`.*
## Installation
As mentioned, you will be using the Tensorflow 2 [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). You can do that by cloning the [Tensorflow Model Garden](https://github.com/tensorflow/models) and installing the object detection packages just like you did in Week 2.
```
# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
```
## Import libraries
```
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from six import BytesIO
from PIL import Image
from six.moves.urllib.request import urlopen
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops
tf.get_logger().setLevel('ERROR')
%matplotlib inline
```
## Utilities
For convenience, you will use a function to convert an image to a numpy array. You can pass in a relative path to an image (e.g. to a local directory) or a URL. You can see this in the `TEST_IMAGES` dictionary below. Some paths point to test images that come with the API package (e.g. `Beach`) while others are URLs that point to images online (e.g. `Street`).
```
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: the file path to the image
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
image = None
if(path.startswith('http')):
response = urlopen(path)
image_data = response.read()
image_data = BytesIO(image_data)
image = Image.open(image_data)
else:
image_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(image_data))
(im_width, im_height) = (image.size)
return np.array(image.getdata()).reshape(
(1, im_height, im_width, 3)).astype(np.uint8)
# dictionary with image tags as keys, and image paths as values
TEST_IMAGES = {
'Beach' : 'models/research/object_detection/test_images/image2.jpg',
'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
# By 663highland, Source: https://commons.wikimedia.org/wiki/File:Kitano_Street_Kobe01s5s4110.jpg
'Street' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/Kitano_Street_Kobe01s5s4110.jpg/2560px-Kitano_Street_Kobe01s5s4110.jpg'
}
```
## Load the Model
Tensorflow Hub provides a Mask-RCNN model that is built with the Object Detection API. You can read about the details [here](https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1). Let's first load the model and see how to use it for inference in the next section.
```
model_display_name = 'Mask R-CNN Inception ResNet V2 1024x1024'
model_handle = 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
# This will take 10 to 15 minutes to finish
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
```
## Inference
You will use the model you just loaded to do instance segmentation on an image. First, choose one of the test images you specified earlier and load it into a numpy array.
```
# Choose one and use as key for TEST_IMAGES below:
# ['Beach', 'Street', 'Dogs','Phones']
image_path = TEST_IMAGES['Street']
image_np = load_image_into_numpy_array(image_path)
plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
```
You can run inference by simply passing the numpy array of a *single* image to the model. Take note that this model does not support batching. As you've seen in the notebooks in Week 2, this will output a dictionary containing the results. These are described in the `Outputs` section of the [documentation](https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1).
```
# run inference
results = hub_model(image_np)
# output values are tensors and we only need the numpy()
# parameter when we visualize the results
result = {key:value.numpy() for key,value in results.items()}
# print the keys
for key in result.keys():
print(key)
```
## Visualizing the results
You can now plot the results on the original image. First, you need to create the `category_index` dictionary that will contain the class IDs and names. The model was trained on the [COCO2017 dataset](https://cocodataset.org/) and the API package has the labels saved in a different format (i.e. `mscoco_label_map.pbtxt`). You can use the [create_category_index_from_labelmap](https://github.com/tensorflow/models/blob/5ee7a4627edcbbaaeb8a564d690b5f1bc498a0d7/research/object_detection/utils/label_map_util.py#L313) internal utility function to convert this to the required dictionary format.
```
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
# sample output
print(category_index[1])
print(category_index[2])
print(category_index[4])
```
Next, you will preprocess the masks then finally plot the results.
* The result dictionary contains a `detection_masks` key with a segmentation mask for each box. These will first be converted to masks that overlay the full image.
* You will also select mask pixel values that are above a certain threshold. We picked a value of `0.6` but feel free to modify this and see what results you will get. If you pick something lower, then you'll most likely notice mask pixels that are outside the object.
* As you've seen before, you can use `visualize_boxes_and_labels_on_image_array()` to plot the results on the image. The difference this time is the parameter `instance_masks` and you will pass in the reframed detection boxes to see the segmentation masks on the image.
You can see how all these are handled in the code below.
```
# Handle models with masks:
label_id_offset = 0
image_np_with_mask = image_np.copy()
if 'detection_masks' in result:
# convert np.arrays to tensors
detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])
# reframe the bounding box masks to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes,
image_np.shape[1], image_np.shape[2])
# filter mask pixel values that are above a specified threshold
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.6,
tf.uint8)
# get the numpy array
result['detection_masks_reframed'] = detection_masks_reframed.numpy()
# overlay labeled boxes and segmentation masks on the image
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_mask[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=100,
min_score_thresh=.70,
agnostic_mode=False,
instance_masks=result.get('detection_masks_reframed', None),
line_thickness=8)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
```
# Joint Probability
This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
The following cell downloads `utils.py`, which contains some utility functions we'll need.
```
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py')
```
If everything we need is installed, the following cell should run with no error messages.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Review
So far we have been working with distributions of only one variable. In this notebook we'll take a step toward multivariate distributions, starting with two variables.
We'll use cross-tabulation to compute a **joint distribution**, then use the joint distribution to compute **conditional distributions** and **marginal distributions**.
We will re-use `pmf_from_seq`, which I introduced in a previous notebook.
```
def pmf_from_seq(seq):
"""Make a PMF from a sequence of values.
seq: sequence
returns: Series representing a PMF
"""
pmf = pd.Series(seq).value_counts(sort=False).sort_index()
pmf /= pmf.sum()
return pmf
```
## Cross tabulation
To understand joint distributions, I'll start with cross tabulation. And to demonstrate cross tabulation, I'll generate a dataset of colors and fruits.
Here are the possible values.
```
colors = ['red', 'yellow', 'green']
fruits = ['apple', 'banana', 'grape']
```
And here's a random sample of 100 fruits.
```
np.random.seed(2)
fruit_sample = np.random.choice(fruits, 100, replace=True)
```
We can use `pmf_from_seq` to compute the distribution of fruits.
```
pmf_fruit = pmf_from_seq(fruit_sample)
pmf_fruit
```
And here's what it looks like.
```
pmf_fruit.plot.bar(color='C0')
plt.ylabel('Probability')
plt.title('Distribution of fruit');
```
Similarly, here's a random sample of colors.
```
color_sample = np.random.choice(colors, 100, replace=True)
```
Here's the distribution of colors.
```
pmf_color = pmf_from_seq(color_sample)
pmf_color
```
And here's what it looks like.
```
pmf_color.plot.bar(color='C1')
plt.ylabel('Probability')
plt.title('Distribution of colors');
```
Looking at these distributions, we know the proportion of each fruit, ignoring color, and we know the proportion of each color, ignoring fruit type.
But if we only have the distributions and not the original data, we don't know how many apples are green, for example, or how many yellow fruits are bananas.
We can compute that information using `crosstab`, which computes the number of cases for each combination of fruit type and color.
```
xtab = pd.crosstab(color_sample, fruit_sample,
rownames=['color'], colnames=['fruit'])
xtab
```
The result is a DataFrame with colors along the rows and fruits along the columns.
## Heatmap
The following function plots a cross tabulation using a pseudo-color plot, also known as a heatmap.
It represents each element of the cross tabulation with a colored square, where the color corresponds to the magnitude of the element.
The following function generates a heatmap using the Matplotlib function `pcolormesh`:
```
def plot_heatmap(xtab):
"""Make a heatmap to represent a cross tabulation.
xtab: DataFrame containing a cross tabulation
"""
plt.pcolormesh(xtab)
# label the y axis
ys = xtab.index
plt.ylabel(ys.name)
locs = np.arange(len(ys)) + 0.5
plt.yticks(locs, ys)
# label the x axis
xs = xtab.columns
plt.xlabel(xs.name)
locs = np.arange(len(xs)) + 0.5
plt.xticks(locs, xs)
plt.colorbar()
plt.gca().invert_yaxis()
plot_heatmap(xtab)
```
## Joint Distribution
A cross tabulation represents the "joint distribution" of two variables, which is a complete description of two distributions, including all of the conditional distributions.
If we normalize `xtab` so the sum of the elements is 1, the result is a joint PMF:
```
joint = xtab / xtab.to_numpy().sum()
joint
```
Each column in the joint PMF represents the conditional distribution of color for a given fruit.
For example, we can select a column like this:
```
col = joint['apple']
col
```
If we normalize it, we get the conditional distribution of color for a given fruit.
```
col / col.sum()
```
Each row of the cross tabulation represents the conditional distribution of fruit for each color.
If we select a row and normalize it, like this:
```
row = xtab.loc['red']
row / row.sum()
```
The result is the conditional distribution of fruit type for a given color.
## Conditional distributions
The following function takes a joint PMF and computes conditional distributions:
```
def conditional(joint, name, value):
"""Compute a conditional distribution.
joint: DataFrame representing a joint PMF
name: string name of an axis
value: value to condition on
returns: Series representing a conditional PMF
"""
if joint.columns.name == name:
cond = joint[value]
elif joint.index.name == name:
cond = joint.loc[value]
return cond / cond.sum()
```
The second argument is a string that identifies which axis we want to select; in this example, `'fruit'` means we are selecting a column, like this:
```
conditional(joint, 'fruit', 'apple')
```
And `'color'` means we are selecting a row, like this:
```
conditional(joint, 'color', 'red')
```
**Exercise:** Compute the conditional distribution of color for bananas. What is the probability that a banana is yellow?
```
# Solution
cond = conditional(joint, 'fruit', 'banana')
cond
# Solution
cond['yellow']
```
## Marginal distributions
Given a joint distribution, we can compute the unconditioned distribution of either variable.
If we sum along the rows, which is axis 0, we get the distribution of fruit type, regardless of color.
```
joint.sum(axis=0)
```
If we sum along the columns, which is axis 1, we get the distribution of color, regardless of fruit type.
```
joint.sum(axis=1)
```
These distributions are called "[marginal](https://en.wikipedia.org/wiki/Marginal_distribution#Multivariate_distributions)" because of the way they are often displayed. We'll see an example later.
As we did with conditional distributions, we can write a function that takes a joint distribution and computes the marginal distribution of a given variable:
```
def marginal(joint, name):
"""Compute a marginal distribution.
joint: DataFrame representing a joint PMF
name: string name of an axis
returns: Series representing a marginal PMF
"""
if joint.columns.name == name:
return joint.sum(axis=0)
elif joint.index.name == name:
return joint.sum(axis=1)
```
Here's the marginal distribution of fruit.
```
pmf_fruit = marginal(joint, 'fruit')
pmf_fruit
```
And the marginal distribution of color:
```
pmf_color = marginal(joint, 'color')
pmf_color
```
The sum of the marginal PMF is the same as the sum of the joint PMF, so if the joint PMF was normalized, the marginal PMF should be, too.
```
joint.to_numpy().sum()
pmf_color.sum()
```
However, due to floating point error, the total might not be exactly 1.
```
pmf_fruit.sum()
```
**Exercise:** The following cells load the data from the General Social Survey that we used in Notebooks 1 and 2.
```
# Load the data file
import os
if not os.path.exists('gss_bayes.csv'):
!wget https://github.com/AllenDowney/BiteSizeBayes/raw/master/gss_bayes.csv
gss = pd.read_csv('gss_bayes.csv', index_col=0)
```
As an exercise, you can use this data to explore the joint distribution of two variables:
* `partyid` encodes each respondent's political affiliation, that is, the party they belong to. [Here's the description](https://gssdataexplorer.norc.org/variables/141/vshow).
* `polviews` encodes their political alignment on a spectrum from liberal to conservative. [Here's the description](https://gssdataexplorer.norc.org/variables/178/vshow).
The values for `partyid` are
```
0 Strong democrat
1 Not str democrat
2 Ind,near dem
3 Independent
4 Ind,near rep
5 Not str republican
6 Strong republican
7 Other party
```
The values for `polviews` are:
```
1 Extremely liberal
2 Liberal
3 Slightly liberal
4 Moderate
5 Slightly conservative
6 Conservative
7 Extremely conservative
```
1. Make a cross tabulation of `gss['partyid']` and `gss['polviews']` and normalize it to make a joint PMF.
2. Use `plot_heatmap` to display a heatmap of the joint distribution. What patterns do you notice?
3. Use `marginal` to compute the marginal distributions of `partyid` and `polviews`, and plot the results.
4. Use `conditional` to compute the conditional distribution of `partyid` for people who identify themselves as "Extremely conservative" (`polviews==7`). What fraction of them are "Strong republicans" (`partyid==6`)?
5. Use `conditional` to compute the conditional distribution of `polviews` for people who identify themselves as "Strong democrat" (`partyid==0`). What fraction of them are "Extremely liberal" (`polviews==1`)?
```
# Solution
xtab2 = pd.crosstab(gss['partyid'], gss['polviews'])
joint2 = xtab2 / xtab2.to_numpy().sum()
# Solution
plot_heatmap(joint2)
plt.xlabel('polviews')
plt.title('Joint distribution of polviews and partyid');
# Solution
marginal(joint2, 'polviews').plot.bar(color='C2')
plt.ylabel('Probability')
plt.title('Distribution of polviews');
# Solution
marginal(joint2, 'partyid').plot.bar(color='C3')
plt.ylabel('Probability')
plt.title('Distribution of partyid');
# Solution
cond1 = conditional(joint2, 'polviews', 7)
cond1.plot.bar(label='Extremely conservative', color='C4')
plt.ylabel('Probability')
plt.title('Distribution of partyid')
cond1[6]
# Solution
cond2 = conditional(joint2, 'partyid', 0)
cond2.plot.bar(label='Strong democrat', color='C6')
plt.ylabel('Probability')
plt.title('Distribution of polviews')
cond2[1]
```
## Review
In this notebook we started with cross tabulation, which we normalized to create a joint distribution, which describes the distribution of two (or more) variables and all of their conditional distributions.
We used heatmaps to visualize cross tabulations and joint distributions.
Then we defined `conditional` and `marginal` functions that take a joint distribution and compute conditional and marginal distributions for each variable.
As an exercise, you had a chance to apply the same methods to explore the relationship between political alignment and party affiliation using data from the General Social Survey.
You might have noticed that we did not use Bayes's Theorem in this notebook. [In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/11_faceoff.ipynb) we'll take the ideas from this notebook and apply them to Bayesian inference.
# Generate Region of Interests (ROI) labeled arrays for simple shapes
This example notebook explains the use of the analysis module "skbeam/core/roi" https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py
```
import skbeam.core.roi as roi
import skbeam.core.correlation as corr
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
import xray_vision.mpl_plotting as mpl_plot
```
### Easily switch between interactive and static matplotlib plots
```
interactive_mode = False
import matplotlib as mpl
if interactive_mode:
%matplotlib notebook
else:
%matplotlib inline
backend = mpl.get_backend()
cmap='viridis'
```
## Draw annular (ring-shaped) regions of interest
```
center = (100., 100.) # center of the rings
# Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (200, 205)
first_q = 10.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 7 # number of Q rings
# step or spacing, spacing between rings
one_step_q = 5.0 # one spacing between rings
step_q = [2.5, 3.0, 5.8] # different spacing between rings
```
### Test when there is same spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=one_step_q,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Same spacing between rings")
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Test when there is different spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=step_q,
num_rings=4)
print("edges when there is different spacing between rings", edges)
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Different spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Test when there is no spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("There is no spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Generate a ROI of Segmented Rings
```
center = (75, 75) # center of the rings
#Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (150, 140)
first_q = 5.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 4 # number of rings
slicing = 4 # number of pie slices or list of angles in radians
spacing = 4 # margin between rings, 0 by default
```
#### find the inner and outer radius of each ring
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=spacing,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Segmented Rings")
axes.set_xlim(38, 120)
axes.set_ylim(38, 120)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
## Segmented rings using list of angles in radians
```
slicing = np.radians([0, 60, 120, 240, 300])
slicing
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Segmented Rings")
axes.set_xlim(38, 120)
axes.set_ylim(38, 120)
im = mpl_plot.show_label_array(axes, label_array, cmap="gray")
plt.show()
```
### Generate a ROI of Pies
```
first_q = 0
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=50, num_rings=1)
edges
slicing = 10 # number of pie slices or list of angles in radians
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Pies")
axes.set_xlim(20, 140)
axes.set_ylim(20, 140)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
## Rectangle regions of interest
```
# Image shape which is used to determine the maximum extent of output pixel coordinates
shape = (15, 26)
# coordinates of the upper-left corner and width and height of each rectangle
roi_data = np.array(([2, 2, 6, 3], [6, 7, 8, 5], [8, 18, 5, 10]),
dtype=np.int64)
#Elements not inside any ROI are zero; elements inside each ROI are 1, 2, 3, corresponding
# to the order they are specified in coords.
label_array = roi.rectangles(roi_data, shape)
roi_inds, pixel_list = roi.extract_label_indices(label_array)
```
## Generate Bar ROIs
```
edges = [[3, 4], [5, 7], [12, 15]]
edges
```
## Create Horizontal bars and Vertical bars
```
h_label_array = roi.bar(edges, (20, 25)) # Horizontal Bars
v_label_array = roi.bar(edges, (20, 25), horizontal=False) # Vertical Bars
```
## Create Box ROIs
```
b_label_array = roi.box((20, 25), edges)
```
## Plot bar ROIs, box ROIs and rectangle ROIs
```
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
axes[1, 0].set_title("Horizontal Bars")
im = mpl_plot.show_label_array(axes[1, 0], h_label_array, cmap)
axes[0, 1].set_title("Vertical Bars")
im = mpl_plot.show_label_array(axes[0, 1], v_label_array, cmap)
axes[1, 1].set_title("Box Rois")
im = mpl_plot.show_label_array(axes[1, 1], b_label_array, cmap)
axes[0, 0].set_title("Rectangle Rois")
im = mpl_plot.show_label_array(axes[0, 0], label_array, cmap)
plt.show()
```
# Create line ROIs
```
label_lines= roi.lines(([0, 45, 50, 256], [56, 60, 80, 150]), (150, 250))
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Lines")
im = mpl_plot.show_label_array(axes, label_lines, cmap)
plt.show()
import skbeam
print(skbeam.__version__)
```
<p align="center">
<img src="http://www.di.uoa.gr/themes/corporate_lite/logo_el.png" title="Department of Informatics and Telecommunications - University of Athens"/> </p>
---
<h1 align="center">
Artificial Intelligence
</h1>
<h1 align="center" >
Deep Learning for Natural Language Processing
</h1>
---
<h2 align="center">
<b>Konstantinos Nikoletos</b>
</h2>
<h3 align="center">
<b>Winter 2020-2021</b>
</h3>
---
---
### __Task__
This exercise is about developing a document retrieval system to return titles of scientific
papers containing the answer to a given user question. You will use the first version of
the COVID-19 Open Research Dataset (CORD-19) in your work (articles in the folder
comm use subset).
For example, for the question “What are the coronaviruses?”, your system can return the
paper title “Distinct Roles for Sialoside and Protein Receptors in Coronavirus Infection”
since this paper contains the answer to the asked question.
To achieve the goal of this exercise, you will need first to read the paper Sentence-BERT:
Sentence Embeddings using Siamese BERT-Networks, in order to understand how you
can create sentence embeddings. In the related work of this paper, you will also find other
approaches for developing your model. For example, you can use GloVe embeddings,
etc. In this link, you can find the extended versions of this dataset to test your model, if
you want. You are required to:
<ol type="a">
<li>Preprocess the provided dataset. You will decide which data of each paper is useful
to your model in order to create the appropriate embeddings. You need to explain
your decisions.</li>
<li>Implement at least 2 different sentence embedding approaches (see the related work
of the Sentence-BERT paper), in order for your model to retrieve the titles of the
papers related to a given question.</li>
<li>Compare your 2 models based on at least 2 different criteria of your choice. Explain
why you selected these criteria, your implementation choices, and the results. Some
questions you can pose are included here. You will need to provide the extra questions
you posed to your model and the results of all the questions as well.</li>
</ol>
### __Notebook__
Same implementation as the Sentence-BERT notebook, but with the addition of cross-encoders, which I have read perform even better.
---
---
__Import__ of essential libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys # only needed to determine Python version number
import matplotlib # only needed to determine Matplotlib version
import nltk
from nltk.stem import WordNetLemmatizer
import pprint
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext import data
import logging
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
```
Selecting device (GPU - CUDA if available)
```
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
```
# Loading data
---
```
# Opening data file
import io
from google.colab import drive
from os import listdir
from os.path import isfile, join
import json
drive.mount('/content/drive',force_remount=True)
```
Loading the dictionary if it has been created
```
#@title Select number of papers that will be feeded in the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000","3000", "6000","9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
CORD19Dictionary = pickle.load(drivef)
```
OR the summary of the papers
```
#@title Select number of summarized papers that will be feeded in the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000", "3000", "6000", "9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_Summarized_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
CORD19Dictionary = pickle.load(drivef)
```
## Queries
---
```
query_list = [
'What are the coronoviruses?',
'What was discovered in Wuhuan in December 2019?',
'What is Coronovirus Disease 2019?',
'What is COVID-19?',
'What is caused by SARS-COV2?', 'How is COVID-19 spread?',
'Where was COVID-19 discovered?','How does coronavirus spread?'
]
proposed_answers = [
'Coronaviruses (CoVs) are common human and animal pathogens that can transmit zoonotically and cause severe respiratory disease syndromes. ',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'Coronavirus Disease 2019 (COVID-19) is an emerging disease with a rapid increase in cases and deaths since its first identification in Wuhan, China, in December 2019.',
'COVID-19 is a viral respiratory illness caused by a new coronavirus called SARS-CoV-2.',
'Coronavirus disease (COVID-19) is caused by SARS-COV2 and represents the causative agent of a potentially fatal disease that is of great global public health concern.',
'First, although COVID-19 is spread by the airborne route, air disinfection of cities and communities is not known to be effective for disease control and needs to be stopped.',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'The new coronavirus was reported to spread via droplets, contact and natural aerosols from human-to-human.'
]
myquery_list = [
"How long can the coronavirus survive on surfaces?",
"What means COVID-19?",
"Is COVID19 worse than flue?",
"When the vaccine will be ready?",
"Whats the proteins that consist COVID-19?",
"Whats the symptoms of COVID-19?",
"How can I prevent COVID-19?",
"What treatments are available for COVID-19?",
"Is hand sanitizer effective against COVID-19?",
"Am I at risk for serious complications from COVID-19 if I smoke cigarettes?",
"Are there any FDA-approved drugs (medicines) for COVID-19?",
"How are people tested?",
"Why is the disease being called coronavirus disease 2019, COVID-19?",
"Am I at risk for COVID-19 from mail, packages, or products?",
"What is community spread?",
"How can I protect myself?",
"What is a novel coronavirus?",
"Was Harry Potter a good magician?"
]
```
# Results dataframes
```
resultsDf = pd.DataFrame(columns=['Number of papers','Embeddings creation time'])
queriesDf = pd.DataFrame(columns=['Query','Proposed_answer','Model_answer','Cosine_similarity'])
queriesDf['Query'] = query_list
queriesDf['Proposed_answer'] = proposed_answers
myQueriesDf = pd.DataFrame(columns=['Query','Model_answer','Cosine_similarity'])
myQueriesDf['Query'] = myquery_list
queriesDf
```
# SBERT
---
```
!pip install -U sentence-transformers
```
# Selecting transformer and Cross Encoder
```
from sentence_transformers import SentenceTransformer, util, CrossEncoder
import torch
import time
encoder = SentenceTransformer('msmarco-distilbert-base-v2')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6')
```
# Initializing corpus
```
corpus = list(CORD19Dictionary.keys())
```
# Creating the embeddings
Encoding the papers
```
%%time
corpus_embeddings = encoder.encode(corpus, convert_to_tensor=True, show_progress_bar=True,device='cuda')
```
# Saving corpus as tensors to drive
```
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
torch.save(corpus_embeddings,corpus_embeddings_path)
```
# Loading embeddings if have been created and saved
---
```
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
with open(corpus_embeddings_path, 'rb') as f:
corpus_embeddings = torch.load(f)
```
# Evaluation
---
```
import re
from nltk import tokenize
from termcolor import colored
def paperTitle(answer,SentenceMap):
record = SentenceMap[answer]
print("Paper title:",record[1])
print("Paper id: ",record[0])
def evaluation(query_list,top_k,resultsDf):
query_answers = []
scores = []
for query in query_list:
#Encode the query using the bi-encoder and find potentially relevant corpus
start_time = time.time()
question_embedding = encoder.encode(query, convert_to_tensor=True,device='cuda')
hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)
hits = hits[0] # Get the hits for the first query
#Now, score all retrieved corpus with the cross_encoder
cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits]
cross_scores = cross_encoder.predict(cross_inp)
#Sort results by the cross-encoder scores
for idx in range(len(cross_scores)):
hits[idx]['cross-score'] = cross_scores[idx]
hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)
end_time = time.time()
#Output of top-5 hits
print("\n\n======================\n\n")
print("Query:",colored(query,'green') )
print("Results (after {:.3f} seconds):".format(end_time - start_time))
iter=0
for hit in hits[0:top_k]:
print("\n-> ",iter+1)
answer = ' '.join([re.sub(r"^\[.*\]", "", x) for x in corpus[hit['corpus_id']].split()])
if len(tokenize.word_tokenize(answer)) > 1:
print("Score: {:.4f}".format(hit['cross-score']))
paperTitle(corpus[hit['corpus_id']],CORD19Dictionary)
print("Anser size: ",len(tokenize.word_tokenize(answer)))
print("Anser: ")
if iter==0:
query_answers.append(answer)
scores.append(hit['cross-score'].item())
iter+=1
print(colored(answer,'yellow'))
resultsDf['Model_answer'] = query_answers
resultsDf['Cosine_similarity'] = scores
top_k = 3
evaluation(query_list,top_k,queriesDf)
top_k = 3
evaluation(myquery_list,top_k,myQueriesDf)
```
# Overall results
## 6000 papers with no summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user 13min 10s
- sys: 5min 40s
- total: 18min 51s
- Wall time: 18min 26s
### Remarks
These are the best results among the notebooks so far: almost 5/7 of the given questions are answered, and 7/17 of my own. I expected even better results, since cross-encoders are reported to greatly enhance the performance of Sentence-BERT.
__Top-k__
The second- and third-ranked hits contain many good answers; I noticed they are often better than the first one. Overall the results are good, and with some tuning they would be close to what is wanted.
### Results
```
with pd.option_context('display.max_colwidth', None):
display(queriesDf)
with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)
```
## 9000 papers with no summarization
---
Session crashed due to RAM
## 6000 papers with paraphrase-distilroberta-base-v1 model and summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 18s
- sys: 22.8 s
- total: 1min 37s
- Wall time: 1min 37s
### Remarks
Poor results. They suggest that the BERT summarizer parameters were not appropriate and that I should experiment with them; the summarization was probably too strict and I may have over-summarized the papers.
__Top-k__
Not good.
### Results
```
with pd.option_context('display.max_colwidth', None):
display(queriesDf)
with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)
```
## 9000 papers with summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 48s
- sys: 32.6 s
- total: 2min 20s
- Wall time: 2min 16s
### Remarks
Again, poor results, which I attribute to my summarization tuning.
** Again, I did not have time to re-run and reprocess the data.
### Results
```
with pd.option_context('display.max_colwidth', None):
display(queriesDf)
with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)
```
# References
[1] https://colab.research.google.com/drive/1l6stpYdRMmeDBK_vw0L5NitdiAuhdsAr?usp=sharing#scrollTo=D_hDi8KzNgMM
[2] https://www.sbert.net/docs/package_reference/cross_encoder.html
### Distributed MCMC Retrieval
This notebook runs the MCMC retrievals on a local cluster using `ipyparallel`.
```
import ipyparallel as ipp
c = ipp.Client(profile='gold')
lview = c.load_balanced_view()
```
## Retrieval Setup
```
%%px
%env ARTS_BUILD_PATH=/home/simonpf/build/arts
%env ARTS_INCLUDE_PATH=/home/simonpf/src/atms_simulations/:/home/simonpf/src/arts/controlfiles
%env ARTS_DATA_PATH=/home/simonpf/src/arts_xml/
%env OMP_NUM_THREADS=1
import sys
sys.path.insert(1,"/home/simonpf/src/atms_simulations/")
sys.path.insert(1, "/home/simonpf/src/typhon/")
import os
os.chdir("/home/simonpf/src/atms_simulations")
# This is important otherwise engines just crash.
import matplotlib; matplotlib.use("agg")
from typhon.arts.workspace import Workspace
import atms
import numpy as np
ws = Workspace()
channels = [0,15,16,17,19]
atms.setup_atmosphere(ws)
atms.setup_sensor(ws, channels)
atms.checks(ws)
ws.yCalc()
%%px
from typhon.arts.workspace import Workspace
import atms
import numpy as np
ws = Workspace()
channels = [0,15,16,17,19]
atms.setup_atmosphere(ws)
atms.setup_sensor(ws, channels)
atms.checks(ws)
ws.yCalc()
```
## A Priori State
The simulations are based on the a priori assumption that the profiles of specific humidity, temperature and ozone vary independently and that the relative variations can be described by log-Gaussian distributions.
```
%%px
qt_mean = np.load("data/qt_mean.npy").ravel()
qt_cov = np.load("data/qt_cov.npy")
qt_cov_inv = np.linalg.inv(qt_cov)
```
## Jumping Functions
The jumping functions are used inside the MCMC iteration and propose new atmospheric states for specific humidity, temperature and ozone, respectively. The proposed states are generated from random walks that use scaled versions of the a priori covariances.
```
%%px
import numpy as np
from typhon.retrieval.mcmc import RandomWalk
c = (1.0 / np.sqrt(qt_mean.size)) ** 2
rw_qt = RandomWalk(c * qt_cov)
def j_qt(ws, x, revert = False):
if revert:
x_new = x
else:
x_new = rw_qt.step(x)
q_new = (np.exp(x_new[14::-1]).reshape((15,)))
q_new = atms.mmr2vmr(ws, q_new, "h2o")
ws.vmr_field.value[0, :, 0, 0] = q_new
ws.t_field.value[:, 0, 0] = x_new[:14:-1]
ws.sst = np.maximum(ws.t_field.value[0, 0, 0], 270.0)
return x_new
```
## A Priori Distributions
These functions return the likelihood (up to an additive constant) of a given state for each of the variables. Note that the states of specific humidity, temperature and ozone are given by the logs of the relative variations.
```
%%px
def p_a_qt(x):
dx = x - qt_mean
l = - 0.5 * np.dot(dx, np.dot(qt_cov_inv, dx))
return l
```
## Measurement Uncertainty
We assume that the uncertainty of the measured brightness temperatures can be described by independent Gaussian errors with a standard deviation of $1 K$.
```
%%px
covmat_y = np.diag(np.ones(len(channels)))
covmat_y_inv = np.linalg.inv(covmat_y)
def p_y(y, yf):
dy = y - yf
l = - 0.5 * np.dot(dy, np.dot(covmat_y_inv, dy))
return l
```
# Running MCMC
### The Simulated Measurement
For the simulated measurement, we sample a state from the a priori distribution of atmospheric states and simulate the measured brightness temperatures.
A simple heuristic is applied to ensure that reasonable acceptance rates are obtained during the MCMC simulations. After the initial burn-in phase, short runs of 200 steps are performed; if the acceptance rate during such a run is too low (below 0.2) or too high (above 0.4), the covariance matrix of the corresponding random walk is scaled by a factor of 0.7 or 1.5, respectively.
```
%%px
def adapt_covariances(a):
if (np.sum(a[:, 0]) / a.shape[0]) < 0.2:
rw_qt.covmat *= 0.7
if (np.sum(a[:, 0]) / a.shape[0]) > 0.4:
rw_qt.covmat *= 1.5
%%px
from typhon.retrieval.mcmc import MCMC
from atms import vmr2cd
dist = atms.StateDistribution()
n_burn_in = 500
n_prod = 5000
drop = 10
from typhon.retrieval.mcmc import MCMC
from atms import vmr2cd
def run_retrieval(i):
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
# Generate True State
dist.sample(ws)
ws.yCalc()
y_true = np.copy(ws.y)
q_true = np.copy(ws.vmr_field.value[0, :, 0, 0].ravel())
t_true = np.copy(ws.t_field.value[:, 0, 0].ravel())
cwv_true = atms.vmr2cd(ws)
dist.a_priori(ws)
qt = np.zeros(qt_mean.size)
# Add Noise
y_true += np.random.randn(*y_true.shape)
#try:
mcmc = MCMC([[qt, p_a_qt, j_qt]], y_true, p_y, [vmr2cd])
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_1, s_1, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_2, s_2, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_3, s_3, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_4, s_4, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_5, s_5, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_6, s_6, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_7, s_7, _, _ = mcmc.run(ws, n_prod)
# Reset covariance matrices.
rw_qt.covmat = np.copy(c * qt_cov)
qt_0 = dist.sample_factors()
_, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, 200)
adapt_covariances(a)
_, _, _, a = mcmc.run(ws, n_burn_in)
hist_8, s_8, _, _ = mcmc.run(ws, n_prod)
profiles_q = np.stack([hist_1[0][::drop, :15],
hist_2[0][::drop, :15],
hist_3[0][::drop, :15],
hist_4[0][::drop, :15],
hist_5[0][::drop, :15],
hist_6[0][::drop, :15],
hist_7[0][::drop, :15],
hist_8[0][::drop, :15]])
profiles_t = np.stack([hist_1[0][::drop, 15:],
hist_2[0][::drop, 15:],
hist_3[0][::drop, 15:],
hist_4[0][::drop, 15:],
hist_5[0][::drop, 15:],
hist_6[0][::drop, 15:],
hist_7[0][::drop, 15:],
hist_8[0][::drop, 15:]])
cwv = np.stack([s_1[::drop], s_2[::drop], s_3[::drop], s_4[::drop],
s_5[::drop],s_6[::drop],s_7[::drop],s_8[::drop]], axis=0)
return y_true, q_true, cwv_true, profiles_q, profiles_t, cwv
```
## Running the Retrievals
```
import numpy as np
ids = np.arange(3500)
rs = lview.map_async(run_retrieval, ids)
from atms import create_output_file
root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 15)
for y_true, h2o_true, cwv_true, profiles_q, profiles_t, cwv in rs:
    if y_true is not None:
t = v_cwv_true.shape[0]
print("saving simulation: " + str(t))
steps=cwv.size
v_y_true[t,:] = y_true
ws.vmr_field.value[0,:,:,:] = h2o_true.reshape(-1,1,1)
v_cwv_true[t] = cwv_true
v_cwv[t, :steps] = cwv[:]
v_h2o[t, :steps,:] = profiles_q.ravel().reshape(-1, 15)
else:
print("failure in simulation: " + str(t))
print(h2o_true)
print(cwv_true)
        print(profiles_q)
import matplotlib_settings
import matplotlib.pyplot as plt
root_group.close()
root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 27)
for i in range(1000, 1100):
plt.plot(v_cwv[i, :])
plt.gca().axhline(v_cwv_true[i], c = 'k', ls = '--')
v_h2o[118, 250:500, :].shape
plt.plot(np.mean(profs_t[2, 0:200], axis = 0), p)
plt.plot(np.mean(profs_t[2, 200:400], axis = 0), p)
plt.title("Temperature Profiles")
plt.xlabel("T [K]")
plt.ylabel("P [hPa]")
plt.gca().invert_yaxis()
p = np.load("data/p_grid.npy")
profiles_t[1, :, :].shape
plt.plot(np.mean(np.exp(profs_q[1, 0:200]) * 18.0 / 28.9, axis = 0), p)
plt.plot(np.mean(np.exp(profs_q[1, 200:400]) * 18.0/ 28.9, axis = 0), p)
plt.gca().invert_yaxis()
plt.title("Water Vapor Profiles")
plt.xlabel("$H_2O$ [mol / mol]")
plt.ylabel("P [hPa]")
```
# Introduction

This notebook provides a demo for using the methods from the paper with new data. If you are new to Colaboratory, please check the following [link](https://medium.com/lean-in-women-in-tech-india/google-colab-the-beginners-guide-5ad3b417dfa) to learn how to run the code.
### Import the required libraries:
```
#import
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
import os
import joblib
import json
import pandas as pd
import numpy as np
###ipywigets
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from sklearn import *
from sklearn.model_selection import *
from sklearn.metrics import *
import nltk
nltk.download('stopwords')
#copy the git clone address here
!git clone https://github.com/binny-mathew/Countering_Hate_Speech.git
#Best binary classifier was XGBclassifier
#Best multilabel classifier was XGBclassifier
best_binary_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_1.joblib.pkl')
best_multiclass_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_3.joblib.pkl')
best_black_classifier = joblib.load('Countering_Hate_Speech/Best_model/black_XGB_classifier_task_2.joblib.pkl')
best_jew_classifier = joblib.load('Countering_Hate_Speech/Best_model/jew_XGB_classifier_task_2.joblib.pkl')
best_lgbt_classifier = joblib.load('Countering_Hate_Speech/Best_model/lgbt_XGB_classifier_task_2.joblib.pkl')
```
### Word Embeddings Loaded Here
```
####downloading the word embeddings
!wget http://nlp.stanford.edu/data/glove.840B.300d.zip
!unzip glove.840B.300d.zip
####extracting the glove model file
#import zipfile
#archive = zipfile.ZipFile('glove.840B.300d.zip', 'r')
GLOVE_MODEL_FILE ='glove.840B.300d.txt'
import numpy as np
## change the embedding dimension according to the model
EMBEDDING_DIM = 300
###change the method type
### method two
def loadGloveModel2(glove_file):
tmp_file = get_tmpfile("test_crawl_200.txt")
# call glove2word2vec script
# default way (through CLI): python -m gensim.scripts.glove2word2vec --input <glove_file> --output <w2v_file>
glove2word2vec(glove_file, tmp_file)
model=KeyedVectors.load_word2vec_format(tmp_file)
return model
word2vec_model = loadGloveModel2(GLOVE_MODEL_FILE)
```
## Dataset is loaded here
```
#@title Select the type of file used
type_of_file = 'X.json' #@param ['X.json','X.csv']
```
### File type information
If the file type is **.json** then each element should contain the following fields:-
1. Community
2. CounterSpeech
3. Category
4. commentText
5. id
If the file type is **.csv** then it must have the following columns:-
1. Community
2. CounterSpeech
3. Category
4. commentText
5. id
Note:- If you don't have the Category or Community, add a dummy element or column.
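For illustration, here is a hedged sketch of what a single element of the `.json` file might look like. Only the field names come from the description above; the values below are made up:
```
# Illustrative example record (hypothetical values)
example_record = {
    "id": "comment_001",
    "Community": "jews",               # one of the target communities
    "CounterSpeech": True,             # truthy -> counter, falsy -> noncounter
    "Category": "1,2",                 # counterspeech categories, if available
    "commentText": "Example YouTube comment text goes here."
}
```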
```
####CHANGE THE PATH OF THE FILE
path_of_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json'
def convert_class_label(input_text):
if input_text:
return 'counter'
else:
return 'noncounter'
if(type_of_file=='X.json'):
with open(path_of_file) as fp:
train_data = json.load(fp)
pd_train = pd.DataFrame(columns=['id','class','community','category','text'])
for count, each in enumerate(train_data):
try:
pd_train.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']]
except:
pass
print('Training Data Loading Completed...')
elif(type_of_file=='X.csv'):
    pd_train=pd.read_csv(path_of_file)
pd_train.head()
#@title How your dataframe should look like after extraction {display-mode: "form"}
# This code will be hidden when the notebook is loaded.
path_of_data_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json'
def convert_class_label(input_text):
if input_text:
return 'counter'
else:
return 'noncounter'
with open(path_of_data_file) as fp:
train_data = json.load(fp)
pd_train_sample = pd.DataFrame(columns=['id','class','community','category','text'])
for count, each in enumerate(train_data):
try:
pd_train_sample.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']]
except:
pass
print('Training Data Loading Completed...')
pd_train_sample.head()
pd_train['text'].replace('', np.nan, inplace=True)
pd_train.dropna(subset=['text'], inplace=True)
import sys
####features module has the necessary function for feature generation
from Countering_Hate_Speech.utils import features
from Countering_Hate_Speech.utils import multi_features
###tokenize module has the tokenization funciton
from Countering_Hate_Speech.utils.tokenize import *
###helper prints confusion matrix and stores results
from Countering_Hate_Speech.utils.helper import *
###common preprocessing imports
from Countering_Hate_Speech.utils.commen_preprocess import *
```
#### The next few sections cover three different classifiers, namely -
* Binary classification
* Multilabel classification
* Cross-community classification
You can run the cells corresponding to the result you want to analyse.
### **Binary Classification**
```
X,y= features.combine_tf_rem_google_rem_embed(pd_train,word2vec_model)
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y:
temp.append(label_map[data])
y=np.array(temp)
y_pred=best_binary_classifier.predict(X)
report = classification_report(y, y_pred)
cm=confusion_matrix(y, y_pred)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('Confusion_matrix.png')
df_result=pandas_classification_report(y,y_pred)
df_result.to_csv('Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
```
### **Multilabel Classification**
```
import scipy
pd_train_multilabel =pd_train.copy()
pd_train_multilabel =pd_train_multilabel[pd_train_multilabel['category']!='Default']
list1=[[],[],[],[],[],[],[],[],[],[]]
for ele in pd_train_multilabel['category']:
temp=[]
if type(ele) is int:
ele =str(ele)
for i in range(0,len(ele),2):
temp.append(ord(ele[i])-ord('0'))
#print(temp)
if(len(temp)==0):
print(temp)
for i in range(0,10):
if i+1 in temp:
list1[i].append(1)
else:
list1[i].append(0)
y_train=np.array([np.array(xi) for xi in list1])
### final dataframe for the task created
pd_train_multilabel = pd.DataFrame({'text':list(pd_train_multilabel['text']),'cat0':list1[0],'cat1':list1[1],'cat2':list1[2],'cat3':list1[3],'cat4':list1[4],'cat5':list1[5],'cat6':list1[6],'cat7':list1[7],'cat8':list1[8],'cat9':list1[9]})
### drop the entries having blank entries
pd_train_multilabel['text'].replace('', np.nan, inplace=True)
pd_train_multilabel.dropna(subset=['text'], inplace=True)
X,y= multi_features.combine_tf_rem_google_rem_embed(pd_train_multilabel,word2vec_model)
path='multilabel_res'
os.makedirs(path, exist_ok=True)
X = np.array(X)
y = np.array(y)
y_pred = best_multiclass_classifier.predict(X)
if(scipy.sparse.issparse(y_pred)):
ham,acc,pre,rec,f1=calculate_score(y,y_pred.toarray())
accuracy_test=accuracy_score(y,y_pred.toarray())
else:
ham,acc,pre,rec,f1=calculate_score(y,y_pred)
accuracy_test=my_accuracy_score(y,y_pred)
for i in range(10):
df_result=pandas_classification_report(y[:,i],y_pred[:,i])
df_result.to_csv(path+'/report'+str(i)+'.csv')
f = open(path+'/final_report.txt', "w")
f.write("best_model")
f.write("The hard metric score is :- " + str(accuracy_test))
f.write("The accuracy is :- " + str(acc))
f.write("The precision is :- " + str(pre))
f.write("The recall is :- " + str(rec))
f.write("The f1_score is :- " + str(f1))
f.write("The hamming loss is :-" + str(ham))
f.close()
!zip -r mulitlabel_results.zip multilabel_res
```
### **Cross Community Classification**
```
pd_cross=pd_train.copy()
part_j=pd_cross.loc[pd_train['community']=='jews']
part_b=pd_cross.loc[pd_train['community']=='black']
part_l=pd_cross.loc[pd_train['community']=='lgbt']
X_black,y_black= features.combine_tf_rem_google_rem_embed(part_b,word2vec_model)
X_jew,y_jew= features.combine_tf_rem_google_rem_embed(part_j,word2vec_model)
X_lgbt,y_lgbt= features.combine_tf_rem_google_rem_embed(part_l,word2vec_model)
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_black:
temp.append(label_map[data])
y_black=np.array(temp)
y_pred_black=best_black_classifier.predict(X_black)
report = classification_report(y_black, y_pred_black)
cm=confusion_matrix(y_black, y_pred_black)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('black_Confusion_matrix.png')
df_result=pandas_classification_report(y_black,y_pred_black)
df_result.to_csv('black_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_jew:
temp.append(label_map[data])
y_jew=np.array(temp)
y_pred_jew=best_jew_classifier.predict(X_jew)
report = classification_report(y_jew, y_pred_jew)
cm=confusion_matrix(y_jew, y_pred_jew)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('jew_Confusion_matrix.png')
df_result=pandas_classification_report(y_jew,y_pred_jew)
df_result.to_csv('jew_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_lgbt:
temp.append(label_map[data])
y_lgbt=np.array(temp)
y_pred_lgbt=best_lgbt_classifier.predict(X_lgbt)
report = classification_report(y_lgbt, y_pred_lgbt)
cm=confusion_matrix(y_lgbt, y_pred_lgbt)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('lgbt_Confusion_matrix.png')
df_result=pandas_classification_report(y_lgbt,y_pred_lgbt)
df_result.to_csv('lgbt_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
```
## Outline
* Recap of data
* Feedforward network with Pytorch tensors and autograd
* Using Pytorch's NN -> Functional, Linear, Sequential & Pytorch's Optim
* Moving things to CUDA
```
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
import time
from IPython.display import HTML
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
import torch
torch.manual_seed(0)
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"])
```
## Generate Dataset
```
data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
print(X_train.shape, X_val.shape, labels.shape)
```
## Using torch tensors and autograd
```
X_train, Y_train, X_val, Y_val = map(torch.tensor, (X_train, Y_train, X_val, Y_val))
print(X_train.shape, Y_train.shape)
def model(x):
a1 = torch.matmul(x, weights1) + bias1 # (N, 2) x (2, 2) -> (N, 2)
h1 = a1.sigmoid() # (N, 2)
a2 = torch.matmul(h1, weights2) + bias2 # (N, 2) x (2, 4) -> (N, 4)
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1) # (N, 4)
return h2
y_hat = torch.tensor([[0.1, 0.2, 0.3, 0.4], [0.8, 0.1, 0.05, 0.05]])
y = torch.tensor([2, 0])
(-y_hat[range(y_hat.shape[0]), y].log()).mean().item()
(torch.argmax(y_hat, dim=1) == y).float().mean().item()
def loss_fn(y_hat, y):
return -(y_hat[range(y.shape[0]), y].log()).mean()
def accuracy(y_hat, y):
pred = torch.argmax(y_hat, dim=1)
return (pred == y).float().mean()
torch.manual_seed(0)
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 4) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(4, requires_grad=True)
learning_rate = 0.2
epochs = 10000
X_train = X_train.float()
Y_train = Y_train.long()
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = model(X_train)
loss = loss_fn(y_hat, Y_train)
loss.backward()
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
with torch.no_grad():
weights1 -= weights1.grad * learning_rate
bias1 -= bias1.grad * learning_rate
weights2 -= weights2.grad * learning_rate
bias2 -= bias2.grad * learning_rate
weights1.grad.zero_()
bias1.grad.zero_()
weights2.grad.zero_()
bias2.grad.zero_()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
```
## Using NN.Functional
```
import torch.nn.functional as F
torch.manual_seed(0)
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 4) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(4, requires_grad=True)
learning_rate = 0.2
epochs = 10000
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = model(X_train)
loss = F.cross_entropy(y_hat, Y_train)
loss.backward()
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
with torch.no_grad():
weights1 -= weights1.grad * learning_rate
bias1 -= bias1.grad * learning_rate
weights2 -= weights2.grad * learning_rate
bias2 -= bias2.grad * learning_rate
weights1.grad.zero_()
bias1.grad.zero_()
weights2.grad.zero_()
bias2.grad.zero_()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
```
## Using NN.Parameter
```
import torch.nn as nn
class FirstNetwork(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.weights1 = nn.Parameter(torch.randn(2, 2) / math.sqrt(2))
self.bias1 = nn.Parameter(torch.zeros(2))
self.weights2 = nn.Parameter(torch.randn(2, 4) / math.sqrt(2))
self.bias2 = nn.Parameter(torch.zeros(4))
def forward(self, X):
a1 = torch.matmul(X, self.weights1) + self.bias1
h1 = a1.sigmoid()
a2 = torch.matmul(h1, self.weights2) + self.bias2
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1)
return h2
def fit(epochs = 1000, learning_rate = 1):
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = fn(X_train)
loss = F.cross_entropy(y_hat, Y_train)
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
loss.backward()
with torch.no_grad():
for param in fn.parameters():
param -= learning_rate * param.grad
fn.zero_grad()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
fn = FirstNetwork()
fit()
```
## Using NN.Linear and Optim
```
class FirstNetwork_v1(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.lin1 = nn.Linear(2, 2)
self.lin2 = nn.Linear(2, 4)
def forward(self, X):
a1 = self.lin1(X)
h1 = a1.sigmoid()
a2 = self.lin2(h1)
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1)
return h2
fn = FirstNetwork_v1()
fit()
from torch import optim
def fit_v1(epochs = 1000, learning_rate = 1):
loss_arr = []
acc_arr = []
opt = optim.SGD(fn.parameters(), lr=learning_rate)
for epoch in range(epochs):
y_hat = fn(X_train)
loss = F.cross_entropy(y_hat, Y_train)
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
loss.backward()
opt.step()
opt.zero_grad()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
fn = FirstNetwork_v1()
fit_v1()
```
## Using NN.Sequential
```
class FirstNetwork_v2(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 2),
nn.Sigmoid(),
nn.Linear(2, 4),
nn.Softmax()
)
def forward(self, X):
return self.net(X)
fn = FirstNetwork_v2()
fit_v1()
def fit_v2(x, y, model, opt, loss_fn, epochs = 1000):
for epoch in range(epochs):
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
fn = FirstNetwork_v2()
loss_fn = F.cross_entropy
opt = optim.SGD(fn.parameters(), lr=1)
fit_v2(X_train, Y_train, fn, opt, loss_fn)
```
## Running it on GPUs
```
device = torch.device("cuda")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn = FirstNetwork_v2()
fn.to(device)
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
toc = time.time()
print('Time taken', toc - tic)
class FirstNetwork_v3(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 1024*4),
nn.Sigmoid(),
nn.Linear(1024*4, 4),
nn.Softmax()
)
def forward(self, X):
return self.net(X)
device = torch.device("cpu")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn = FirstNetwork_v3()
fn.to(device)
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
toc = time.time()
print('Time taken', toc - tic)
```
## Exercises
1. Try out a deeper neural network, eg. 2 hidden layers
2. Try out different parameters in the optimizer (eg. try momentum, nesterov) -> check `optim.SGD` docs (see the sketch after this list)
3. Try out other optimization methods (eg. RMSProp and Adam) which are supported in `optim`
4. Try out different initialisation methods which are supported in `nn.init`
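Below is a minimal sketch of what exercises 2–4 might look like. The learning rates, momentum value and init choices are arbitrary illustrative picks, not tuned recommendations.
```
# Sketch for exercises 2-4: explicit weight initialisation plus alternative optimizers.
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # also try nn.init.kaiming_uniform_, etc.
        nn.init.zeros_(m.bias)

# Exercise 2: SGD with momentum and Nesterov acceleration
fn = FirstNetwork_v2()
fn.apply(init_weights)
opt = optim.SGD(fn.parameters(), lr=0.5, momentum=0.9, nesterov=True)
print('SGD + Nesterov loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))

# Exercise 3: Adam (RMSProp would be optim.RMSprop(fn.parameters(), lr=0.01))
fn = FirstNetwork_v2()
fn.apply(init_weights)
opt = optim.Adam(fn.parameters(), lr=0.01)
print('Adam loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
```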
# Grid algorithm for the beta-binomial hierarchical model
[Bayesian Inference with PyMC](https://allendowney.github.io/BayesianInferencePyMC)
Copyright 2021 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install PyMC and ArviZ
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pymc3
!pip install arviz
# PyMC generates a FutureWarning we don't need to deal with yet
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import seaborn as sns
def plot_hist(sample, **options):
"""Plot a histogram of goals.
sample: sequence of values
"""
sns.histplot(sample, stat='probability', discrete=True,
alpha=0.5, **options)
def plot_kde(sample, **options):
"""Plot a distribution using KDE.
sample: sequence of values
"""
sns.kdeplot(sample, cut=0, **options)
import matplotlib.pyplot as plt
def legend(**options):
"""Make a legend only if there are labels."""
handles, labels = plt.gca().get_legend_handles_labels()
if len(labels):
plt.legend(**options)
def decorate(**options):
plt.gca().set(**options)
legend()
plt.tight_layout()
def decorate_heads(ylabel='Probability'):
"""Decorate the axes."""
plt.xlabel('Number of heads (k)')
plt.ylabel(ylabel)
plt.title('Distribution of heads')
legend()
def decorate_proportion(ylabel='Likelihood'):
"""Decorate the axes."""
plt.xlabel('Proportion of heads (x)')
plt.ylabel(ylabel)
plt.title('Distribution of proportion')
legend()
from empiricaldist import Cdf
def compare_cdf(pmf, sample):
pmf.make_cdf().plot(label='grid')
Cdf.from_seq(sample).plot(label='mcmc')
print(pmf.mean(), sample.mean())
decorate()
```
## The Grid Algorithm
```
import numpy as np
from scipy.stats import gamma
alpha = 4
beta = 0.5
qs = np.linspace(0.1, 25, 100)
ps = gamma(alpha, scale=1/beta).pdf(qs)
from empiricaldist import Pmf
prior_alpha = Pmf(ps, qs)
prior_alpha.normalize()
prior_alpha.index.name = 'alpha'
prior_alpha.shape
prior_alpha.plot()
prior_alpha.mean()
qs = np.linspace(0.1, 25, 90)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_beta = Pmf(ps, qs)
prior_beta.normalize()
prior_beta.index.name = 'beta'
prior_beta.shape
prior_beta.plot()
prior_beta.mean()
def make_hyper(prior_alpha, prior_beta):
PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij')
hyper = PA * PB
return hyper
hyper = make_hyper(prior_alpha, prior_beta)
hyper.shape
import pandas as pd
from utils import plot_contour
plot_contour(pd.DataFrame(hyper))
```
## Make Prior
```
from scipy.stats import beta as betadist
xs = np.linspace(0.01, 0.99, 80)
prior_x = Pmf(betadist.pdf(xs, 2, 2), xs)
prior_x.plot()
from scipy.stats import beta as betadist
def make_prior(hyper, prior_alpha, prior_beta, xs):
A, B, X = np.meshgrid(prior_alpha.qs, prior_beta.qs, xs, indexing='ij')
ps = betadist.pdf(X, A, B)
totals = ps.sum(axis=2)
nc = hyper / totals
shape = nc.shape + (1,)
prior = ps * nc.reshape(shape)
return prior
xs = np.linspace(0.01, 0.99, 80)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
prior.sum()
def marginal(joint, axis):
axes = [i for i in range(3) if i != axis]
return joint.sum(axis=tuple(axes))
prior_a = Pmf(marginal(prior, 0), prior_alpha.qs)
prior_alpha.plot()
prior_a.plot()
prior_a.mean()
prior_b = Pmf(marginal(prior, 1), prior_beta.qs)
prior_beta.plot()
prior_b.plot()
prior_x = Pmf(marginal(prior, 2), xs)
prior_x.plot()
```
## The Update
```
from scipy.stats import binom
n = 250
ks = 140
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
like_x.shape
plt.plot(xs, like_x)
def update(prior, data):
n, ks = data
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
posterior = prior * like_x
posterior /= posterior.sum()
return posterior
data = 250, 140
posterior = update(prior, data)
marginal_x = Pmf(marginal(posterior, 2), xs)
marginal_x.plot()
marginal_x.mean()
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
marginal_alpha.plot()
marginal_alpha.mean()
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
marginal_beta.plot()
marginal_beta.mean()
```
## One coin with PyMC
```
import pymc3 as pm
n = 250
with pm.Model() as model1:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
x1 = pm.Beta('x1', alpha, beta)
k1 = pm.Binomial('k1', n=n, p=x1, observed=140)
pred = pm.sample_prior_predictive(1000)
```
Here's the graphical representation of the model.
```
pm.model_to_graphviz(model1)
from utils import kde_from_sample
kde_from_sample(pred['alpha'], prior_alpha.qs).plot()
prior_alpha.plot()
kde_from_sample(pred['beta'], prior_beta.qs).plot()
prior_beta.plot()
kde_from_sample(pred['x1'], prior_x.qs).plot()
prior_x.plot()
```
Now let's run the sampler.
```
with model1:
trace1 = pm.sample(500)
```
Here are the posterior distributions compared with the grid marginals.
```
compare_cdf(marginal_alpha, trace1['alpha'])
compare_cdf(marginal_beta, trace1['beta'])
compare_cdf(marginal_x, trace1['x1'])
```
## Two coins
```
def get_hyper(joint):
return joint.sum(axis=2)
posterior_hyper = get_hyper(posterior)
posterior_hyper.shape
prior2 = make_prior(posterior_hyper, prior_alpha, prior_beta, xs)
data = 250, 110
posterior2 = update(prior2, data)
marginal_alpha2 = Pmf(marginal(posterior2, 0), prior_alpha.qs)
marginal_alpha2.plot()
marginal_alpha2.mean()
marginal_beta2 = Pmf(marginal(posterior2, 1), prior_beta.qs)
marginal_beta2.plot()
marginal_beta2.mean()
marginal_x2 = Pmf(marginal(posterior2, 2), xs)
marginal_x2.plot()
marginal_x2.mean()
```
## Two coins with PyMC
```
with pm.Model() as model2:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
x1 = pm.Beta('x1', alpha, beta)
x2 = pm.Beta('x2', alpha, beta)
k1 = pm.Binomial('k1', n=n, p=x1, observed=140)
k2 = pm.Binomial('k2', n=n, p=x2, observed=110)
```
Here's the graph for this model.
```
pm.model_to_graphviz(model2)
```
Let's run the sampler.
```
with model2:
trace2 = pm.sample(500)
```
And here are the results.
```
kde_from_sample(trace2['alpha'], marginal_alpha.qs).plot()
marginal_alpha2.plot()
trace2['alpha'].mean(), marginal_alpha2.mean()
kde_from_sample(trace2['beta'], marginal_beta.qs).plot()
marginal_beta2.plot()
trace2['beta'].mean(), marginal_beta2.mean()
kde_from_sample(trace2['x2'], marginal_x.qs).plot()
marginal_x2.plot()
```
## Heart Attack Data
This example is based on [Chapter 10 of *Probability and Bayesian Modeling*](https://bayesball.github.io/BOOK/bayesian-hierarchical-modeling.html#example-deaths-after-heart-attack); it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a `DataFrame`.
```
import os
filename = 'DeathHeartAttackManhattan.csv'
if not os.path.exists(filename):
!wget https://github.com/AllenDowney/BayesianInferencePyMC/raw/main/DeathHeartAttackManhattan.csv
import pandas as pd
df = pd.read_csv(filename)
df
```
The columns we need are `Cases`, which is the number of patients treated at each hospital, and `Deaths`, which is the number of those patients who died.
```
# shuffled = df.sample(frac=1)
data_ns = df['Cases'].values
data_ks = df['Deaths'].values
```
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
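Written out explicitly (a sketch that mirrors the priors used in the code below), the model is:
\begin{align*}
\alpha &\sim \mathrm{Gamma}(4,\ 0.5) \\
\beta &\sim \mathrm{Gamma}(4,\ 0.5) \\
x_i &\sim \mathrm{Beta}(\alpha, \beta) \\
k_i &\sim \mathrm{Binomial}(n_i, x_i)
\end{align*}
where $n_i$ is the number of cases and $k_i$ the number of deaths at hospital $i$.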
## Hospital Data with grid
```
alpha = 4
beta = 0.5
qs = np.linspace(0.1, 25, 100)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_alpha = Pmf(ps, qs)
prior_alpha.normalize()
prior_alpha.index.name = 'alpha'
qs = np.linspace(0.1, 50, 90)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_beta = Pmf(ps, qs)
prior_beta.normalize()
prior_beta.index.name = 'beta'
prior_beta.shape
prior_alpha.plot()
prior_beta.plot()
prior_alpha.mean()
hyper = make_hyper(prior_alpha, prior_beta)
hyper.shape
xs = np.linspace(0.01, 0.99, 80)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
prior.shape
for data in zip(data_ns, data_ks):
print(data)
posterior = update(prior, data)
hyper = get_hyper(posterior)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
marginal_alpha.plot()
marginal_alpha.mean()
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
marginal_beta.plot()
marginal_beta.mean()
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
marginal_x.plot()
marginal_x.mean()
```
## Hospital Data with PyMC
```
with pm.Model() as model4:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
xs = pm.Beta('xs', alpha, beta, shape=len(data_ns))
ks = pm.Binomial('ks', n=data_ns, p=xs, observed=data_ks)
trace4 = pm.sample(500)
```
Here's the graph representation of the model, showing that the observable is an array of 13 values.
```
pm.model_to_graphviz(model4)
```
Here are the results compared with the grid marginals.
```
kde_from_sample(trace4['alpha'], marginal_alpha.qs).plot()
marginal_alpha.plot()
trace4['alpha'].mean(), marginal_alpha.mean()
kde_from_sample(trace4['beta'], marginal_beta.qs).plot()
marginal_beta.plot()
trace4['beta'].mean(), marginal_beta.mean()
trace_xs = trace4['xs'].transpose()
trace_xs.shape
kde_from_sample(trace_xs[-1], marginal_x.qs).plot()
marginal_x.plot()
trace_xs[-1].mean(), marginal_x.mean()
xs = np.linspace(0.01, 0.99, 80)
hyper = get_hyper(posterior)
post_all = make_prior(hyper, prior_alpha, prior_beta, xs)
def forget(posterior, data):
n, ks = data
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
prior = posterior / like_x
prior /= prior.sum()
return prior
def get_marginal_x(post_all, data):
prior = forget(post_all, data)
hyper = get_hyper(prior)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
posterior = update(prior, data)
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
return marginal_x
data = 270, 16
marginal_x = get_marginal_x(post_all, data)
kde_from_sample(trace_xs[0], marginal_x.qs).plot()
marginal_x.plot()
trace_xs[0].mean(), marginal_x.mean()
```
## One at a time
```
prior.shape, prior.sum()
likelihood = np.empty((len(df), len(xs)))
for i, row in df.iterrows():
n = row['Cases']
k = row['Deaths']
likelihood[i] = binom.pmf(k, n, xs)
prod = likelihood.prod(axis=0)
prod.shape
i = 3
all_but_one = prod / likelihood[i]
prior
hyper_i = get_hyper(prior * all_but_one)
hyper_i.sum()
prior_i = make_prior(hyper_i, prior_alpha, prior_beta, xs)
data = df.loc[i, 'Cases'], df.loc[i, 'Deaths']
data
posterior_i = update(prior_i, data)
marginal_alpha = Pmf(marginal(posterior_i, 0), prior_alpha.qs)
marginal_beta = Pmf(marginal(posterior_i, 1), prior_beta.qs)
marginal_x = Pmf(marginal(posterior_i, 2), prior_x.qs)
compare_cdf(marginal_alpha, trace4['alpha'])
compare_cdf(marginal_beta, trace4['beta'])
compare_cdf(marginal_x, trace_xs[i])
```
# BB84 Quantum Key Distribution (QKD) Protocol using Qiskit
This notebook is a _demonstration_ of the BB84 Protocol for QKD using Qiskit.
BB84 is a quantum key distribution scheme developed by Charles Bennett and Gilles Brassard in 1984 ([paper]).
The first three sections of the paper are quite readable and should give you all the necessary information.

[paper]: http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import *
```
## Choosing bases and encoding states
Alice generates two binary strings. One encodes the basis for each qubit:
$0 \rightarrow$ Computational basis
$1 \rightarrow$ Hadamard basis
The other encodes the state:
$0 \rightarrow|0\rangle$ or $|+\rangle $
$1 \rightarrow|1\rangle$ or $|-\rangle $
Bob also generates a binary string and uses the same convention to choose a basis for measurement
```
num_qubits = 32
alice_basis = np.random.randint(2, size=num_qubits)
alice_state = np.random.randint(2, size=num_qubits)
bob_basis = np.random.randint(2, size=num_qubits)
print(f"Alice's State:\t {np.array2string(alice_state, separator='')}")
print(f"Alice's Bases:\t {np.array2string(alice_basis, separator='')}")
print(f"Bob's Bases:\t {np.array2string(bob_basis, separator='')}")
```
## Creating the circuit
Based on the following results:
$X|0\rangle = |1\rangle$
$H|0\rangle = |+\rangle$
$ HX|0\rangle = |-\rangle$
Our algorithm to construct the circuit is as follows:
1. Whenever Alice wants to encode 1 in a qubit, she applies an $X$ gate to the qubit. To encode 0, no action is needed.
2. Wherever she wants to encode it in the Hadamard basis, she applies an $H$ gate. No action is necessary to encode a qubit in the computational basis.
3. She then _sends_ the qubits to Bob (symbolically represented in this circuit using wires)
4. Bob measures the qubits according to his binary string. To measure a qubit in the Hadamard basis, he applies an $H$ gate to the corresponding qubit and then performs a measurement in the computational basis.
```
def make_bb84_circ(enc_state, enc_basis, meas_basis):
'''
enc_state: array of 0s and 1s denoting the state to be encoded
enc_basis: array of 0s and 1s denoting the basis to be used for encoding
0 -> Computational Basis
1 -> Hadamard Basis
meas_basis: array of 0s and 1s denoting the basis to be used for measurement
0 -> Computational Basis
1 -> Hadamard Basis
'''
num_qubits = len(enc_state)
bb84_circ = QuantumCircuit(num_qubits)
# Sender prepares qubits
for index in range(len(enc_basis)):
if enc_state[index] == 1:
bb84_circ.x(index)
if enc_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Receiver measures the received qubits
for index in range(len(meas_basis)):
if meas_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
return bb84_circ
```
## Creating the key
Alice and Bob only keep the bits where their bases match.
The following outcomes are possible for each bit sent using the BB84 protocol
| Alice's bit | Alice's basis | Alice's State | Bob's basis | Bob's outcome | Bob's bit | Probability |
|---------------------- |------------------------ |------------------------ |---------------------- |------------------------ |-------------------- |-------------------- |
| 0 | C | 0 | C | 0 | 0 | 1/8 |
| 0 | C | 0 | H | + | 0 | 1/16 |
| 0 | C | 0 | H | - | 1 | 1/16 |
| 0 | H | + | C | 0 | 0 | 1/16 |
| 0 | H | + | C | 1 | 1 | 1/16 |
| 0 | H | + | H | + | 0 | 1/8 |
| 1 | C | 1 | C | 1 | 1 | 1/8 |
| 1 | C | 1 | H | + | 0 | 1/16 |
| 1 | C | 1 | H | - | 1 | 1/16 |
| 1 | H | - | C | 0 | 0 | 1/16 |
| 1 | H | - | C | 1 | 1 | 1/16 |
| 1 | H | - | H | - | 1 | 1/8 |
\begin{align*}
P_{\text{same basis}} &= P_A(C)\times P_B(C) + P_A(H)\times P_B(H)\\
&= \frac{1}{2} \times \frac{1}{2} + \frac{1}{2} \times \frac{1}{2} \\
&= \frac{1}{2}
\end{align*}
Thus, on average, only half of the total bits will be in the final key. It is also interesting to note that half of the key bits will be 0 and the other half will be 1 (again, on average)
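As a quick sanity check (not part of the protocol itself), we can count at how many positions Alice and Bob happened to choose the same basis; this is the length the sifted key will have, and on average it should be about `num_qubits / 2`:
```
# Illustrative check: the sifted key keeps only positions where the bases match
matching_bases = np.sum(alice_basis == bob_basis)
print(f"Bases match in {matching_bases} of {num_qubits} positions "
      f"(expected about {num_qubits // 2} on average)")
```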
```
bb84_circ = make_bb84_circ(alice_state, alice_basis, bob_basis)
temp_key = execute(bb84_circ.reverse_bits(),backend=QasmSimulator(),shots=1).result().get_counts().most_frequent()
key = ''
for i in range(num_qubits):
if alice_basis[i] == bob_basis[i]: # Only choose bits where Alice and Bob chose the same basis
key += str(temp_key[i])
print(f'The length of the key is {len(key)}')
print(f"The key contains {(key).count('0')} zeroes and {(key).count('1')} ones")
print(f"Key: {key}")
```
My family know I like puzzles so they gave me this one recently:

When you take it out the box it looks like this:

And very soon after it looked like this (which explains why I've christened the puzzle "the snake puzzle"):

The way it works is that there is a piece of elastic running through each block. On the majority of the blocks the elastic runs straight through, but on some of them it goes through a 90 degree bend. The puzzle is to make it back into a cube.
After playing with it a while, I realised that it really is quite hard so I decided to write a program to solve it.
The first thing to do is find a representation for the puzzle. Here is the one I chose.
```
# definition - number of straight bits, before 90 degree bend
snake = [3,2,2,2,1,1,1,2,2,1,1,2,1,2,1,1,2]
assert sum(snake) == 27
```
If you look at the picture of it above where it is flattened you can see where the numbers came from. Start from the right hand side.
That also gives us a way of calculating how many combinations there are. At each 90 degree joint, there are 4 possible rotations (ignoring the rotations of the 180 degree blocks) so there are
```
4**len(snake)
```
17 billion combinations (4**17 = 17,179,869,184, to be exact). That will include some rotations and reflections, but either way it is a big number.
However it is very easy to know when you've gone wrong with this kind of puzzle - as soon as you place a piece outside of the boundary of the 3x3x3 block you know it is wrong and should try something different.
So how to represent the solution? The way I've chosen is to represent it as a 5x5x5 cube. This is larger than it needs to be but if we fill in the edges then we don't need to do any complicated comparisons to see if a piece is out of bounds. This is a simple trick but it saves a lot of code.
I've also chosen to represent the 3d structure not as a 3d array but as a 1D array (or `list` in python speak) of length 5*5*5 = 125.
To move in the `x` direction you add 1, to move in the `y` direction you add 5 and to move in the `z` direction you move 25. This simplifies the logic of the solver considerably - we don't need to deal with vectors.
The basic definitions of the cube look like this:
```
N = 5
xstride=1 # number of pieces to move in the x direction
ystride=N # number of pieces to move in the y direction
zstride=N*N # number of pieces to move in the z direction
```
In our `list` we will represent empty space with `0` and space which can't be used with `-1`.
```
empty = 0
```
Now define the empty cube with the boundary round the edges.
```
# Define cube as 5 x 5 x 5 with filled in edges but empty middle for
# easy edge detection
top = [-1]*N*N
middle = [-1]*5 + [-1,0,0,0,-1]*3 + [-1]*5
cube = top + middle*3 + top
```
We're going to want a function to turn `x, y, z` co-ordinates into an index in the `cube` list.
```
def pos(x, y, z):
"""Convert x,y,z into position in cube list"""
return x+y*ystride+z*zstride
```
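As a quick, purely illustrative check of the stride arithmetic described above: moving one step in x, y or z changes the list index by exactly one stride.
```
# Moving one step in x, y or z changes the list index by the corresponding stride
assert pos(2, 2, 2) - pos(1, 2, 2) == xstride
assert pos(1, 3, 2) - pos(1, 2, 2) == ystride
assert pos(1, 2, 3) - pos(1, 2, 2) == zstride
```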
So let's see what that cube looks like...
```
def print_cube(cube, margin=1):
"""Print the cube"""
for z in range(margin,N-margin):
for y in range(margin,N-margin):
for x in range(margin,N-margin):
v = cube[pos(x,y,z)]
if v == 0:
s = " . "
else:
s = "%02d " % v
print(s, sep="", end="")
print()
print()
print_cube(cube, margin = 0)
```
Normally we'll print it without the margin.
Now let's work out how to place a segment.
Assuming that the last piece was placed at `position` we want to place a segment of `length` in `direction`. Note the `assert` to check we aren't placing stuff on top of previous things, or out of the edges.
```
def place(cube, position, direction, length, piece_number):
"""Place a segment in the cube"""
for _ in range(length):
position += direction
assert cube[position] == empty
cube[position] = piece_number
piece_number += 1
return position
```
Let's just try placing some segments and see what happens.
```
cube2 = cube[:] # copy the cube
place(cube2, pos(0,1,1), xstride, 3, 1)
print_cube(cube2)
place(cube2, pos(3,1,1), ystride, 2, 4)
print_cube(cube2)
place(cube2, pos(3,3,1), zstride, 2, 6)
print_cube(cube2)
```
The next thing we'll need is to undo a place. You'll see why in a moment.
```
def unplace(cube, position, direction, length):
"""Remove a segment from the cube"""
for _ in range(length):
position += direction
cube[position] = empty
unplace(cube2, pos(3,3,1), zstride, 2)
print_cube(cube2)
```
Now let's write a function which returns whether a move is valid given a current `position` and a `direction` and a `length` of the segment we are trying to place.
```
def is_valid(cube, position, direction, length):
"""Returns True if a move is valid"""
for _ in range(length):
position += direction
if cube[position] != empty:
return False
return True
is_valid(cube2, pos(3,3,1), zstride, 2)
is_valid(cube2, pos(3,3,1), zstride, 3)
```
Given `is_valid` it is now straightforward to work out what moves are possible at a given time, given a `cube` with a `position`, a `direction` and a `length` we are trying to place.
```
# directions next piece could go in
directions = [xstride, -xstride, ystride, -ystride, zstride, -zstride]
def moves(cube, position, direction, length):
"""Returns the valid moves for the current position"""
valid_moves = []
for new_direction in directions:
# Can't carry on in same direction, or the reverse of the same direction
if new_direction == direction or new_direction == -direction:
continue
if is_valid(cube, position, new_direction, length):
valid_moves.append(new_direction)
return valid_moves
moves(cube2, pos(3,3,1), ystride, 2)
```
So that is telling us that you can insert a segment of length 2 using a direction of `-xstride` or `zstride`. If you look at previous `print_cube()` output you'll see those are the only possible moves.
Now we have all the bits to build a recursive solver.
```
def solve(cube, position, direction, snake, piece_number):
"""Recursive cube solver"""
if len(snake) == 0:
print("Solution")
print_cube(cube)
return
length, snake = snake[0], snake[1:]
valid_moves = moves(cube, position, direction, length)
for new_direction in valid_moves:
new_position = place(cube, position, new_direction, length, piece_number)
solve(cube, new_position, new_direction, snake, piece_number+length)
unplace(cube, position, new_direction, length)
```
This works by being passed the `snake` of moves left. If there are no moves left then it must be solved, so we print the solution. Otherwise it takes the head off the `snake` with `length, snake = snake[0], snake[1:]` and makes the list of valid moves of that `length`.
Then we `place` each move, and try to `solve` that cube using a recursive call to `solve`. We `unplace` the move so we can try again.
This very quickly runs through all the possible solutions.
```
# Start just off the side
position = pos(0,1,1)
direction = xstride
length = snake[0]
# Place the first segment along one edge - that is the only possible place it can go
position = place(cube, position, direction, length, 1)
# Now solve!
solve(cube, position, direction, snake[1:], length+1)
```
Wow! It came up with 2 solutions! However they are the same solution just rotated and reflected.
But how do you use the solution? Starting from the correct end of the snake, place each piece into its corresponding number. Take the first layer of the solution as being the bottom (or top - whatever is easiest), the next layer is the middle and the one after the top.

After a bit of fiddling around you'll get...

I hope you enjoyed that introduction to puzzle solving with a computer.
If you want to try one yourself, use the same technique to solve solitaire.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from pyod.utils.data import generate_data, get_outliers_inliers
import warnings
warnings.filterwarnings('ignore')
```
## Generating Sample Data
OCSVM (One-Class SVM) is an unsupervised learning method and one of the approaches used for novelty detection.
Accordingly, the model is trained under the assumption that all of the training data are normal.
The sample data are generated as follows:
- Generate sample data with the PyOD library, setting the true outlier ratio to 5% of the data.
- Split the data into a training set and a test set.
```
train, test = generate_data(random_state = 42, train_only = True, contamination = 0.05)
X_train, X_test, y_train, y_test = train_test_split(train, test, test_size = 0.2, random_state = 42)
```
## Fitting the Model
As mentioned above, OCSVM does not require labels, so the model is fit using only the feature data.
```
clf = svm.OneClassSVM(nu = 0.1, kernel = 'rbf', gamma = 0.1)
clf.fit(X_train) # Unsupervised Learning Method
```
## Classifying Labels with the Fitted Model
```
class OCSVM:
def __init__(self, nu, kernel, gamma):
self.nu = nu
self.kernel = kernel
self.gamma = gamma
self.result_df = pd.DataFrame()
self.clf = svm.OneClassSVM(nu = self.nu, kernel = self.kernel, gamma = self.gamma)
def fit(self, X_train, ground_truth):
self.X_train = X_train
self.y_train = ground_truth
self.clf.fit(self.X_train)
return self.clf
def predict(self, X_test, is_return = False):
self.X_test = X_test
self.prediction = self.clf.predict(self.X_test)
if is_return:
return self.prediction
def visualization(self):
self.result_df['X1'] = self.X_train[:, 0]
self.result_df['X2'] = self.X_train[:, 1]
self.result_df['Prediction'] = pd.Series(self.prediction).apply(lambda x: 0 if x == 1 else 1)
self.result_df['Actual'] = self.y_train
xx, yy = np.meshgrid(np.linspace(self.result_df['X1'].min() - 1, self.result_df['X1'].max() + 1, 500),
np.linspace(self.result_df['X2'].min() - 1, self.result_df['X2'].max() + 1, 500))
        z = self.clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        z = z.reshape(xx.shape)
        plt.title("Novelty Detection\nNu = {}, Kernel = {}, Gamma = {}".format(self.nu, self.kernel, self.gamma))
        plt.contourf(xx, yy, z, levels = np.linspace(z.min(), 0, 7), cmap = plt.cm.PuBu)
        a = plt.contour(xx, yy, z, levels = [0], linewidths = 2, colors = 'darkred')
plt.contourf(xx, yy, z, levels=[0, z.max()], colors='palevioletred')
s = 40
b1 = plt.scatter(self.X_train[:, 0], self.X_train[:, 1], c = 'white', s = s, edgecolors = 'k')
outlier = plt.scatter(self.result_df.loc[self.result_df['Prediction'] == 1]['X1'], self.result_df.loc[self.result_df['Prediction'] == 1]['X2'],
c = 'red', edgecolor = 'k')
actual = plt.scatter(self.result_df.loc[self.result_df['Actual'] == 1]['X1'], self.result_df.loc[self.result_df['Actual'] == 1]['X2'],
c = 'gold', edgecolor = 'k', alpha = 0.8)
plt.axis('tight')
plt.xlim((self.result_df['X1'].min() - 1, self.result_df['X1'].max() + 1))
plt.ylim((self.result_df['X2'].min() - 1, self.result_df['X2'].max() + 1))
plt.show()
nu = 0.1
kernel = 'rbf'
gamma = 0.007
model = OCSVM(nu = nu, kernel = kernel, gamma = gamma)
model.fit(X_train, y_train)
model.predict(X_train)
```
## Visualization
```
model.visualization()
```
As the plot shows, the OCSVM hyperparameter nu plays a role similar to C in a standard SVM. Put differently, it can be viewed as an upper bound on the fraction of misclassified training points: setting nu = 0.05, for example, means that at most about 5% of the training data may be flagged as outliers.
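To illustrate this (a quick sketch, not part of the original analysis), we can sweep over a few values of nu and check the fraction of training points flagged as outliers; it should stay roughly at or below each nu value:
```
# Illustrative sweep over nu: fraction of training points flagged as outliers
for nu in [0.01, 0.05, 0.1, 0.2]:
    oc = svm.OneClassSVM(nu = nu, kernel = 'rbf', gamma = 0.1)
    pred = oc.fit_predict(X_train)
    frac_outliers = np.mean(pred == -1)
    print("nu = {:.2f} -> fraction flagged as outliers: {:.3f}".format(nu, frac_outliers))
```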
# Hugging Face Transformers with `Pytorch`
### Text Classification Example using vanilla `Pytorch`, `Transformers`, `Datasets`
# Introduction
Welcome to this end-to-end multilingual Text-Classification example using PyTorch. In this demo, we will use the Hugging Faces `transformers` and `datasets` library together with `Pytorch` to fine-tune a multilingual transformer for text-classification. This example is a derived version of the [text-classificiaton.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook and uses Amazon SageMaker for distributed training. In the [text-classificiaton.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) we showed how to fine-tune `distilbert-base-multilingual-cased` on the `amazon_reviews_multi` dataset for `sentiment-analysis`. This dataset has over 1.2 million data points, which is huge. Running training would take on 1x NVIDIA V100 takes around 6,5h for `batch_size` 16, which is quite long.
To scale and accelerate our training we will use [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/), which provides two strategies for [distributed training](https://huggingface.co/docs/sagemaker/train#distributed-training): [data parallelism](https://huggingface.co/docs/sagemaker/train#data-parallelism) and model parallelism. Data parallelism splits a training set across several GPUs, while [model parallelism](https://huggingface.co/docs/sagemaker/train#model-parallelism) splits a model across several GPUs. We are going to use [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which has been built into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API. To be able to use data parallelism we only have to define the `distribution` parameter in our `HuggingFace` estimator.
I moved the "training" part of the [text-classificiaton.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook into a separate training script [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on Amazon SageMaker using the `HuggingFace` estimator.
Our goal is to decrease the training duration by scaling our global/effective batch size from 16 up to 128, which is 8x bigger than before. For monitoring our training we will use the new Training Metrics support from the [Hugging Face Hub](hf.co/models).
### Installation
```
#!pip install sagemaker
!pip install transformers datasets tensorboard datasets[s3] --upgrade
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as remote model versioning service. To be able to push our model to the Hub, you need to register on the [Hugging Face](https://huggingface.co/join).
If you already have an account you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on the disk.
```
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step we will define global configurations and parameters, which are used across the whole end-to-end fine-tuning process, e.g. the `tokenizer` and `model` we will use.
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
_Note: The execution role is only available when running a notebook within SageMaker (SageMaker Notebook Instances or Studio). If you run `get_execution_role` in a notebook not on SageMaker, expect a region error._
You can uncomment the cell below and provide an IAM Role name with SageMaker permissions to set up your environment outside of SageMaker.
```
# import sagemaker
# import boto3
# import os
# os.environ["AWS_DEFAULT_REGION"]="your-region"
# # This ROLE needs to exists with your associated AWS Credentials and needs permission for SageMaker
# ROLE_NAME='role-name-of-your-iam-role-with-right-permissions'
# iam_client = boto3.client('iam')
# role = iam_client.get_role(RoleName=ROLE_NAME)['Role']['Arn']
# sess = sagemaker.Session()
# print(f"sagemaker role arn: {role}")
# print(f"sagemaker bucket: {sess.default_bucket()}")
# print(f"sagemaker session region: {sess.boto_region_name}")
```
In this example we are going to fine-tune [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), a multilingual DistilBERT model.
```
model_id = "distilbert-base-multilingual-cased"
# name for our repository on the hub
model_name = model_id.split("/")[-1] if "/" in model_id else model_id
repo_name = f"{model_name}-sentiment"
```
## Dataset & Pre-processing
As our dataset we will use [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi), a multilingual text-classification dataset. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.). The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
```
dataset_id="amazon_reviews_multi"
dataset_config="all_languages"
seed=33
```
To load the `amazon_reviews_multi` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```
from datasets import load_dataset
dataset = load_dataset(dataset_id,dataset_config)
```
### Pre-processing & Tokenization
The [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) has 5 classes (`stars`) to match those into a `sentiment-analysis` task we will map those star ratings to the following classes `labels`:
* `[1-2]`: `Negative`
* `[3]`: `Neutral`
* `[4-5]`: `Positive`
Those `labels` can be later used to create a user friendly output after we fine-tuned our model.
```
from datasets import ClassLabel
def map_start_to_label(review):
if review["stars"] < 3:
review["stars"] = 0
elif review["stars"] == 3:
review["stars"] = 1
else:
review["stars"] = 2
return review
# convert 1-5 star reviews to 0,1,2
dataset = dataset.map(map_start_to_label)
# convert feature from Value to ClassLabel
class_feature = ClassLabel(names=['negative','neutral', 'positive'])
dataset = dataset.cast_column("stars", class_feature)
# rename our target column to labels
dataset = dataset.rename_column("stars","labels")
# drop columns that are not needed
dataset = dataset.remove_columns(['review_id', 'product_id', 'reviewer_id', 'review_title', 'language', 'product_category'])
dataset["train"].features
```
Before we prepare the dataset for training, let's take a quick look at the class distribution of the dataset.
```
import pandas as pd
df = dataset["train"].to_pandas()
df.hist()
```
The distribution is not perfect, but let's give it a try and improve on this later.
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Additionally we add the `truncation=True` and `max_length=512` to align the length and truncate texts that are bigger than the maximum size allowed by the model.
```
def process(examples):
tokenized_inputs = tokenizer(
examples["review_body"], truncation=True, max_length=512
)
return tokenized_inputs
tokenized_datasets = dataset.map(process, batched=True)
tokenized_datasets["train"].features
```
Before we can start our distributed training, we need to upload our pre-processed dataset to Amazon S3. For this we will use the built-in S3 utilities of `datasets`.
```
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{dataset_id}/train'
tokenized_datasets["train"].save_to_disk(training_input_path, fs=s3)
# save validation_dataset to s3
eval_input_path = f's3://{sess.default_bucket()}/{dataset_id}/test'
tokenized_datasets["validation"].save_to_disk(eval_input_path, fs=s3)
```
## Creating an Estimator and start a training job
Last step before we can start our managed training is to define our Hyperparameters, create our sagemaker `HuggingFace` estimator and configure distributed training.
```
from sagemaker.huggingface import HuggingFace
from huggingface_hub import HfFolder
# hyperparameters, which are passed into the training job
hyperparameters={
'model_id':'distilbert-base-multilingual-cased',
'epochs': 3,
'per_device_train_batch_size': 16,
'per_device_eval_batch_size': 16,
'learning_rate': 3e-5*8,
'fp16': True,
    # logging & evaluation strategies
'strategy':'steps',
'steps':5_000,
'save_total_limit':2,
'load_best_model_at_end':True,
'metric_for_best_model':"f1",
# push to hub config
'push_to_hub': True,
'hub_model_id': 'distilbert-base-multilingual-cased-sentiment-2',
'hub_token': HfFolder.get_token()
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py',
source_dir = './scripts',
instance_type = 'ml.p3.16xlarge',
instance_count = 1,
role = role,
transformers_version = '4.12',
pytorch_version = '1.9',
py_version = 'py38',
hyperparameters = hyperparameters,
distribution = distribution
)
```
Since we are using SageMaker Data Parallelism, our total (effective) batch size will be `per_device_train_batch_size * n_gpus`.
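For example (a small sanity check, assuming the `ml.p3.16xlarge` instance used above with its 8 V100 GPUs):
```
n_gpus = 8  # ml.p3.16xlarge provides 8x V100 GPUs
total_batch_size = hyperparameters['per_device_train_batch_size'] * n_gpus
print(total_batch_size)  # 16 * 8 = 128, the 8x larger effective batch size we are aiming for
```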
```
# define a data input dictonary with our uploaded s3 uris
data = {
'train': training_input_path,
'eval': eval_input_path
}
# starting the train job with our uploaded datasets as input
# setting wait to False to not expose the HF Token
huggingface_estimator.fit(data,wait=False)
```
Since we are using the Hugging Face Hub integration with TensorBoard, we can inspect our progress directly on the Hub, as well as test checkpoints during training.
```
from huggingface_hub import HfApi
whoami = HfApi().whoami()
username = whoami['name']
print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")
```

# Quantum teleportation
By the end of this post, we will teleport the quantum state
$$\sqrt{0.70}\vert0\rangle + \sqrt{0.30}\vert1\rangle$$ from Alice's qubit to Bob's qubit.
Recall that the teleportation algorithm consists of four major components:
1. Initializing the state to be teleported. We will do this on Alice's qubit `q0`.
2. Creating entanglement between two qubits. We will use qubits `q1` and `q2` for this. Recall that Alice owns `q1`, and Bob owns `q2`.
3. Applying a Bell measurement on Alice's qubits `q0` and `q1`.
4. Applying classically controlled operations on Bob's qubit `q2` depending on the outcomes of the Bell measurement on Alice's qubits.
This exercise guides you through each of these steps.
### Initializing the state to be teleported
First, we create a quantum circuit that has the state $$\sqrt{0.70}\vert0\rangle + \sqrt{0.30}\vert1\rangle$$ We can do this by using `Qiskit`'s `initialize` function.
```
import numpy as np
import math
def initialize_qubit(given_circuit, qubit_index):
### WRITE YOUR CODE BETWEEN THESE LINES - START
desired_vector = [math.sqrt(0.70),math.sqrt(0.30)]
given_circuit.initialize(desired_vector, [all_qubits_Alice[qubit_index]])
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Next, we need to create entanglement between Alice's and Bob's qubits.
```
def entangle_qubits(given_circuit, qubit_Alice, qubit_Bob):
### WRITE YOUR CODE BETWEEN THESE LINES - START
given_circuit.h(qubit_Alice)
given_circuit.cx(qubit_Alice,qubit_Bob)
    given_circuit.barrier()
    # rotate Alice's qubits into the Bell basis (the actual measurement happens in the next step)
    given_circuit.cx(0,1)
    given_circuit.h(0)
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Next, we need to do a Bell measurement of Alice's qubits.
```
def bell_meas_Alice_qubits(given_circuit, qubit1_Alice, qubit2_Alice, clbit1_Alice, clbit2_Alice):
### WRITE YOUR CODE BETWEEN THESE LINES - START
given_circuit.measure([0,1], [0,1])
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Finally, we apply controlled operations on Bob's qubit. Recall that the controlled operations are applied in this order:
- an $X$ gate is applied on Bob's qubit if the measurement outcome of Alice's second qubit, `clbit2_Alice`, is `1`.
- a $Z$ gate is applied on Bob's qubit if the measurement outcome of Alice's first qubit, `clbit1_Alice`, is `1`.
```
def controlled_ops_Bob_qubit(given_circuit, qubit_Bob, clbit1_Alice, clbit2_Alice):
### WRITE YOUR CODE BETWEEN THESE LINES - START
given_circuit.x(qubit_Bob).c_if(clbit2_Alice, 1)
given_circuit.z(qubit_Bob).c_if(clbit1_Alice, 1)
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
The next lines of code put everything together.
```
### imports
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute
### set up the qubits and classical bits
all_qubits_Alice = QuantumRegister(2)
all_qubits_Bob = QuantumRegister(1)
creg1_Alice = ClassicalRegister(1)
creg2_Alice = ClassicalRegister(1)
### quantum teleportation circuit here
# Initialize
mycircuit = QuantumCircuit(all_qubits_Alice, all_qubits_Bob, creg1_Alice, creg2_Alice)
initialize_qubit(mycircuit, 0)
mycircuit.barrier()
# Entangle
entangle_qubits(mycircuit, 1, 2)
mycircuit.barrier()
# Do a Bell measurement
bell_meas_Alice_qubits(mycircuit, all_qubits_Alice[0], all_qubits_Alice[1], creg1_Alice, creg2_Alice)
mycircuit.barrier()
# Apply classically controlled quantum gates
controlled_ops_Bob_qubit(mycircuit, all_qubits_Bob[0], creg1_Alice, creg2_Alice)
### Look at the complete circuit
mycircuit.draw()
from qiskit import BasicAer
from qiskit.visualization import plot_histogram, plot_bloch_multivector
backend = BasicAer.get_backend('statevector_simulator')
out_vector = execute(mycircuit, backend).result().get_statevector()
plot_bloch_multivector(out_vector)
```
As you can see, the state initialized on qubit 0 has been teleported to qubit 2. Note, however, that in the teleportation process the original qubit state was destroyed when we measured Alice's qubits.
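As an optional sanity check (a sketch that assumes the `qiskit.quantum_info` utilities are available in your Qiskit version), we can trace out Alice's qubits and confirm that Bob's qubit carries the initialized probabilities 0.70 and 0.30:
```
from qiskit.quantum_info import Statevector, partial_trace
# trace out Alice's qubits (indices 0 and 1); the result is Bob's reduced density matrix
rho_bob = partial_trace(Statevector(out_vector), [0, 1])
print(rho_bob.probabilities())  # expected: approximately [0.70, 0.30]
```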
## References:
The original lab can be found in the link: https://qiskit.org/learn/intro-qc-qh/
Here is just my solution to the original lab file. I made some modifications to fit the style of the blog as well.
# How to search the IOOS CSW catalog with Python tools
This notebook demonstrates how to query a [Catalog Service for the Web (CSW)](https://en.wikipedia.org/wiki/Catalog_Service_for_the_Web), like the IOOS Catalog, and how to parse its results into endpoints that can be used to access the data.
```
import os
import sys
ioos_tools = os.path.join(os.path.pardir)
sys.path.append(ioos_tools)
```
Let's start by creating the search filters.
The filter used here constrains the search to a certain geographical region (bounding box), a time span (last week), and some [CF](http://cfconventions.org/Data/cf-standard-names/37/build/cf-standard-name-table.html) variable standard names that represent sea surface temperature.
```
from datetime import datetime, timedelta
import dateutil.parser
service_type = 'WMS'
min_lon, min_lat = -90.0, 30.0
max_lon, max_lat = -80.0, 40.0
bbox = [min_lon, min_lat, max_lon, max_lat]
crs = 'urn:ogc:def:crs:OGC:1.3:CRS84'
# Temporal range: Last week.
now = datetime.utcnow()
start, stop = now - timedelta(days=(7)), now
start = dateutil.parser.parse('2017-03-01T00:00:00Z')
stop = dateutil.parser.parse('2017-04-01T00:00:00Z')
# Ocean Model Names
model_names = ['NAM', 'GFS']
```
With these 3 elements it is possible to assemble an [OGC Filter Encoding (FE)](http://www.opengeospatial.org/standards/filter) using the `owslib.fes`\* module.
\* OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models.
```
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:AnyText')
or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
for val in model_names])
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:ServiceType')
serviceType = fes.PropertyIsLike(literal=('*%s*' % service_type), **kw)
begin, end = fes_date_filter(start, stop)
bbox_crs = fes.BBox(bbox, crs=crs)
filter_list = [
fes.And(
[
bbox_crs, # bounding box
begin, end, # start and end date
or_filt, # or conditions (CF variable names)
serviceType # search only for datasets that have WMS services
]
)
]
from owslib.csw import CatalogueServiceWeb
endpoint = 'https://data.ioos.us/csw'
csw = CatalogueServiceWeb(endpoint, timeout=60)
```
The `csw` object created from `CatalogueServiceWeb` has not fetched anything yet.
It is the method `getrecords2` that uses the filter for the search. However, even though there is a `maxrecords` option, the search is always limited by the server side, so we need to iterate over multiple calls of `getrecords2` to actually retrieve all the records.
The `get_csw_records` function below does exactly that.
```
def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000):
"""Iterate `maxrecords`/`pagesize` times until the requested value in
`maxrecords` is reached.
"""
from owslib.fes import SortBy, SortProperty
# Iterate over sorted results.
sortby = SortBy([SortProperty('dc:title', 'ASC')])
csw_records = {}
startposition = 0
nextrecord = getattr(csw, 'results', 1)
while nextrecord != 0:
csw.getrecords2(constraints=filter_list, startposition=startposition,
maxrecords=pagesize, sortby=sortby)
csw_records.update(csw.records)
if csw.results['nextrecord'] == 0:
break
startposition += pagesize + 1 # Last one is included.
if startposition >= maxrecords:
break
csw.records.update(csw_records)
get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = '\n'.join(csw.records.keys())
print('Found {} records.\n'.format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
print('[{}]\n{}\n'.format(value.title, key))
csw.request
#write to JSON for use in TerriaJS
csw_request = '"{}": {}"'.format('getRecordsTemplate',str(csw.request,'utf-8'))
import io
import json
with io.open('query.json', 'a', encoding='utf-8') as f:
f.write(json.dumps(csw_request, ensure_ascii=False))
f.write('\n')
```
# Vladislav Abramov and Sergei Garshin DSBA182
## The Task
### What do we expect from the tutorial?
1. Estimate a specific model of the chosen class. Not just call `.fit`, but also write out the resulting equation!
2. Select a model automatically (built-in model selection).
3. Plot the forecasts, with interval forecasts where available.
4. Compare a few (two or three) models of the chosen class using a rolling window.
5. Creativity, any extras, memes :)
### Choose a model class: ETS, ARIMA, BATS + TBATS, PROPHET, random forest + feature engineering, GARCH, or propose your own
### Goal: when people ask a year from now "how do I estimate ETS/ARIMA in Python?", the answer should be "read the tutorials from our course!"
---
---
---
# Real Data Analysis with ARIMA models
Let's begin with collecting stock data
```
import pandas as pd
import yfinance as yf
from matplotlib import pyplot as plt
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from pmdarima.arima import auto_arima, ARIMA, ADFTest
from sklearn.metrics import mean_squared_error
from math import sqrt
from tqdm import tqdm
from sklearn.metrics import r2_score
import warnings
warnings.filterwarnings('ignore')
def should_diff(data):
adf_test = ADFTest(alpha = 0.05)
return adf_test.should_diff(data)
def get_stock_data(ticker, start, end):
tickerData = yf.Ticker(ticker)
tickerDf = tickerData.history(period='1d', start = start, end = end)
return tickerDf
def train_test_devision(n, data):
train = data[:-n]
test = data[-n:]
return train, test
def differentiate_data(dataset, interval=1):
diff = list()
for i in range(interval, len(dataset)):
value = dataset[i] - dataset[i - interval]
diff.append(value)
return diff
def autocorrelation_plot(data):
data = np.array(data)**2
plot_acf(data)
plt.show()
def p_autocorrelation_plot(data):
data = np.array(data)**2
plot_pacf(data)
plt.show()
data = get_stock_data('AAPL', '2015-1-1', '2021-2-1')
data.head(10)
```
---
Here we may observe the graph of the stock price of Apple Inc. over the period 1 Jan 2015 to 1 Feb 2021.
```
plt.plot(data['Close'])
plt.title('Close Stock Prices')
```
---
Looking at the graph, it is obvious that the data is not stationary and has a strong trend. Nevertheless, let's confirm non-stationarity with the autocorrelation plot and the augmented Dickey-Fuller test.
```
print('Should differentiate? :', should_diff(data['Close']))
print()
print('ACF of undifferentiated data')
autocorrelation_plot(data['Close'])
```
---
As we can see, we were right, the data is not stationary!
## Stationarity check & conversion to stationary data
Now let's difference the initial stock data to obtain a stationary series of deltas.
```
X = pd.DataFrame()
X['Diff_Close'] = differentiate_data(data['Close'])
plt.plot(X['Diff_Close'])
plt.title('Stationary stock data plot')
```
As we may notice, differencing removed the trend and made the data much more stationary than before. As the next step, let's check stationarity again using the autocorrelation plot, the partial autocorrelation plot, and the augmented Dickey-Fuller test.
```
print('Should differentiate? :', should_diff(X['Diff_Close']))
print()
print('ACF of differentiated data')
autocorrelation_plot(X['Diff_Close'])
print('PACF of differentiated data')
p_autocorrelation_plot(X['Diff_Close'])
```
Wow! The data has become stationary! We may go further!
---
## Train / Test devision
In this step we divide our data into two parts, train and test. Our model will be fit on the training set and its predictions will be compared with the test set.
```
n = 50
train, test = train_test_devision(n, data['Close'])
fig, ax = plt.subplots()
ax.plot(train, label = 'Train Set')
ax.plot(test, label = 'Test Set')
fig.set_figheight(6)
fig.set_figwidth(10)
ax.legend()
```
---
# Manual Model
In this part we have decided to train an ARIMA(3,1,2) model, where p = 3 AR terms, d = 1 since we need one differencing, and q = 2 MA terms.
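For reference, the general form of this specification (written out so it can be compared with the fitted equation below; $\phi_i$ and $\theta_j$ are the AR and MA coefficients to be estimated) is:
$\Delta y_t = \phi_1 \Delta y_{t-1} + \phi_2 \Delta y_{t-2} + \phi_3 \Delta y_{t-3} + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \varepsilon_t$, where $\Delta y_t = y_t - y_{t-1}$.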
```
X = data['Close'].values
size = len(train.values)
train, test = train.values, test.values
history = [x for x in train]
predictions, CI = [],[]
for t in tqdm(range(len(test))):
model = ARIMA((3,1,2))
model.fit(history)
y_hat, conf_int = model.predict(n_periods = 1, return_conf_int = True, alpha=0.05)
predictions.append(y_hat)
CI.append(conf_int)
obs = test[t]
history.append(obs)
# print('predicted=%f, expected=%f' % (yhat, obs))
rmse = sqrt(mean_squared_error(test, predictions))
r_squared = r2_score(test, predictions)
print('Test RMSE: %.3f' % rmse)
print('Test R^2: %.3f' % r_squared)
fig, ax = plt.subplots(figsize=(15,8))
ax.plot(test, label = 'Test Set')
ax.plot(predictions, label = 'Prediction Set')
ax.set_title('ARIMA (3,1,2)')
ax.set_xlabel('Price')
ax.set_ylabel('Day')
ax.legend()
model.summary()
```
## The ARIMA equation we got
$\Delta y_t = -0.0090\,\Delta y_{t-1} - 0.1220\,\Delta y_{t-2} - 0.0377\,\Delta y_{t-3} - 0.1042\,\varepsilon_{t-1} - 0.1690\,\varepsilon_{t-2} + \varepsilon_t$
where $\Delta y_t = y_t - y_{t-1}$
As we may see, the model works pretty well.
---
## Automatic choice of the model
In this section we would like to try automatic parameter selection, which also accounts for seasonal dependency.
```
n = 50
train, test = train_test_devision(n, data['Close'])
model = auto_arima(train, start_p=1, start_q=1,
max_p=3, max_q=3, m=12,
start_P=0, seasonal=True,
d=1, D=1, trace = True,
error_action='ignore',
suppress_warnings = True,
stepwise = True)
model.summary()
y_hat, conf_int = model.predict(n_periods = n, return_conf_int = True, alpha=0.05)
predictions = pd.DataFrame(y_hat, index = test.index, columns = ['Prediction'])
CI = pd.DataFrame({'CI lower': conf_int[:, 0], 'CI upper': conf_int[:, 1]}, index = test.index)
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(20,8))
ax1.plot(train[1400:], label = 'Train Set')
ax1.plot(test, label = 'Test Set')
ax1.plot(predictions, label = 'Prediction Set')
ax1.plot(CI['CI lower'], label = 'CI lower', c = 'r')
ax1.plot(CI['CI upper'], label = 'CI upper', c = 'r')
ax1.set_title('Close look at the predictions')
ax1.set_xlabel('Price')
ax1.set_ylabel('Date')
ax1.legend()
ax2.plot(train[900:], label = 'Train Set')
ax2.plot(test, label = 'Test Set')
ax2.plot(predictions, label = 'Prediction Set')
ax2.plot(CI['CI lower'], label = 'CI lower', c = 'r')
ax2.plot(CI['CI upper'], label = 'CI upper', c = 'r')
ax2.set_title('Global look at the predictions')
ax2.set_xlabel('Price')
ax2.set_ylabel('Date')
ax2.legend()
```
To inspect the results we have built two graphs: the left one gives a closer, more local view of the predictions, while the right one gives a more global view.
---
---
---
Unfortunately, as of Wednesday evening we did not manage to complete all the items and give a detailed description of our steps. We would really appreciate comments on the completed stages, as well as advice and guidance :)
[exercises](intro.ipynb)
```
import numpy as np
np.arange(6)
np.arange(0, 0.6, 0.1), np.arange(6) * 0.1 # two possibilities
np.arange(0.5, 1.1, 0.1), "<-- wrong result!"
np.arange(5, 11) * 0.1, "<-- that's right!"
np.linspace(0, 6, 7)
np.linspace(0, 6, 6, endpoint=False), np.linspace(0, 5, 6) # two possibilities
np.linspace(0, 0.6, 6, endpoint=False), np.linspace(0, 0.5, 6) # again two possibilities
np.linspace(0.5, 1.1, 6, endpoint=False), np.linspace(0.5, 1, 6) # and again ...
```
If the number of elements is known and the step size should be obtained automatically $\Rightarrow$ `np.linspace()`
If the step size is known and it's an integer and the number of elements should be obtained automatically $\Rightarrow$ `np.arange()`
If the step size is not an integer:
* If the step size is a fraction of integers, you can use `np.arange()` with integers and divide the result accordingly.
* If that's not feasible, calculate the expected number of elements beforehand and use `np.linspace()`
```
dur, amp, freq, fs = 1, 0.3, 500, 44100
t = np.arange(np.ceil(dur * fs)) / fs
y = amp * np.sin(2 * np.pi * freq * t)
```
alternative (but inferior) methods to get $t$:
```
t1 = np.arange(0, dur, 1/fs) # implicit rounding of dur!
t2 = np.arange(0, np.round(dur), 1/fs) # still problematic: arange with floats
# wrong if dur isn't an integer multiple of 1/fs:
t3 = np.linspace(0, dur, np.round(dur * fs), endpoint=False)
```
Length of `y` must be *exactly* 44100 (using a half-open interval for $t$), not 44101 (which would be longer than 1 second).
Plotting: 2 ways to zoom (there are probably more): draw a rectangle, drag with the right mouse button in pan/zoom mode.
Clicks? Because of discontinuities (also in the derivatives) $\Rightarrow$ Fade in/out! See [tools.fade()](tools.py).
```
import sounddevice as sd
import tools
def myplay(data):
"""Apply fade in/out and play with 44.1 kHz."""
data = tools.fade(data, 2000, 5000)
sd.play(data, 44100)
myplay(y)
def mysine(frequency, amplitude, duration):
"""Generate sine tone with the given parameters @ 44.1 kHz."""
samplerate = 44100
times = np.arange(np.ceil(duration * samplerate)) / samplerate
return amplitude * np.sin(2 * np.pi * frequency * times)
z = mysine(440, 0.4, 3)
myplay(z)
%matplotlib
import matplotlib.pyplot as plt
def myplot(data):
"""Create a simple plot @ 44.1 kHz."""
samplerate = 44100
times = np.arange(len(data)) / samplerate
plt.plot(times, data)
plt.xlabel("Time / Seconds")
myplot(mysine(440, 0.4, 3))
import soundfile as sf
dur, amp = 1, 0.3
frequencies = 400, 500, 600 # Hz
fadetime = 2000 # samples
for freq in frequencies:
sig = mysine(freq, amp, dur)
sig = tools.fade(sig, fadetime)
sf.write("sine_{}hz.wav".format(freq), sig, 44100)
from scipy import signal
f0, f1 = 100, 5000 # Hz
amp = 0.2
dur = 2 # seconds
fadetime = 2000 # samples
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
for method in 'linear', 'log':
sweep = amp * signal.chirp(t, f0, dur, f1, method)
sweep = tools.fade(sweep, fadetime)
sf.write('sweep_{}.wav'.format(method), sweep, fs)
sinetone = mysine(frequency=500, amplitude=0.3, duration=1.5)
noise = np.random.normal(scale=0.1, size=len(sinetone))
sine_plus_noise = sinetone + noise
myplay(sine_plus_noise)
myplot(sine_plus_noise)
dur = 2
amp = 0.2
two_sines = mysine(500, amp, dur) + mysine(507, amp, dur)
myplay(two_sines)
myplot(two_sines)
```
Two sine tones with similar frequencies create "beats", see <http://en.wikipedia.org/wiki/Beat_(acoustics)>.
The sum of these two tones is equivalent to an amplitude modulation with a carrier frequency of $\frac{f_1+f_2}{2}$ and a modulation frequency of $\frac{f_1-f_2}{2}$.
$$\cos(2\pi f_1t)+\cos(2\pi f_2t) = 2\cos\left(2\pi\frac{f_1+f_2}{2}t\right)\cos\left(2\pi\frac{f_1-f_2}{2}t\right)$$
We don't really *hear* the modulation frequency itself, we only hear the envelope of the modulation, therefore the *perceived* beat frequency is $f_{\text{beat}} = f_1-f_2$.
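A quick numerical check of this identity (a small sketch using the same 500 Hz and 507 Hz tones as above):
```
f1, f2 = 500, 507
t_check = np.arange(4410) / 44100  # 0.1 seconds @ 44.1 kHz
lhs = np.cos(2 * np.pi * f1 * t_check) + np.cos(2 * np.pi * f2 * t_check)
rhs = 2 * np.cos(2 * np.pi * (f1 + f2) / 2 * t_check) * np.cos(2 * np.pi * (f1 - f2) / 2 * t_check)
np.allclose(lhs, rhs)  # True
```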
```
stereo_sines = np.column_stack([mysine(400, amp, dur), mysine(600, amp, dur)])
myplay(stereo_sines)
```
The first column should be the left channel!
```
dur, amp = 1, 0.3
freq = 500 # Hz
delay = 0.5 # ms
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
times = np.column_stack((t, t - delay/1000))
sig = amp * np.sin(2 * np.pi * freq * times)
myplay(sig)
dur, amp = 0.5, 0.3
frequencies = 500, 1000, 2000 # Hz
delays = 0.6, 0.4, 0.2, 0, -0.2, -0.4, -0.6 # ms
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
for f in frequencies:
for delay in delays:
times = np.column_stack((t, t - delay/1000))
sig = amp * np.sin(2 * np.pi * f * times)
myplay(sig)
sd.wait()
```
This is supposed to illustrate [Lord Rayleigh's Duplex Theory](http://en.wikipedia.org/wiki/Interaural_time_difference#Duplex_theory) (at least the part about time differences).
```
dur, amp = 2, 0.3
frequencies = np.array([200, 400, 600, 800, 1000])
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
t.shape = -1, 1
t
amplitudes = amp * 1 / np.arange(1, len(frequencies)+1)
amplitudes
five_sines = amplitudes * np.sin(2 * np.pi * frequencies * t)
five_sines.shape
sum_of_sines = five_sines.sum(axis=1)
myplot(sum_of_sines)
myplay(five_sines[:, [0, 1, 2, 3, 4]].sum(axis=1))
myplay(five_sines[:, [0, 1, 2, 3]].sum(axis=1))
myplay(five_sines[:, [0, 1, 2, 4]].sum(axis=1))
myplay(five_sines[:, [0, 1, 3, 4]].sum(axis=1))
myplay(five_sines[:, [0, 2, 3, 4]].sum(axis=1))
myplay(five_sines[:, [1, 2, 3, 4]].sum(axis=1))
```
<https://en.wikipedia.org/wiki/Harmonic_series_(music)>
```
f0 = 200 # Hz
partials = 20
frequencies = f0 * np.arange(1, partials + 1)
frequencies
amplitudes = amp * 1 / np.arange(1, len(frequencies)+1)
amplitudes
many_sines = amplitudes * np.sin(2 * np.pi * frequencies * t)
many_sines.shape
sawtooth = many_sines.sum(axis=1)
myplot(sawtooth)
myplay(sawtooth)
```
https://en.wikipedia.org/wiki/Sawtooth_wave
```
square = many_sines[:, ::2].sum(axis=1)
myplot(square)
myplay(square)
```
https://en.wikipedia.org/wiki/Square_wave
```
c = 343
samplerate = 44100
dur = 0.01
phat = 0.2
freq = 500
omega = 2 * np.pi * freq
kx = omega / c
x = 0
time = np.arange(np.ceil(dur * fs)) / fs
p = phat * np.exp(1j*(kx*x - omega*time))
plt.plot(time*1000, np.real(p))
plt.xlabel('$t$ / ms')
plt.ylabel('$\mathcal{R}\{p(x,t)\}$ / Pa')
plt.grid()
plt.title('$f = {}$ Hz, $T = {}$ ms'.format(freq, 1000/freq));
xrange = 3
dx = 0.001
time = 0
x = np.arange(np.ceil(xrange/dx)) * dx
p = phat * np.exp(1j*(kx*x - omega*time))
plt.plot(x*100, np.real(p))
plt.xlabel('$x$ / cm')
plt.ylabel('$\mathcal{R}\{p(x,t)\}$ / Pa')
plt.grid()
plt.title('$f = {}$ Hz, $\lambda = {}$ cm'.format(freq, c*100/freq));
```
<p xmlns:dct="http://purl.org/dc/terms/">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<span rel="dct:publisher" resource="[_:publisher]">the person who associated CC0</span>
with this work has waived all copyright and related or neighboring
rights to this work.
</p>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Load data
We know a person's age and experience; we want to be able to infer whether the person is a badass in their field or not.
```
df = pd.DataFrame({
'Age': [20,16.2,20.2,18.8,18.9,16.7,13.6,20.0,18.0,21.2,
25,31.2,25.2,23.8,23.9,21.7,18.6,25.0,23.0,26.2],
'Experience': [2.3,2.2,1.8,1.4,3.2,3.9,1.4,1.4,3.6,4.3,
4.3,4.2,3.8,3.4,5.2,5.9,3.4,3.4,5.6,6.3],
'Badass': [0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,1]
})
df
colors = np.full_like(df['Badass'], 'red', dtype='object')
colors[df['Badass'] == 1] = 'blue'
plt.scatter(df['Age'], df['Experience'], color=colors)
X = df.drop('Badass', axis=1).values
Y = df['Badass'].values
# Case to predict
x = [21.2, 4.3]
```
## Using sklearn
### Fit
```
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=1e20, solver='liblinear', random_state=0)
%time model.fit(X, Y)
print(model.intercept_, model.coef_)
```
### Plot Decision Boundary
<details>
<summary>Where does the equation come from? ↓</summary>
<img src="https://i.imgur.com/YxSDJZA.png?1">
</details>
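In short (a standard derivation, included here in case the linked image is unavailable): the model estimates $P(y=1 \mid x) = \sigma(b_0 + b_1 x_1 + b_2 x_2)$ with $\sigma(z) = 1/(1+e^{-z})$. The decision boundary at threshold 0.5 is where $\sigma(\cdot) = 0.5$, i.e. $b_0 + b_1 x_1 + b_2 x_2 = 0$, which rearranges to $x_2 = -\frac{b_1}{b_2} x_1 - \frac{b_0}{b_2}$ — the line plotted below.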
```
b0 = model.intercept_[0]
b1 = model.coef_[0][0]
b2 = model.coef_[0][1]
plt.scatter(df['Age'], df['Experience'], color=colors)
# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')
# Plot using contour
_X1 = np.linspace(df['Age'].min(), df['Age'].max(),10)
_X2 = np.linspace(df['Experience'].min(), df['Experience'].max(),10)
xx1, xx2 = np.meshgrid(_X1, _X2)
grid = np.c_[xx1.ravel(), xx2.ravel()]
preds = model.predict_proba(grid)[:, 1].reshape(xx1.shape)
plt.scatter(df['Age'], df['Experience'], color=colors)
plt.contour(xx1, xx2, preds, levels=[.5], cmap="Greys", vmin=0, vmax=.6)
```
### Predict
```
print('Probabilité de badass:', model.predict_proba([x])[0][1])
print('Prediction:', model.predict([x])[0])
```
## From scratch
### Fit
Source: https://github.com/martinpella/logistic-reg/blob/master/logistic_reg.ipynb
```
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def loss(h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
def gradientDescent(X, y, theta, alpha, epochs, verbose=True):
m = len(y)
for i in range(epochs):
h = sigmoid(X.dot(theta))
gradient = (X.T.dot(h - y)) / m
theta -= alpha * gradient
if(verbose and i % 1000 == 0):
z = np.dot(X, theta)
h = sigmoid(z)
print('loss:', loss(h, y))
return theta
# Add intercept
m = len(X)
b = np.ones((m,1))
Xb = np.concatenate([b, X], axis=1)
# Fit
theta = np.random.rand(3)
theta = gradientDescent(Xb, Y, theta=theta, alpha=0.1, epochs=10000)
theta
```
### Plot
```
b0 = theta[0]
b1 = theta[1]
b2 = theta[2]
plt.scatter(df['Age'], df['Experience'], color=colors)
# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')
```
### Predict
```
z = b0 + b1 * x[0] + b2 * x[1]
p = 1 / (1 + np.exp(-z))
print('Probabilité de badass:', p)
print('Prediction:', (1 if p > 0.5 else 0))
```
### Introduction
An example of implementing the Metapath2Vec representation learning algorithm using components from the `stellargraph` and `gensim` libraries.
**References**
**1.** Metapath2Vec: Scalable Representation Learning for Heterogeneous Networks. Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 135–144, 2017. ([link](https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf))
**2.** Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
**3.** Gensim: Topic modelling for humans. ([link](https://radimrehurek.com/gensim/))
**4.** Social Computing Data Repository at ASU [http://socialcomputing.asu.edu]. R. Zafarani and H. Liu. Tempe, AZ: Arizona State University, School of Computing, Informatics and Decision Systems Engineering. 2009.
```
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import os
import networkx as nx
import numpy as np
import pandas as pd
from stellargraph.data.loader import load_dataset_BlogCatalog3
%matplotlib inline
```
### Load the dataset
The dataset is the BlogCatalog3 network.
It can be downloaded from [here.](http://socialcomputing.asu.edu/datasets/BlogCatalog3)
The following is the description of the dataset from the publisher [4]:
> This is the data set crawled from BlogCatalog ( http://www.blogcatalog.com ). BlogCatalog is a social blog directory website. This contains the friendship network crawled and group memberships. For easier understanding, all the contents are organized in CSV file format.
The statistics of this network are,
- Number of bloggers : 10,312
- Number of friendship pairs: 333,983
- Number of groups: 39
We assume that the dataset file `BlogCatalog-dataset.zip` has been downloaded and unzipped in the directory,
`~/data`
and the data in `csv` format (the files `edges.csv`, `nodes.csv`, `groups.csv`, and `group-edges.csv` can be found in directory,
`~/data/BlogCatalog-dataset/data/`
```
dataset_location = os.path.expanduser("~/data/BlogCatalog-dataset/data")
g_nx = load_dataset_BlogCatalog3(location=dataset_location)
print("Number of nodes {} and number of edges {} in graph.".format(g_nx.number_of_nodes(), g_nx.number_of_edges()))
```
### The Metapath2Vec algorithm
The Metapath2Vec algorithm introduced in [1] is a 2-step representation learning algorithm. The two steps are:
1. Use uniform random walks to generate sentences from a graph. A sentence is a list of node IDs. The set of all sentences makes a corpus. The random walk is driven by a metapath that defines the node type order by which the random walker explores the graph.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node ID is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [2] is used for calculating the embedding vectors.
## Corpus generation using random walks
The `stellargraph` library provides an implementation for uniform, first order, random walks as required by Metapath2Vec. The random walks have fixed maximum length and are controlled by the list of metapath schemas specified in parameter `metapaths`.
A metapath schema defines the type of node that the random walker is allowed to transition to from its current location. In the `stellargraph` implementation of metapath-driven random walks, the metapath schemas are given as a list of node types under the assumption that the input graph is not a multi-graph, i.e., two nodes are only connected by one edge type.
See [1] for a detailed description of metapath schemas and metapth-driven random walks.
For the **BlogCatalog3** dataset we use the following 3 metapaths.
- "user", "group", "user"
- "user", "group", "user", "user"
- "user", "user"
```
from stellargraph.data import UniformRandomMetaPathWalk
from stellargraph import StellarGraph
# Create the random walker
rw = UniformRandomMetaPathWalk(StellarGraph(g_nx))
# specify the metapath schemas as a list of lists of node types.
metapaths = [
["user", "group", "user"],
["user", "group", "user", "user"],
["user", "user"],
]
walks = rw.run(nodes=list(g_nx.nodes()), # root nodes
length=100, # maximum length of a random walk
n=1, # number of random walks per root node
metapaths=metapaths # the metapaths
)
print("Number of random walks: {}".format(len(walks)))
```
### Representation Learning using Word2Vec
We use the Word2Vec [2] implementation in the free Python library gensim [3] to learn representations for each node in the graph.
We set the dimensionality of the learned embedding vectors to 128 as in [1].
```
from gensim.models import Word2Vec
model = Word2Vec(walks, size=128, window=5, min_count=0, sg=1, workers=2, iter=1)
model.wv.vectors.shape # 128-dimensional vector for each node in the graph
```
### Visualise Node Embeddings
We retrieve the Word2Vec node embeddings that are 128-dimensional vectors and then we project them down to 2 dimensions using the [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) algorithm.
```
# Retrieve node embeddings and corresponding subjects
node_ids = model.wv.index2word # list of node IDs
node_embeddings = model.wv.vectors # numpy.ndarray of size number of nodes times embeddings dimensionality
node_targets = [ g_nx.node[node_id]['label'] for node_id in node_ids]
```
Transform the embeddings to 2d space for visualisation
```
transform = TSNE #PCA
trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)
# draw the points
label_map = { l: i for i, l in enumerate(np.unique(node_targets))}
node_colours = [ label_map[target] for target in node_targets]
plt.figure(figsize=(20,16))
plt.axes().set(aspect="equal")
plt.scatter(node_embeddings_2d[:,0],
node_embeddings_2d[:,1],
c=node_colours, alpha=0.3)
plt.title('{} visualization of node embeddings'.format(transform.__name__))
plt.show()
```
### Downstream task
The node embeddings calculated using Metapath2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the gender or age attribute of 'user' nodes), community detection (e.g., clustering of 'user' nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of friendship relation between 'user' nodes).
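As a minimal sketch of such a downstream task (assuming the `node_embeddings` and `node_targets` arrays computed above, and scikit-learn installed), one could train a simple classifier on the embeddings to predict the node labels:
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# use the learned embeddings as feature vectors for node classification
X_train, X_test, y_train, y_test = train_test_split(
    node_embeddings, node_targets, train_size=0.75, random_state=42)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Node classification accuracy:", clf.score(X_test, y_test))
```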
##### Training and Tuning
The main reason for the previous notebook was to try several models as quickly as possible, look at their metrics, and see the impact of various changes. The main problem (so far) with the PyCaret version is that the deployed model is an object of that same library, so PyCaret must be installed in production, which is very inefficient and complicates things considerably. On the other hand, PyCaret does its hyperparameter tuning with RandomizedSearchCV, which is not bad, but it would be more optimal to do it in a Bayesian way. In that sense, this notebook will be used to retrain the model(s), save them, and later deploy them quickly and simply, with the priority of making the model as lightweight as possible.
```
import pandas as pd
import numpy as np
import warnings
import lightgbm as lgb
import xgboost as xgb
from sklearn.ensemble import RandomForestRegressor
from bayes_opt import BayesianOptimization
csv_path = (
"../data/train_encoded.csv",
"../data/test_encoded.csv"
)
train = pd.read_csv(csv_path[0]).drop(["latitud","longitud"], axis=1)
test = pd.read_csv(csv_path[1]).drop(["latitud","longitud"], axis=1)
```
##### For LightGBM.
Since we have already tuned it with PyCaret, the parameters are:
```
Untuned:
LGBMRegressor(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
importance_type='split', learning_rate=0.1, max_depth=-1,
min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
n_estimators=100, n_jobs=-1, num_leaves=31, objective=None,
random_state=104, reg_alpha=0.0, reg_lambda=0.0, silent=True,
subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
Tuned:
LGBMRegressor(bagging_fraction=1.0, bagging_freq=6, boosting_type='gbdt',
class_weight=None, colsample_bytree=1.0, feature_fraction=0.9,
importance_type='split', learning_rate=0.15, max_depth=-1,
min_child_samples=46, min_child_weight=0.001, min_split_gain=0,
n_estimators=150, n_jobs=-1, num_leaves=2, objective=None,
random_state=104, reg_alpha=0.7, reg_lambda=5, silent=True,
subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
```
```
import warnings
warnings.filterwarnings('ignore')
random_state = 104 # For benchmarking.
def bayes_parameter_opt_lgb(X, y, init_points=15, opt_round=25, n_folds=5, random_seed=6, n_estimators=10000, learning_rate=0.05, output_process=False):
def lgb_eval(num_leaves, bagging_fraction, lambda_l1, lambda_l2, min_split_gain):
"""
Defino los parametros que serán tuneados. Así como los parámetros fijos
"""
params = {'application':'regression','num_iterations':5000, 'learning_rate':0.05, 'early_stopping_round':100, 'metric':'rmse',
'feature_fraction':0.9,'n_estimators':200,'feature_fraction':0.9, 'max_depth':-1,'min_child_weight':0.001,'verbose':-1}
params["num_leaves"] = round(num_leaves)
params['bagging_fraction'] = max(min(bagging_fraction, 1), 0)
params['max_depth'] = -1
params['lambda_l1'] = max(lambda_l1, 0)
params['lambda_l2'] = max(lambda_l2, 0)
params['min_split_gain'] = min_split_gain
train_data = lgb.Dataset(data=X, label=y)
cv_result = lgb.cv(params, train_data, nfold=5, seed=random_state, verbose_eval =200, metrics=['mae'], shuffle=False,
stratified=False)
return -max(cv_result['l1-mean'])
    # Set the search range for each parameter
lgbm_optimization = BayesianOptimization(lgb_eval, {'num_leaves': (2, 25),
'bagging_fraction':(0.8,1),
'lambda_l1':(0.5,3),
'lambda_l2':(3,20),
'min_split_gain': (0.001, 0.1)
})
lgbm_optimization.maximize(init_points=init_points, n_iter=opt_round) #CHECK
if output_process == True:
lgbm_optimization.points_to_csv('lgbm_bayers_opt_result.csv')
return lgbm_optimization
X = train.select_dtypes(exclude='object').drop('Precio_m2_total',axis=1)
y = train['Precio_m2_total']
opt_params = bayes_parameter_opt_lgb(X=X,y=y, init_points= 30, opt_round=100)
# BayesianOptimization maximizes the objective (here the negative MAE),
# so the best run is the one with the highest target value.
best_run = max(opt_params.res, key=lambda res: res['target'])
best_run['params']
#Fit model
train_data = lgb.Dataset(X,y)
params = {'application':'regression','num_iterations':5000, 'learning_rate':0.05, 'metric':'rmse',
'feature_fraction':0.9,'n_estimators':200,'feature_fraction':0.9, 'max_depth':-1,'min_child_weight':0.001,'verbose':-1,
'bagging_fraction': 0.9164810602504456,'lambda_l1': 0.5005454948781294,'lambda_l2': 6.60276585681876,
'min_split_gain': 0.07385271072078259,'num_leaves': 3}
model = lgb.cv(params, train_data, nfold=5, seed=random_state, verbose_eval =200, metrics=['mae'], shuffle=False,
stratified=False)
#l1_error = Mae
X_test = test.select_dtypes(exclude='object').drop('Precio_m2_total',axis=1)
y_test = test['Precio_m2_total']
model = lgb.train(params, train_data)
preds = model.predict(X_test)
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
r2 = r2_score(y_test, preds)
mae = mean_absolute_error(y_test, preds)
mse = mean_squared_error(y_test, preds)
print('r2:{}\nmae:{}\nmse:{}'.format(r2, mae, mse))
```
Training the final model:
```
data_x = pd.concat([X,X_test])
data_y = pd.concat([y,y_test])
data = lgb.Dataset(data_x,data_y)
model_final = lgb.train(params, data)
import pickle
with open('../webapp/artifacts/models/lgbm_base.pkl','wb') as handle:
pickle.dump(model_final, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
#### Random Forest:
```
from sklearn.model_selection import cross_val_score
def rf_cv(min_impurity_decrease, min_samples_split, max_features,max_depth, data, target):
"""Random Forest Cross Validation
Esta funcion instanciará un regressor de Random Forest con los parámetros a optimizar:
min_samples_split, max_features, min_impurity_decrease.
"""
model = RandomForestRegressor(
n_estimators = 150,
min_impurity_decrease=min_impurity_decrease,
min_samples_split = min_samples_split,
max_features = max_features,
        max_depth = max_depth, # Don't forget to pass it as an integer.
random_state = 123,
n_jobs=-1
)
cross_val = cross_val_score(model, data, target,
scoring='neg_mean_absolute_error', cv=4)
return cross_val.mean()
def optimize_rf(data, target):
"""Aplicamos Optimización Bayesiana a los parámetros del Random Forest Regressor"""
def inside_rf_cv(min_impurity_decrease, min_samples_split, max_features, max_depth):
"""Wrapper of RandomForest cross validation.
Tenemos que evitar que los parametros que toman valores enteros no se repitan, además de tener que
restringir aquellos parámetros que van de 0 a 1.
"""
return rf_cv(
min_samples_split = int(min_samples_split),
min_impurity_decrease = max(min(min_impurity_decrease, 0.999), 1e-3),
max_features = max(min(max_features, 0.999), 1e-3),
max_depth = int(max_depth),
data = data,
target = target,
)
optimizer = BayesianOptimization(
f = inside_rf_cv,
pbounds={
"min_samples_split":(2,25),
"min_impurity_decrease":(0.1,0.999),
"max_features":(0.1, 0.999),
"max_depth":(5, 25),
},
random_state=123,
verbose=2
)
optimizer.maximize(init_points = 30, n_iter=100)
print("Resultado Final", optimizer.max)
return optimizer
X_train = train.select_dtypes(exclude='object').drop('Precio_m2_total',axis=1)
y_train = train['Precio_m2_total']
from bayes_opt.util import Colours
print(Colours.yellow("----Random Forest Regressor Optimizer----"))
optimize_rf(X_train, y_train)
from sklearn.metrics import r2_score, mean_absolute_error
rf_reg = RandomForestRegressor(n_estimators = 300, n_jobs = -1, max_depth = 15, max_features = 0.67, min_impurity_decrease=0.1, min_samples_split=6)
rf_reg.fit(X_train, y_train)
preds = rf_reg.predict(X_test)
r2_score(y_test, preds) #0.38?
final_model_rf = rf_reg.fit(pd.concat([X_train,X_test]), pd.concat([y_train, y_test]))
import pickle
with open('../webapp/artifacts/models/rf_base.pkl','wb') as handle:
pickle.dump(final_model_rf, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
# Walk through all streets in a city
Preparation of the examples for the challenge: find the shortest path through a set of streets.
```
import matplotlib.pyplot as plt
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
## Problem description
How do we find the shortest route that goes through every street in a given set of streets? This problem is known as the *Route inspection problem*.
## Data
[Seattle streets](https://data.seattle.gov/dataset/Street-Network-Database/afip-2mzr/data) from [data.seattle.gov](https://data.seattle.gov/)
### Read the data
```
import shapefile, os
if os.path.exists("Street_Network_Database/WGS84/Street_Network_Database.shp"):
rshp = shapefile.Reader("Street_Network_Database/WGS84/Street_Network_Database.shp")
shapes = rshp.shapes()
records = rshp.records()
else:
from pyensae.datasource import download_data
files = download_data("WGS84_seattle_street.zip")
rshp = shapefile.Reader("Street_Network_Database.shp")
shapes = rshp.shapes()
records = rshp.records()
shapes[0].__dict__
{k[0]:v for k,v in zip(rshp.fields[1:], records[0])}
from ensae_projects.datainc.data_geo_streets import get_fields_description
get_fields_description()
```
### Display the streets
```
streets5 = list(zip(records[:5], shapes[:5]))
streets5[2][1].points
import folium
from random import randint
from pyensae.notebookhelper import folium_html_map
c = streets5[0][1]
map_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=9)
for rec, shape in streets5:
d = {k[0]: v for k,v in zip(rshp.fields[1:], rec)}
map_osm.add_child(folium.Marker([shape.points[0][1], shape.points[0][0]], popup=str(d["ORD_STNAME"])))
map_osm.add_child(folium.PolyLine(locations=[[_[1], _[0]] for _ in shape.points], weight=10))
folium_html_map(map_osm, width="60%")
```
## Find connected streets
```
street0 = streets5[0][1].points
street0
def connect_streets(st1, st2):
a1, b1 = st1[0], st1[-1]
a2, b2 = st2[0], st2[-1]
connect = []
if a1 == a2:
connect.append((0, 0))
if a1 == b2:
connect.append((0, 1))
if b1 == a2:
connect.append((1, 0))
if b1 == b2:
connect.append((1, 1))
return tuple(connect) if connect else None
neighbours = []
for i, street in enumerate(shapes):
points = street.points
con = connect_streets(street0, points)
if con:
neighbours.append(i)
neighbours
import folium
from pyensae.notebookhelper import folium_html_map
c = shapes[neighbours[0]]
map_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=15)
points = set()
for index in neighbours:
rec, shape = records[index], shapes[index]
corners = [(_[1], _[0]) for _ in shape.points]
map_osm.add_child(folium.PolyLine(locations=corners, weight=10))
points |= set([corners[0], corners[-1]])
for x, y in points:
map_osm.add_child(folium.Marker((x, y), popup=str(index)))
folium_html_map(map_osm, width="50%")
c = shapes[neighbours[0]]
map_osm = folium.Map(location=[c.bbox[1], c.bbox[0]], zoom_start=15)
points = set()
for index in neighbours:
rec, shape = records[index], shapes[index]
corners = [(_[1], _[0]) for _ in shape.points]
map_osm.add_child(folium.PolyLine(locations=corners, weight=10))
points |= set([corners[0], corners[-1]])
for x, y in points:
map_osm.add_child(folium.CircleMarker((x, y), popup=str(index), radius=8, fill_color="yellow"))
folium_html_map(map_osm, width="50%")
```
## Extraction of all streets in a short perimeter
```
from shapely.geometry import Point, LineString
def enumerate_close(x, y, shapes, th=None):
p = Point(x,y)
for i, shape in enumerate(shapes):
obj = LineString(shape.points)
d = p.distance(obj)
if th is None or d <= th:
yield d, i
x, y = shapes[0].points[0]
closes = list(enumerate_close(x, y, shapes))
closes.sort()
closes[:10]
import folium
from ensae_projects.datainc.data_geo_streets import folium_html_street_map
folium_html_street_map([_[1] for _ in closes[:20]], shapes, html_width="50%", zoom_start=15)
def complete_subset_streets(subset, shapes):
extension = []
for i, shape in enumerate(shapes):
add = []
for s in subset:
to = shapes[s]
if s != i:
con = connect_streets(shapes[s].points, shapes[i].points)
if con is not None:
add.extend([_[1] for _ in con])
if len(set(add)) == 2:
extension.append(i)
return extension
subset = [index for dist, index in closes[:20]]
newset = set(subset + complete_subset_streets(subset, shapes))
print(list(sorted(newset)))
folium_html_street_map(newset, shapes, html_width="50%", zoom_start=15)
from ensae_projects.datainc.data_geo_streets import build_streets_vertices
vertices, edges = build_streets_vertices(newset, shapes)
vertices[:3], edges[:3]
from ensae_projects.datainc.data_geo_streets import plot_streets_network
plot_streets_network(newset, edges, vertices, shapes, figsize=(10,10));
```
### Data Visualization
#### `matplotlib` - from the documentation:
https://matplotlib.org/3.1.1/tutorials/introductory/pyplot.html
`matplotlib.pyplot` is a collection of command style functions <br>
Each pyplot function makes some change to a figure <br>
`matplotlib.pyplot` preserves state across function calls
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd  # needed below to load the iris dataset
```
Call signatures::
```
plot([x], y, [fmt], data=None, **kwargs)
plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
```
Quick plot
The main usage of `plt` is the `plot()` and `show()` functions
https://matplotlib.org/3.1.1/api/pyplot_summary.html <br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.plot.html <br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.show.html <br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html<br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.figure.html<br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.subplot.html<br>
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.annotate.html<br>
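For instance, the simplest possible quick plot (when only a list of y values is given, `plot()` uses the indices 0..N-1 as x values):
```
plt.plot([1, 4, 9, 16])
plt.ylabel('some numbers')
plt.show()
```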
```
df_iris = pd.read_csv('https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv')
df_iris.head()
colors = {'setosa':'red', 'versicolor':'orange', 'virginica':'blue'}
def get_col(spec):
return colors[spec]
colors_col = df_iris.species.apply(get_col)
plt.scatter("petal_length","petal_width", data=df_iris, c = colors_col, s = 7, marker = "o")
legend_elements = [plt.Line2D([0], [0], marker='o', linestyle="", color=colors["setosa"], label="setosa"),
plt.Line2D([0], [0], marker='o', linestyle="", color=colors["versicolor"], label="versicolor"),
plt.Line2D([0], [0], marker='o', linestyle="", color=colors["virginica"], label="virginica")]
plt.legend(handles=legend_elements,loc="upper left", title="Species")
plt.show()
```
https://python-graph-gallery.com/matplotlib/
#### Using pandas `.plot()`
```
df_iris.groupby("species").mean().plot(kind='bar')
plt.show()
df_iris.plot(x= "petal_length", y = "petal_width" ,kind = "scatter", color = colors_col)
plt.savefig('output1.png')
```
https://github.com/pandas-dev/pandas/blob/v0.25.0/pandas/plotting/_core.py#L504-L1533
https://python-graph-gallery.com/wp-content/uploads/Matplotlib_cheatsheet_datacamp.png
<img src = "https://python-graph-gallery.com/wp-content/uploads/Matplotlib_cheatsheet_datacamp.png" width = "1000"/>
### `seaborn` - dataset-oriented plotting
Seaborn is a library that specializes in making *prettier* `matplotlib` plots of statistical data. <br>
It is built on top of matplotlib and closely integrated with pandas data structures.
https://seaborn.pydata.org/introduction.html<br>
https://python-graph-gallery.com/seaborn/
```
import seaborn as sns
```
`seaborn` lets users *style* their plotting environment.<br>
There are 5 preset themes: darkgrid (default), whitegrid, dark, white, and ticks.<br>
https://seaborn.pydata.org/tutorial/aesthetics.html
However, you can always use `matplotlib`'s `plt.style`
https://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html
```
sns.set(style='whitegrid')
#dir(sns)
sns.scatterplot(x='petal_length',y='petal_width',data=df_iris)
plt.show()
with plt.style.context(('ggplot')):
sns.scatterplot(x='petal_length',y='petal_width',data=df_iris)
plt.show()
sns.scatterplot(x='petal_length',y='petal_width', hue = "species",data=df_iris)
plt.show()
```
#### Violin plot
Fancier box plot that gets rid of the need for 'jitter' to show the inherent distribution of the data points
```
sns.set(style="dark")
fig, axes = plt.subplots(figsize=(7, 3))
sns.violinplot(data=df_iris, ax=axes)
axes.set_ylabel('value')
axes.set_xlabel('feature')
plt.show()
```
#### Distplot
```
sns.set(style='dark', palette='muted')
# 1 column, 4 rows
f, axes = plt.subplots(4,1, figsize=(10,10), sharex=True)
# Regular distplot
sns.distplot(df_iris.petal_length, ax=axes[0])
# Change the color
sns.distplot(df_iris.petal_width, kde=False, ax=axes[1], color='orange')
# Show the Kernel density estimate
sns.distplot(df_iris.sepal_width, hist=False, kde_kws={'shade':True}, ax=axes[2], color='purple')
# Show the rug
sns.distplot(df_iris.sepal_length, hist=False, rug=True, ax=axes[3], color='green')
plt.show()
```
#### FacetGrid
```
sns.set()
columns = ['species', 'petal_length', 'petal_width']
facet_column = 'species'
g = sns.FacetGrid(df_iris.loc[:,columns], col=facet_column, hue=facet_column)
g.map(plt.scatter, 'petal_length', 'petal_width')
sns.relplot(x="petal_length", y="petal_width", col="species",
hue="species", style="species", size="sepal_width",
data=df_iris)
plt.show()
```
https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
```
sns.catplot(x="species", y="petal_length", data=df_iris)
plt.show()
sns.catplot(kind="box", data=df_iris)
plt.show()
# https://seaborn.pydata.org/tutorial/categorical.html
tips = sns.load_dataset("tips")
print(tips.head())
sns.catplot(x="day", y="total_bill", hue="smoker", kind="box", data=tips)
plt.show()
```
Plot the tips by day with two side by side box plots for males and females and different subplots for the time of the meal (lunch/dinner).
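One possible sketch for this exercise (not the only solution); it reloads `tips` so the cell can run on its own:
```
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
# boxes split by sex, one subplot per meal time
sns.catplot(x="day", y="tip", hue="sex", col="time", kind="box", data=tips)
plt.show()
```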
```
# help(sns.catplot)
sns.pairplot(df_iris, hue='species', height=2.5);
```
https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Seaborn_Cheat_Sheet.pdf
<img src = "https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Seaborn_Cheat_Sheet.pdf" width = "1000"/>
### `plotnine` - R ggplot2 in python
plotnine is an implementation of a grammar of graphics in Python, based on ggplot2. The grammar allows users to compose plots by explicitly mapping data to the visual objects that make up the plot.
Plotting with a grammar is powerful: it makes custom (and otherwise complex) plots easy to think about and then create, while simple plots remain simple.
```
#!pip install plotnine
```
https://plotnine.readthedocs.io/en/stable/
```
from plotnine import *
```
https://plotnine.readthedocs.io/en/stable/api.html
```
p = ggplot(data=df_iris) + aes(x="petal_length", y = "petal_width") + geom_point()
# add transparency - to address overlapping points
ggplot(data=df_iris) + aes(x="petal_length", y = "petal_width") + geom_point(size = 5, alpha=0.5)
# change point size
ggplot(data=df_iris) + aes(x="petal_length", y = "petal_width") + geom_point(size = 0.7, alpha=0.7)
# more parameters
ggplot(data=df_iris) + aes(x="petal_length", y = "petal_width") + geom_point() + scale_x_log10() + xlab("Petal Length")
n = "3"
features = "length and width"
title = f'species : {n}; petal : {features}'
#title = 'species : {}; petal : {}'.format(n,features)
ggplot(data=df_iris) +aes(x='petal_length',y='petal_width',color="species") \
+ geom_point(size=0.7) + facet_wrap('~species',nrow=3) \
+ theme(figure_size=(7,9)) + ggtitle(title)
p = ggplot(data=df_iris) + aes(x='petal_length') \
+ geom_histogram(binwidth=1,color='black',fill='grey')
p
ggsave(plot=p, filename='hist_plot_with_plotnine.png')
tips = sns.load_dataset("tips")
print(tips.head())
ggplot(aes(x="day", y="tip",\
color="smoker"), data=tips) \
+ geom_boxplot()\
+ geom_jitter(width=0.05, alpha=0.4) \
+ facet_grid(".~smoker")
```
http://cmdlinetips.com/2018/05/plotnine-a-python-library-to-use-ggplot2-in-python/ <br>
https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf
<img src = "https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf" width = "1000"/>
Use ggplot to plot the sepal_length in boxplots separated by species, add new axes labels and make the y axis values log10.
* Write a function that takes as a parameter a row of the dataframe and returns
  * the petal_length if the species is setosa
  * the petal_width if the species is versicolor
  * the sepal_length if the species is virginica

Apply this function to every row in the dataset. <br>
Use ggplot to make a histogram of the resulting values. (One possible solution sketch for both exercises follows.)
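The sketch below assumes `df_iris` from earlier; the intermediate name `picked` is arbitrary:
```
from plotnine import *
import pandas as pd

# exercise 1: boxplots of sepal_length per species on a log10 y axis
(ggplot(df_iris) + aes(x='species', y='sepal_length')
 + geom_boxplot() + scale_y_log10()
 + xlab('Species') + ylab('Sepal length (log10 scale)'))

# exercise 2: pick a different column depending on the species, then plot a histogram
def pick_value(row):
    if row['species'] == 'setosa':
        return row['petal_length']
    if row['species'] == 'versicolor':
        return row['petal_width']
    return row['sepal_length']

picked = df_iris.apply(pick_value, axis=1)
(ggplot(pd.DataFrame({'picked': picked})) + aes(x='picked')
 + geom_histogram(binwidth=0.5, color='black', fill='grey'))
```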
```
#dir()
```
https://plotnine.readthedocs.io/en/stable/api.html
Look for scale functions.
More resources:
https://github.com/swyder/plotnine_tutorial/blob/master/plotnine_demo_sw.ipynb <br>
https://datacarpentry.org/python-ecology-lesson/07-visualization-ggplot-python/
<h1>Phi K Correlation</h1>
Phi K correlation is a newly emerging correlation coefficient with the following advantages:
- it works consistently across categorical, ordinal and interval variables
- it can capture non-linear dependency
- it reverts to the Pearson correlation coefficient in the case of a bivariate normal input distribution

A minimal synthetic illustration of the first two points is sketched below, before we apply it to the housing data.
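All column names and values in this sketch are made up:
```
import numpy as np
import pandas as pd
import phik  # registers the .phik_matrix() accessor on DataFrames

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "x": rng.normal(size=500),                         # interval
    "grade": rng.integers(1, 6, size=500),             # ordinal
    "colour": rng.choice(["red", "green"], size=500),  # categorical
})
toy["y"] = toy["x"] ** 2 + rng.normal(scale=0.1, size=500)  # non-linear in x

# phik picks up the non-linear x-y dependency that Pearson would miss
print(toy.phik_matrix(interval_cols=["x", "y"]))
```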
```
import phik
from phik import resources
from phik.binning import bin_data
from phik.decorators import *
from phik.report import plot_correlation_matrix
import pandas as pd
import numpy as np
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, StandardScaler, MinMaxScaler
import seaborn as sns
import matplotlib.pyplot as plt
import networkx as nx
#loading the SalePrice dataset
df=pd.read_csv('dataset.csv')
df.drop(['Id'], axis=1,inplace=True)
```
**Preprocessing**
```
#Preprocessing the data
class PreProcessor:
def __init__(self):
#treating certain categorical columns as ordinal
self.encoder={}
self.encoder['LotShape']={'Reg':0,'IR1':1,'IR2':2,'IR3':3}
self.encoder['LandSlope']={'Gtl':1, 'Mod':2, 'Sev':3}
self.encoder['GarageFinish']={'Fin':3, 'RFn':2, 'Unf':1, 'VNA':0}
self.encoder['BsmtExposure']={'Gd':4,'Av':3,'Mn':2,'No':1,'VNA':0}
self.encoder['Functional']={'Typ':0,'Min1':1,'Min2':2,'Mod':3,'Maj1':4,'Maj2':5,'Sev':6,'Sal':7}
self.encoder['PavedDrive']={'Y':2,'P':1,'N':0}
#columns with values as Ex,Gd,TA,Fa,Po,VNA can be treated as ordinal
ratings={'Ex':5,'Gd':4,'TA':3,'Fa':2,'Po':1,'VNA':0}
rated_cols=['ExterQual', 'ExterCond','BsmtQual','BsmtCond','KitchenQual','FireplaceQu','GarageQual', 'GarageCond']
for col in rated_cols:
self.encoder[col]=ratings
self.categorical_encoded=self.encoder.keys()
def preprocessing1(self,df):
#drop columns with mostly one value or mostly missing values
dropped_cols=['Street', 'Alley', 'Utilities', 'Condition2', 'RoofMatl', 'Heating','LowQualFinSF', '3SsnPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature', 'MiscVal']
df.drop(dropped_cols, axis=1, inplace=True)
#treating missing values
#Filling missing values with median
col1=['LotFrontage','MasVnrArea']
for col in col1:
df[col].fillna(df[col].median(), inplace=True)
#Fill missing values with new category "VNA"
col2=['BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','GarageType','GarageFinish','GarageQual','GarageCond','FireplaceQu','MasVnrType', 'Electrical']
for col in col2:
df[col].fillna('VNA', inplace=True)
#Replacing Na values in GarageYrBlt with corresponding values in YearBuilt
df.loc[(pd.isnull(df.GarageYrBlt)), 'GarageYrBlt'] = df.YearBuilt
#encoding categorical columns to ordinal
for col in self.categorical_encoded:
df[col]=df[col].apply(lambda val: self.encoder[col][val])
        # apply label encoder
for col in df.select_dtypes(include=['object']).columns:
df[col] = LabelEncoder().fit_transform(df[col])
return df
def preprocessing2(self,df):
df=self.preprocessing1(df)
#filtered columns
numerical_filtered=['YearBuilt','TotRmsAbvGrd','GrLivArea','1stFlrSF','GarageYrBlt','YearRemodAdd','GarageArea','SalePrice']
ordinal_filtered=['GarageCars','OverallQual','Fireplaces','GarageFinish','BsmtFullBath','KitchenQual','FullBath','FireplaceQu','BsmtQual','TotalBsmtSF']
categorical_filtered=['MSZoning', 'Neighborhood', 'Foundation', 'BsmtFinType1', 'HeatingQC', 'CentralAir', 'GarageType', 'SaleCondition', 'MSSubClass', 'MasVnrType']
return df[numerical_filtered+ordinal_filtered+categorical_filtered], numerical_filtered
#create pre processor object
pre_processor=PreProcessor()
#preprocess the data and get interval column
preprocessed_df, interval_cols=pre_processor.preprocessing2(df)
```
**PhiK correlation**
```
# get the phi_k correlation matrix between all variables
coerr_mat=preprocessed_df.phik_matrix(interval_cols=interval_cols)
#colour map
cmap = sns.diverging_palette(220, 10, as_cmap=True)
#plotting phik correlation
plot_correlation_matrix(coerr_mat.values, x_labels=coerr_mat.columns, y_labels=coerr_mat.index,
vmin=0, vmax=1, color_map=cmap, title=r'correlation $\phi_K$', fontsize_factor=1,
figsize=(7*3,5.5*3))
plt.tight_layout()
plt.show()
```
**Finding highly correlated features based on the above heat map and visualizing them as a graph**
```
class GraphVisualization:
def __init__(self):
# visual is a list which stores all
# the set of edges that constitutes a
# graph
self.visual = []
# addEdge function inputs the vertices of an
# edge and appends it to the visual list
def addEdge(self, a, b):
temp = [a, b]
self.visual.append(temp)
# In visualize function G is an object of
# class Graph given by networkx G.add_edges_from(visual)
# creates a graph with a given list
# nx.draw_networkx(G) - plots the graph
# plt.show() - displays the graph
    def visualize(self):
        G = nx.Graph()
        G.add_edges_from(self.visual)
        # set the style and figure size before drawing so they apply to this plot
        plt.style.use('ggplot')
        plt.figure(figsize=(8, 5))
        nx.draw_shell(G, alpha=0.7, with_labels=True, edge_color='.4', cmap=cmap, font_size=12)
        plt.title("correlation visualization as graph")
        plt.show()
G = GraphVisualization()
for col1 in preprocessed_df.columns:
for col2 in preprocessed_df.columns:
if col1!=col2:
#if the correlation is greater than 0.9, add an edge to the graph
if coerr_mat[col1][col2]>0.9:
G.addEdge(col1,col2)
G.visualize()
```
Based on the graph plot of the PhiK correlations, the following features are highly correlated:
- GarageArea and GarageCars
- GarageYrBlt and YearBuilt
- Neighborhood and MSZoning
- TotalBsmtSF is highly correlated with 1stFlrSF, SalePrice, BsmtQual, GrLivArea and Neighborhood
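If the goal were to reduce redundancy among the features, one rough way to act on these pairs is sketched below (the 0.9 threshold and the choice of which column of a pair to drop are arbitrary):
```
# drop one column from each highly correlated pair, never dropping the target
to_drop = set()
for col1 in coerr_mat.columns:
    for col2 in coerr_mat.columns:
        if col1 < col2 and coerr_mat.loc[col1, col2] > 0.9:
            to_drop.add(col2)  # arbitrarily keep col1, drop col2
to_drop.discard("SalePrice")
reduced_df = preprocessed_df.drop(columns=sorted(to_drop))
print(sorted(to_drop))
```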
**Global PhiK Correlations**
This metric signifies how much a column is correlated with all other columns in the dataset
```
# get global correlations based on phi_k correlation matrix
global_coerr=preprocessed_df.global_phik(interval_cols=interval_cols)
#plotting global phik correlation
plot_correlation_matrix(global_coerr[0], x_labels=["correlation"], y_labels=global_coerr[1],vmin=0, vmax=1, color_map=cmap, title=r'global correlation $\phi_K$', fontsize_factor=1,figsize=(7*3,5.5*3))
```
# Amazon SageMaker - Debugging with custom rules
[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is managed platform to build, train and host maching learning models. Amazon SageMaker Debugger is a new feature which offers the capability to debug machine learning models during training by identifying and detecting problems with the models in near real-time.
In this notebook, we'll show you how to use a custom rule to monitor your training job, all through a tf.keras ResNet example.
## How does Amazon SageMaker Debugger work?
Amazon SageMaker Debugger lets you go beyond just looking at scalars like losses and accuracies during training and gives you full visibility into all tensors 'flowing through the graph' during training. Furthermore, it helps you monitor your training in near real-time using rules and provides you alerts, once it has detected inconsistency in training flow.
### Concepts
* **Tensors**: These represent the state of the training network at intermediate points during its execution
* **Debug Hook**: Hook is the construct with which Amazon SageMaker Debugger looks into the training process and captures the tensors requested at the desired step intervals
* **Rule**: A logical construct, implemented as Python code, which helps analyze the tensors captured by the hook and report anomalies, if any
With these concepts in mind, let's understand the overall flow of things that Amazon SageMaker Debugger uses to orchestrate debugging.
### Saving tensors during training
The tensors captured by the debug hook are stored in the S3 location specified by you. There are two ways you can configure Amazon SageMaker Debugger to save tensors:
#### With no changes to your training script
If you use one of the Amazon SageMaker provided [Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) for 1.15, then you don't need to make any changes to your training script for the tensors to be stored. Amazon SageMaker Debugger will use the configuration you provide through the Amazon SageMaker SDK's Tensorflow `Estimator` when creating your job to save the tensors in the fashion you specify. You can review the script we are going to use at [src/tf_keras_resnet_zerocodechange.py](src/tf_keras_resnet_zerocodechange.py). You will note that this is an untouched TensorFlow Keras script which uses the `tf.keras` interface. Please note that Amazon SageMaker Debugger only supports `tf.keras`, `tf.estimator` and `tf.MonitoredSession` interfaces for the zero script change experience. Full description of support is available at [Amazon SageMaker Debugger with TensorFlow](https://github.com/awslabs/sagemaker-debugger/tree/master/docs/tensorflow.md)
#### Orchestrating your script to store tensors
For other containers, you need to make a couple of lines of changes to your training script. Amazon SageMaker Debugger exposes a library `smdebug` which allows you to capture these tensors and save them for analysis. It's highly customizable and allows you to save the specific tensors you want at different frequencies and possibly with other configurations. Refer to the [Developer Guide](https://github.com/awslabs/sagemaker-debugger/tree/master/docs) for details on how to use the Amazon SageMaker Debugger library with your choice of framework in your training script. Here we have an example script orchestrated at [src/tf_keras_resnet_byoc.py](src/tf_keras_resnet_byoc.py). In addition to this, you will need to ensure that your container has the `smdebug` library installed in this case, and specify your container image URI when creating the SageMaker Estimator below. Please refer to the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/sagemaker.tensorflow.html) on how to do that.
### Analysis of tensors
Amazon SageMaker Debugger can be configured to run debugging ***Rules*** on the tensors saved from the training job. At a very broad level, a rule is Python code used to detect certain conditions during training. Some of the conditions that a data scientist training an algorithm may care about are monitoring for gradients getting too large or too small, detecting overfitting, and so on. Amazon SageMaker Debugger comes pre-packaged with certain built-in rules. Users can write their own rules using the APIs provided by Amazon SageMaker Debugger through the `smdebug` library. You can also analyze raw tensor data outside of the Rules construct in say, a SageMaker notebook, using these APIs. Please refer [Analysis Developer Guide](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md) for more on these APIs.
## Training TensorFlow Keras models with Amazon SageMaker Debugger
### Amazon SageMaker TensorFlow as a framework
Train a TensorFlow Keras model in this notebook with Amazon SageMaker Debugger enabled and monitor the training jobs with rules. This is done using the Amazon SageMaker [TensorFlow 1.15.0](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) container as a framework.
```
import boto3
import os
import sagemaker
from sagemaker.tensorflow import TensorFlow
```
Import the libraries needed for the demo of Amazon SageMaker Debugger.
```
from sagemaker.debugger import Rule, DebuggerHookConfig, TensorBoardOutputConfig, CollectionConfig
import smdebug_rulesconfig as rule_configs
```
Now define the entry point for the training script
```
# define the entrypoint script
entrypoint_script='src/tf_keras_resnet_zerocodechange.py'
```
### Setting up the Estimator
Now it's time to set up our SageMaker TensorFlow Estimator. There are new parameters on the estimator that enable your training job for debugging through Amazon SageMaker Debugger. These new parameters are explained below.
* **debugger_hook_config**: This new parameter accepts a local path where you wish your tensors to be written to and also accepts the S3 URI where you wish your tensors to be uploaded to. It also accepts CollectionConfigurations which specify which tensors will be saved from the training job.
* **rules**: This new parameter will accept a list of rules you wish to evaluate against the tensors output by this training job. For rules,
Amazon SageMaker Debugger supports two types of rules
* **Amazon SageMaker Rules**: These are rules curated by the Amazon SageMaker team and you can choose to evaluate them against your training job.
  * **Custom Rules**: You can optionally choose to write your own rule as a Python source file and have it evaluated against your training job. For SageMaker Debugger to evaluate this rule, you have to provide the S3 location of the rule source and the evaluator image.
#### Creating your own custom rule
Let us look at how you can create your custom rule briefly before proceeding to use it with your training job. Please see the [documentation](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md) to learn more about structuring your rules and other related concepts.
##### **Summary of what the custom rule evaluates**
For demonstration purposes, below is a rule that tries to track whether gradients are getting too large. The custom rule looks at the tensors in the collection "gradients" saved during training and attempts to get the absolute value of the gradients at each step of training. If the mean of the absolute values of the gradients in any step is greater than a specified threshold, the rule is marked as 'triggering'. Let us look at how to structure the rule source.
Any custom rule logic you want to be evaluated should extend the `Rule` interface provided by Amazon SageMaker Debugger
```python
from smdebug.rules.rule import Rule
class CustomGradientRule(Rule):
```
Now implement the class methods for the rule. Doing this allows Amazon SageMaker to understand the intent of the rule and evaluate it against your training tensors.
##### Rule class constructor
In order for Amazon SageMaker to instantiate your rule, your rule class constructor must conform to the following signature.
```python
def __init__(self, base_trial, other_trials, <other parameters>)
```
###### Arguments
- `base_trial (Trial)`: This defines the primary [Trial](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md#trial) that your rule is anchored to. This is an object of class type `Trial`.
- `other_trials (list[Trial])`: *(Optional)* This defines a list of 'other' trials you want your rule to look at. This is useful in the scenarios when you're comparing tensors from the base_trial to tensors from some other trials.
- `<other parameters>`: This is similar to `**kwargs`: you can pass in as many string parameters as you need in your constructor signature. Note that SageMaker is only able to supply string values for these parameters at runtime (see how, later); a concrete constructor example for our rule follows.
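For example, the constructor of the gradient rule sketched in this notebook could look like the following; the `threshold` value arrives as a string through `rule_parameters` (shown later), so we cast it to a float:
```python
def __init__(self, base_trial, threshold=10.0):
    super().__init__(base_trial)
    self.threshold = float(threshold)
```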
##### Defining the rule logic to be invoked at each step:
This defines the logic to be invoked at each step. Essentially, this is where you decide whether the rule should trigger or not. In this case, you're concerned about the gradients getting too large, so you get the [tensor reduction](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md#reduction_value) "mean" for each step and check whether its value is larger than a threshold.
```python
def invoke_at_step(self, step):
for tname in self.base_trial.tensor_names(collection="gradients"):
t = self.base_trial.tensor(tname)
abs_mean = t.reduction_value(step, "mean", abs=True)
if abs_mean > self.threshold:
return True
return False
```
#### Using your custom rule with SageMaker Estimator
Below we create the rule configuration using the `Rule.custom` method, and then pass it to the SageMaker TensorFlow estimator to kick off the job. Note that you need to pass the rule evaluator container image for custom rules. Please refer to the AWS SageMaker documentation to find the image URI for your region. We will soon have this automatically taken care of by the SageMaker SDK. You can also provide your own image; please refer to [this repository](https://github.com/awslabs/sagemaker-debugger-rules-container) for instructions on how to build such a container.
```
custom_rule = Rule.custom(
name='MyCustomRule', # used to identify the rule
# rule evaluator container image
image_uri='759209512951.dkr.ecr.us-west-2.amazonaws.com/sagemaker-debugger-rule-evaluator:latest',
instance_type='ml.t3.medium', # instance type to run the rule evaluation on
source='rules/my_custom_rule.py', # path to the rule source file
rule_to_invoke='CustomGradientRule', # name of the class to invoke in the rule source file
volume_size_in_gb=30, # EBS volume size required to be attached to the rule evaluation instance
collections_to_save=[CollectionConfig("gradients")],
# collections to be analyzed by the rule. since this is a first party collection we fetch it as above
rule_parameters={
"threshold": "20.0" # this will be used to intialize 'threshold' param in your constructor
}
)
```
Before we proceed and create our training job, let us take a closer look at the parameters used to create the Rule configuration above:
* `name`: This is used to identify this particular rule among the suite of rules you specified to be evaluated.
* `image_uri`: This is the image of the container that has the logic of understanding your custom rule sources and evaluating them against the collections you save in the training job. You can get the list of open sourced SageMaker rule evaluator images [here]()
* `instance_type`: The type of the instance you want to run the rule evaluation on
* `source`: This is the local path or the Amazon S3 URI of your rule source file.
* `rule_to_invoke`: This specifies the particular Rule class implementation in your source file which you want to be evaluated. SageMaker supports only 1 rule to be evaluated at a time in a rule job. Your source file can have multiple Rule class implementations, though.
* `collections_to_save`: This specifies which collections are necessary to be saved for this rule to run. Note that providing this collection does not necessarily mean the rule will actually use these collections. You might want to take such parameters for the rule through the next argument `rule_parameters`.
* `rule_parameters`: This provides the runtime values of the parameter in your constructor. You can still choose to pass in other values which may be necessary for your rule to be evaluated. Any value in this map is available as an environment variable and can be accessed by your rule script using `$<rule_parameter_key>`
You can read more about custom rule evaluation in Amazon SageMaker in this [documentation](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/analysis.md)
Let us now create the estimator and call `fit()` on our estimator to start the training job and rule evaluation job in parallel.
```
estimator = TensorFlow(
role=sagemaker.get_execution_role(),
base_job_name='smdebug-custom-rule-demo-tf-keras',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
entry_point=entrypoint_script,
framework_version='1.15',
py_version='py3',
train_max_run=3600,
script_mode=True,
## New parameter
rules = [custom_rule]
)
# After calling fit, Amazon SageMaker starts one training job and one rule job for you.
# The rule evaluation status is visible in the training logs
# at regular intervals
estimator.fit(wait=False)
```
## Result
As a result of calling the `fit(wait=False)`, two jobs were kicked off in the background. Amazon SageMaker Debugger kicked off a rule evaluation job for our custom gradient logic in parallel with the training job. You can review the status of the above rule job as follows.
```
import time
status = estimator.latest_training_job.rule_job_summary()
while status[0]['RuleEvaluationStatus'] == 'InProgress':
status = estimator.latest_training_job.rule_job_summary()
print(status)
time.sleep(10)
```
Once the rule job starts and you see the RuleEvaluationJobArn above, we can see the logs for the rule job in CloudWatch. To do that, we'll use this utility function to get a link to the rule job logs.
```
def _get_rule_job_name(training_job_name, rule_configuration_name, rule_job_arn):
"""Helper function to get the rule job name with correct casing"""
return "{}-{}-{}".format(
training_job_name[:26], rule_configuration_name[:26], rule_job_arn[-8:]
)
def _get_cw_url_for_rule_job(rule_job_name, region):
return "https://{}.console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix".format(region, region, rule_job_name)
def get_rule_jobs_cw_urls(estimator):
training_job = estimator.latest_training_job
training_job_name = training_job.describe()["TrainingJobName"]
rule_eval_statuses = training_job.describe()["DebugRuleEvaluationStatuses"]
result={}
for status in rule_eval_statuses:
if status.get("RuleEvaluationJobArn", None) is not None:
rule_job_name = _get_rule_job_name(training_job_name, status["RuleConfigurationName"], status["RuleEvaluationJobArn"])
result[status["RuleConfigurationName"]] = _get_cw_url_for_rule_job(rule_job_name, boto3.Session().region_name)
return result
get_rule_jobs_cw_urls(estimator)
```
# Bayesian Hierarchical Linear Regression
Author: [Carlos Souza](mailto:[email protected])
Probabilistic Machine Learning models can not only make predictions about future data, but also **model uncertainty**. In areas such as **personalized medicine**, there might be a large amount of data, but there is still a relatively **small amount of data for each patient**. To customize predictions for each person it becomes necessary to **build a model for each person** — with its inherent **uncertainties** — and to couple these models together in a **hierarchy** so that information can be borrowed from other **similar people** [1].
The purpose of this tutorial is to demonstrate how to **implement a Bayesian Hierarchical Linear Regression model using NumPyro**. To motivate the tutorial, I will use [OSIC Pulmonary Fibrosis Progression](https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression) competition, hosted at Kaggle.
## 1. Understanding the task
Pulmonary fibrosis is a disorder with no known cause and no known cure, created by scarring of the lungs. In this competition, we were asked to predict a patient’s severity of decline in lung function. Lung function is assessed based on output from a spirometer, which measures the forced vital capacity (FVC), i.e. the volume of air exhaled.
In medical applications, it is useful to **evaluate a model's confidence in its decisions**. Accordingly, the metric used to rank the teams was designed to reflect **both the accuracy and certainty of each prediction**. It's a modified version of the Laplace Log Likelihood (more details on that later).
Let's explore the data and see what's that all about:
```
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro arviz
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
train = pd.read_csv(
"https://gist.githubusercontent.com/ucals/"
"2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/"
"43034c39052dcf97d4b894d2ec1bc3f90f3623d9/"
"osic_pulmonary_fibrosis.csv"
)
train.head()
```
In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models especially when gauging uncertainty is an important part of the problem.
Since this is real medical data, the relative timing of FVC measurements varies widely, as shown in the 3 sample patients below:
```
def chart(patient_id, ax):
data = train[train["Patient"] == patient_id]
x = data["Weeks"]
y = data["FVC"]
ax.set_title(patient_id)
ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={"color": "red"})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart("ID00007637202177411956430", axes[0])
chart("ID00009637202177434476278", axes[1])
chart("ID00010637202177584971671", axes[2])
```
On average, each of the 176 provided patients made 9 visits, when FVC was measured. The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, they are very different from patient to patient.
We were asked to predict every patient's FVC measurement for every possible week in the [-12, 133] interval, and the confidence for each prediction. In other words, we were asked to fill a matrix like the one below, and provide a confidence score for each prediction:
<img src="https://i.ibb.co/0Z9kW8H/matrix-completion.jpg" alt="drawing" width="600"/>
The task was perfect for applying Bayesian inference. However, the vast majority of solutions shared by the Kaggle community used discriminative machine learning models, disregarding the fact that most discriminative methods are very poor at providing realistic uncertainty estimates. Because they are typically trained in a manner that optimizes the parameters to minimize some loss criterion (e.g. the predictive error), they do not, in general, encode any uncertainty in either their parameters or the subsequent predictions. Though many methods can produce uncertainty estimates either as a by-product or from a post-processing step, these are typically heuristic based, rather than stemming naturally from a statistically principled estimate of the target uncertainty distribution [2].
## 2. Modelling: Bayesian Hierarchical Linear Regression with Partial Pooling
The simplest possible linear regression, not hierarchical, would assume all FVC decline curves have the same $\alpha$ and $\beta$. That's the **pooled model**. In the other extreme, we could assume a model where each patient has a personalized FVC decline curve, and **these curves are completely unrelated**. That's the **unpooled model**, where each patient has completely separate regressions.
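For comparison only, here is a minimal sketch of what the fully pooled model could look like in NumPyro (it uses the same uninformative priors as below and is not used in the rest of this tutorial):
```
import numpyro
import numpyro.distributions as dist

def pooled_model(Weeks, FVC_obs=None):
    # a single α, β and σ shared by every patient
    α = numpyro.sample("α", dist.Normal(0.0, 100.0))
    β = numpyro.sample("β", dist.Normal(0.0, 100.0))
    σ = numpyro.sample("σ", dist.HalfNormal(100.0))
    FVC_est = α + β * Weeks
    with numpyro.plate("data", len(Weeks)):
        numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
```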
Here, I'll use the middle ground: **Partial pooling**. Specifically, I'll assume that while $\alpha$'s and $\beta$'s are different for each patient as in the unpooled case, **the coefficients all share similarity**. We can model this by assuming that each individual coefficient comes from a common group distribution. The image below represents this model graphically:
<img src="https://i.ibb.co/H7NgBfR/Artboard-2-2x-100.jpg" alt="drawing" width="600"/>
Mathematically, the model is described by the following equations:
\begin{align}
\mu_{\alpha} &\sim \mathcal{N}(0, 100) \\
\sigma_{\alpha} &\sim |\mathcal{N}(0, 100)| \\
\mu_{\beta} &\sim \mathcal{N}(0, 100) \\
\sigma_{\beta} &\sim |\mathcal{N}(0, 100)| \\
\alpha_i &\sim \mathcal{N}(\mu_{\alpha}, \sigma_{\alpha}) \\
\beta_i &\sim \mathcal{N}(\mu_{\beta}, \sigma_{\beta}) \\
\sigma &\sim |\mathcal{N}(0, 100)| \\
FVC_{ij} &\sim \mathcal{N}(\alpha_i + t \beta_i, \sigma)
\end{align}
where *t* is the time in weeks. Those are very uninformative priors, but that's ok: our model will converge!
Implementing this model in NumPyro is pretty straightforward:
```
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
assert numpyro.__version__.startswith("0.8.0")
def model(PatientID, Weeks, FVC_obs=None):
μ_α = numpyro.sample("μ_α", dist.Normal(0.0, 100.0))
σ_α = numpyro.sample("σ_α", dist.HalfNormal(100.0))
μ_β = numpyro.sample("μ_β", dist.Normal(0.0, 100.0))
σ_β = numpyro.sample("σ_β", dist.HalfNormal(100.0))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
α = numpyro.sample("α", dist.Normal(μ_α, σ_α))
β = numpyro.sample("β", dist.Normal(μ_β, σ_β))
σ = numpyro.sample("σ", dist.HalfNormal(100.0))
FVC_est = α[PatientID] + β[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
```
That's all for modelling!
## 3. Fitting the model
A great achievement of Probabilistic Programming Languages such as NumPyro is to decouple model specification and inference. After specifying my generative model, with priors, condition statements and data likelihood, I can leave the hard work to NumPyro's inference engine.
Calling it requires just a few lines. Before we do it, let's add a numerical Patient ID for each patient code. That can be easily done with scikit-learn's LabelEncoder:
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train["PatientID"] = le.fit_transform(train["Patient"].values)
FVC_obs = train["FVC"].values
Weeks = train["Weeks"].values
PatientID = train["PatientID"].values
```
Now, calling NumPyro's inference engine:
```
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)
rng_key = random.PRNGKey(0)
mcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)
posterior_samples = mcmc.get_samples()
```
## 4. Checking the model
### 4.1. Inspecting the learned parameters
First, let's inspect the parameters learned. To do that, I will use [ArviZ](https://arviz-devs.github.io/arviz/), which perfectly integrates with NumPyro:
```
import arviz as az
data = az.from_numpyro(mcmc)
az.plot_trace(data, compact=True);
```
Looks like our model learned personalized alphas and betas for each patient!
### 4.2. Visualizing FVC decline curves for some patients
Now, let's visually inspect FVC decline curves predicted by our model. We will completely fill in the FVC table, predicting all missing values. The first step is to create a table to fill:
```
pred_template = []
for i in range(train["Patient"].nunique()):
df = pd.DataFrame(columns=["PatientID", "Weeks"])
df["Weeks"] = np.arange(-12, 134)
df["PatientID"] = i
pred_template.append(df)
pred_template = pd.concat(pred_template, ignore_index=True)
```
Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy:
```
PatientID = pred_template["PatientID"].values
Weeks = pred_template["Weeks"].values
predictive = Predictive(model, posterior_samples, return_sites=["σ", "obs"])
samples_predictive = predictive(random.PRNGKey(0), PatientID, Weeks, None)
```
Let's now put the predictions together with the true values, to visualize them:
```
df = pd.DataFrame(columns=["Patient", "Weeks", "FVC_pred", "sigma"])
df["Patient"] = le.inverse_transform(pred_template["PatientID"])
df["Weeks"] = pred_template["Weeks"]
df["FVC_pred"] = samples_predictive["obs"].T.mean(axis=1)
df["sigma"] = samples_predictive["obs"].T.std(axis=1)
df["FVC_inf"] = df["FVC_pred"] - df["sigma"]
df["FVC_sup"] = df["FVC_pred"] + df["sigma"]
df = pd.merge(
df, train[["Patient", "Weeks", "FVC"]], how="left", on=["Patient", "Weeks"]
)
df = df.rename(columns={"FVC": "FVC_true"})
df.head()
```
Finally, let's see our predictions for 3 patients:
```
def chart(patient_id, ax):
data = df[df["Patient"] == patient_id]
x = data["Weeks"]
ax.set_title(patient_id)
ax.plot(x, data["FVC_true"], "o")
ax.plot(x, data["FVC_pred"])
ax = sns.regplot(x, data["FVC_true"], ax=ax, ci=None, line_kws={"color": "red"})
ax.fill_between(x, data["FVC_inf"], data["FVC_sup"], alpha=0.5, color="#ffcd3c")
ax.set_ylabel("FVC")
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart("ID00007637202177411956430", axes[0])
chart("ID00009637202177434476278", axes[1])
chart("ID00011637202177653955184", axes[2])
```
The results are exactly what we expected to see! Highlight observations:
- The model adequately learned Bayesian Linear Regressions! The orange line (learned predicted FVC mean) is closely in line with the red line (deterministic linear regression). But most importantly: it learned to predict uncertainty, shown in the light orange region (one sigma above and below the mean FVC line)
- The model predicts a higher uncertainty where the data points are more disperse (1st and 3rd patients). Conversely, where the points are closely grouped together (2nd patient), the model predicts a higher confidence (narrower light orange region)
- Finally, in all patients, we can see that the uncertainty grows as we look further into the future: the light orange region widens as the number of weeks grows!
### 4.3. Computing the modified Laplace Log Likelihood and RMSE
As mentioned earlier, the competition was evaluated on a modified version of the Laplace Log Likelihood. In medical applications, it is useful to evaluate a model's confidence in its decisions. Accordingly, the metric is designed to reflect both the accuracy and certainty of each prediction.
For each true FVC measurement, we predicted both an FVC and a confidence measure (standard deviation $\sigma$). The metric was computed as:
\begin{align}
\sigma_{clipped} &= max(\sigma, 70) \\
\delta &= min(|FVC_{true} - FVC_{pred}|, 1000) \\
metric &= -\dfrac{\sqrt{2}\delta}{\sigma_{clipped}} - \ln(\sqrt{2} \sigma_{clipped})
\end{align}
The error was thresholded at 1000 ml to avoid large errors adversely penalizing results, while the confidence values were clipped at 70 ml to reflect the approximate measurement uncertainty in FVC. The final score was calculated by averaging the metric across all (Patient, Week) pairs. Note that metric values will be negative and higher is better.
Next, we calculate the metric and RMSE:
```
y = df.dropna()
rmse = ((y["FVC_pred"] - y["FVC_true"]) ** 2).mean() ** (1 / 2)
print(f"RMSE: {rmse:.1f} ml")
sigma_c = y["sigma"].values
sigma_c[sigma_c < 70] = 70
delta = (y["FVC_pred"] - y["FVC_true"]).abs()
delta[delta > 1000] = 1000
lll = -np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)
print(f"Laplace Log Likelihood: {lll.mean():.4f}")
```
What do these numbers mean? They mean that if you adopted this approach, you would **outperform most of the public solutions** in the competition. Curiously, the vast majority of public solutions adopt a standard deterministic Neural Network, modelling uncertainty through a quantile loss. **Most people still adopt a frequentist approach**.
**Uncertainty** for single predictions becomes more and more important in machine learning and is often a requirement. **Especially when the consequences of a wrong prediction are high**, we need to know what the probability distribution of an individual prediction is. For perspective, Kaggle just launched a new competition sponsored by Lyft, to build motion prediction models for self-driving vehicles. "We ask that you predict a few trajectories for every agent **and provide a confidence score for each of them**."
Finally, I hope the great work done by Pyro/NumPyro developers help democratize Bayesian methods, empowering an ever growing community of researchers and practitioners to create models that can not only generate predictions, but also assess uncertainty in their predictions.
## References
1. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015). https://doi.org/10.1038/nature14541
2. Rainforth, Thomas William Gamlen. Automating Inference, Learning, and Design Using Probabilistic Programming. University of Oxford, 2017.
<a href="https://colab.research.google.com/github/st24hour/tutorial/blob/master/Neural_Style_Transfer_with_Eager_Execution_question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neural Style Transfer with tf.keras
## Overview
In this tutorial, we will learn how to use deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as **neural style transfer**. It is described in Leon A. Gatys' paper, [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576), which is well worth reading.
But what is neural style transfer, exactly?
Neural style transfer is an optimization technique that takes three images: a **content** image, a **style reference** image (such as an artwork by a famous painter), and the **input image** you want to style. The three are blended together so that the input image is transformed to look like the content image, but "painted" in the style of the style image.
For example, let's take this image of a turtle and Katsushika Hokusai's *The Great Wave off Kanagawa*:
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg?raw=1" alt="Drawing" style="width: 200px;"/>
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg?raw=1" alt="Drawing" style="width: 200px;"/>
[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)
-By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
Now what would it look like if Hokusai decided to paint this turtle in his own style? Something like this?
<img src="https://github.com/tensorflow/models/blob/master/research/nst_blogpost/wave_turtle.png?raw=1" alt="Drawing" style="width: 500px;"/>
Neural style transfer is a fun and interesting technique that showcases the capabilities and internal representations of neural networks.
The principle of neural style transfer is to define two distance functions: one that describes how different the content of two images is, $L_{content}$, and one that describes how different the style of two images is, $L_{style}$. Then, given three images (a desired style image, a desired content image, and an input image initialized with the content image), we transform the input image to minimize its content distance to the content image and its style distance to the style image. In summary, we take the base input image, a content image we want to match, and a style image we want to match, and we transform the base input image by minimizing the content and style distances (losses) with backpropagation, creating an image that matches the content of the content image and the style of the style image.
### Specific concepts that will be covered:
In this tutorial, we will build practical experience and develop intuition around the following concepts:
* **Eager Execution** - use TensorFlow's imperative programming environment that evaluates operations immediately
* [Learn more about eager execution](https://www.tensorflow.org/programmers_guide/eager)
* [See it in action](https://www.tensorflow.org/get_started/eager)
* **Using the [Functional API](https://keras.io/getting-started/functional-api-guide/) to define a model** - we'll build a model that gives us access to the necessary intermediate activations
* **Leveraging feature maps of a pretrained model** - learn how to use a pretrained model and its feature maps
* **Create custom training loops** - we'll examine how to set up an optimizer to minimize a given loss with respect to input parameters
### General steps to perform style transfer:
1. Visualize data
2. Basic Preprocessing/preparing our data
3. Set up loss functions
4. Create model
5. Optimize for loss function
## Setup
### Download Images
```
import os
img_dir = '/tmp/nst'
if not os.path.exists(img_dir):
os.makedirs(img_dir)
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg
!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg
```
### Import and configure modules
```
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10,10)
mpl.rcParams['axes.grid'] = False
import numpy as np
from PIL import Image
import time
import functools
import tensorflow as tf
import tensorflow.contrib.eager as tfe
from tensorflow.python.keras.preprocessing import image as kp_image
from tensorflow.python.keras import models
from tensorflow.python.keras import losses
from tensorflow.python.keras import layers
from tensorflow.python.keras import backend as K
```
We will begin by enabling [eager execution](https://www.tensorflow.org/programmers_guide/eager). Eager execution allows us to work through this technique in the clearest and most readable way.
```
"""
Start eager execution
"""
print("Eager execution: {}".format(tf.executing_eagerly()))
# Set up some global values here
content_path = '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg'
style_path = '/tmp/nst/The_Great_Wave_off_Kanagawa.jpg'
```
## Visualize the input
```
def load_img(path_to_img):
max_dim = 512
img = Image.open(path_to_img)
long = max(img.size)
scale = max_dim/long
img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS)
img = kp_image.img_to_array(img)
# We need to broadcast the image array such that it has a batch dimension
img = np.expand_dims(img, axis=0)
return img
def imshow(img, title=None):
# Remove the batch dimension
out = np.squeeze(img, axis=0)
# Normalize for display
out = out.astype('uint8')
plt.imshow(out)
if title is not None:
plt.title(title)
plt.imshow(out)
```
These are the content and style input images. We hope to "create" an image with the content of the content image, but with the style of the style image.
```
plt.figure(figsize=(10,10))
content = load_img(content_path).astype('uint8')
style = load_img(style_path).astype('uint8')
plt.subplot(1, 2, 1)
imshow(content, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style, 'Style Image')
plt.show()
```
## Prepare the data
Let's create methods that let us easily load and preprocess our images. We perform the same preprocessing as was used to train VGG: the VGG networks are trained on images with each channel normalized by `mean = [103.939, 116.779, 123.68]` and with channels in BGR order.
```
def load_and_process_img(path_to_img):
img = load_img(path_to_img)
img = tf.keras.applications.vgg19.preprocess_input(img)
return img
```
In order to view the outputs of our optimization, we need to perform the inverse preprocessing step. Furthermore, since the optimized image may take values anywhere between $-\infty$ and $\infty$, we must clip it to keep values within the 0-255 range.
```
def deprocess_img(processed_img):
x = processed_img.copy()
if len(x.shape) == 4:
x = np.squeeze(x, 0)
assert len(x.shape) == 3, ("Input to deprocess image must be an image of "
"dimension [1, height, width, channel] or [height, width, channel]")
if len(x.shape) != 3:
raise ValueError("Invalid input to deprocessing image")
    # perform the inverse of the preprocessing step
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype('uint8')
return x
```
### Define content and style representations
In order to get both the content and style representations of our image, we will look at some intermediate layers within our model. These intermediate layers represent higher-level features. We will use VGG19, a network pretrained for image classification. These intermediate layers are necessary to define the content and style representations of our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers.
#### Why intermediate layers?
You may be wondering why the intermediate outputs of a pretrained image classification network allow us to define style and content representations. At a high level, this can be explained by the fact that in order for a network to perform image classification (which it was trained to do), it must understand the image. This involves taking the raw image as input pixels and building an internal representation through transformations that turn the raw image pixels into a complex understanding of the features present within the image. This is also partly why convolutional neural networks are able to generalize well: they can capture the invariances and defining features within classes (e.g. cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed in and the classification label is output, the model serves as a complex feature extractor; hence, by accessing intermediate layers, we are able to describe the content and style of input images.
In particular, we will pull out the following intermediate layers from the network.
Reference: the VGG19 architecture
<img src="https://www.researchgate.net/profile/Clifford_Yang/publication/325137356/figure/fig2/AS:670371271413777@1536840374533/llustration-of-the-network-architecture-of-VGG-19-model-conv-means-convolution-FC-means.jpg" alt="Drawing" style="width: 200px;"/>
```
# Content layer where will pull our feature maps
content_layers = ['block5_conv2']
# Style layer we are interested in
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1'
]
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
```
## Build the Model
We will load [VGG19](https://keras.io/applications/#vgg19) and feed our input tensor to the model. This will allow us to extract the feature maps (and subsequently the content and style representations) of the content, style, and generated images.
We use VGG19, as suggested in the original paper. In addition, since VGG19 is a relatively simple model compared to ResNet, Inception, etc., its feature maps actually work better for style transfer.
In order to access the intermediate layers corresponding to our style and content feature maps, we use the Keras [**Functional API**](https://keras.io/getting-started/functional-api-guide/) to define our model with the desired output activations.
With the Functional API, defining a model simply involves defining the inputs and outputs:
`model = Model(inputs, outputs)`
Reference: [tf.keras.applications.vgg19.VGG19()](https://keras.io/applications/#vgg19)
```
def get_model():
""" Creates our model with access to intermediate layers.
This function will load the VGG19 model and access the intermediate layers.
These layers will then be used to create a new model that will take input image
and return the outputs from these intermediate layers from the VGG model.
Returns:
returns a keras model that takes image inputs and outputs the style and
content intermediate layers.
"""
# Load our model. We load pretrained VGG, trained on imagenet data
"""
Load Imagenet pretrained VGG19 network. You don't need to load FC layers
vgg =
"""
vgg.trainable = False
# Get output layers corresponding to style and content layers
style_outputs = [vgg.get_layer(name).output for name in style_layers]
content_outputs = [vgg.get_layer(name).output for name in content_layers]
model_outputs = style_outputs + content_outputs
# Build model
return models.Model(vgg.input, model_outputs)
```
In the above code snippet, we load our pretrained image classification network. Then we grab the layers of interest that we defined earlier. We define a model by setting the model's inputs to an image and its outputs to the outputs of the style and content layers. In other words, we created a model that takes an input image and outputs the content and style intermediate layers.
## Define and create our loss functions (content and style distances)
### Content Loss
Our content loss definition is actually quite simple. We pass the network both the desired content image and our base input image. This returns the intermediate layer outputs (from the layers defined above) from our model. Then we simply take the Euclidean distance between the two intermediate representations of those images.
More formally, the content loss is a function that describes the distance between the content of our output image $x$ and our content image $p$. Let $C_{nn}$ be a pretrained deep convolutional neural network; here we use [VGG19](https://keras.io/applications/#vgg19). Let $X$ be any image; then $C_{nn}(X)$ is the network fed by $X$. Let $F^l_{ij}(x) \in C_{nn}(x)$ and $P^l_{ij}(p) \in C_{nn}(p)$ denote the intermediate feature representations of the network at layer $l$ with inputs $x$ and $p$, respectively. Then we can define the content distance (loss) formally as: $$L^l_{content}(p, x) = \sum_{i, j} (F^l_{ij}(x) - P^l_{ij}(p))^2$$
We perform backpropagation in the usual way to minimize this content loss. We thus change the initial image until it generates a similar response in a certain layer (defined in content_layer) as the original content image.
This can be implemented quite simply: given the feature maps at a layer $l$ of the network fed by $x$, our input image, and $p$, our content image, it returns the content distance.
### Computing content loss
We will actually add our content losses at each desired layer. This way, each time we feed our input image through the model (which in eager is simply `model(input_image)`!) all the content losses through the model are computed properly, and because we're executing eagerly, all the gradients are computed as well.
```
def get_content_loss(base_content, target):
return tf.reduce_mean(tf.square(base_content - target))
```
### Style Loss
Computing style loss is a bit more involved, but follows the same principle, this time feeding our network the base input image and the style image. However, instead of comparing the raw intermediate outputs of the base input image and the style image, we compare the Gram matrices of the two outputs.
Mathematically, we describe the style loss of the base input image, $x$, and the style image, $a$, as the distance between the style representations (the Gram matrices) of these images. We describe the style representation of an image as the correlation between different filter responses given by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorized feature maps $i$ and $j$ in layer $l$. We can see that $G^l_{ij}$ generated over the feature maps of a given image represents the correlation between feature maps $i$ and $j$.
To generate a style for our base input image, we perform gradient descent from the content image to transform it into an image that matches the style representation of the style image. We do so by minimizing the mean squared distance between the style representations of the style image and the input image. The contribution of each layer to the total style loss is described by:
$$E_l = \frac{1}{4N_l^2M_l^2} \sum_{i,j}(G^l_{ij} - A^l_{ij})^2$$
where $G^l_{ij}$ and $A^l_{ij}$ are the respective style representations in layer $l$ of $x$ and $a$, and $N_l$ is the number of feature maps in that layer, each of size $M_l = height * width$. The total style loss is then
$$L_{style}(a, x) = \sum_{l \in L} w_l E_l$$
where we weight the contribution of each layer's loss by a factor $w_l$. In our case, we weight each layer equally ($w_l =\frac{1}{|L|}$).
### Total loss
The image we want to create should be close to the content image in terms of $L_{content}$ and close to the style image in terms of $L_{style}$. The total objective function (loss) is therefore:
$$L_{total}(p, a, x) = \alpha L_{content}(p, x)+\beta L_{style}(a, x)$$
where $\alpha$ and $\beta$ are the weights applied to the content and style losses, respectively.
### Computing style loss
Again, we implement our style loss as a distance metric.
`get_style_loss` is the function that computes $E_l$.
```
def gram_matrix(input_tensor):
# We make the image channels first
channels = int(input_tensor.shape[-1])
a = tf.reshape(input_tensor, [-1, channels])
n = tf.shape(a)[0]
gram = tf.matmul(a, a, transpose_a=True)
return gram / tf.cast(n, tf.float32)
def get_style_loss(base_style, gram_target):
"""Expects two images of dimension h, w, c"""
# height, width, num filters of each layer
# We scale the loss at a given layer by the size of the feature map and the number of filters
height, width, channels = base_style.get_shape().as_list()
gram_style = gram_matrix(base_style)
return tf.reduce_mean(tf.square(gram_style - gram_target))# / (4. * (channels ** 2) * (width * height) ** 2)
```
## Apply style transfer to our images
### Run Gradient Descent
We use the [Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam) optimizer in order to minimize our loss. We iteratively update the output image so that it minimizes the loss: we don't update the weights associated with the network, but instead we adjust the input image itself to minimize the loss. To do this, we must know how to compute the loss and its gradients.
We'll define a little helper function that will load our content and style images, feed them forward through the network, and output the content and style feature representations from our model.
```
def get_feature_representations(model, content_path, style_path):
"""Helper function to compute our content and style feature representations.
This function will simply load and preprocess both the content and style
images from their path. Then it will feed them through the network to obtain
the outputs of the intermediate layers.
Arguments:
model: The model that we are using.
content_path: The path to the content image.
style_path: The path to the style image
Returns:
returns the style features and the content features.
"""
# Load our images in
content_image = load_and_process_img(content_path)
style_image = load_and_process_img(style_path)
# batch compute content and style features
style_outputs = model(style_image)
content_outputs = model(content_image)
# Get the style and content feature representations from our model
style_features = [style_layer[0] for style_layer in style_outputs[:num_style_layers]]
content_features = [content_layer[0] for content_layer in content_outputs[num_style_layers:]]
return style_features, content_features
```
### Computing the loss and gradients
Here we use [**tf.GradientTape**](https://www.tensorflow.org/programmers_guide/eager#computing_gradients) to compute the gradients. It enables automatic differentiation by tracing operations so gradients can be computed later. It records the operations during the forward pass and is then able to compute the gradient of our loss function with respect to the input image for the backward pass.
```
def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):
"""This function will compute the loss total loss.
Arguments:
model: The model that will give us access to the intermediate layers
loss_weights: The weights of each contribution of each loss function.
(style weight, content weight, and total variation weight)
init_image: Our initial base image. This image is what we are updating with
our optimization process. We apply the gradients wrt the loss we are
calculating to this image.
gram_style_features: Precomputed gram matrices corresponding to the
defined style layers of interest.
content_features: Precomputed outputs from defined content layers of
interest.
Returns:
returns the total loss, style loss, content loss, and total variational loss
"""
style_weight, content_weight = loss_weights
# Feed our init image through our model. This will give us the content and
# style representations at our desired layers. Since we're using eager
# our model is callable just like any other function!
model_outputs = model(init_image)
style_output_features = model_outputs[:num_style_layers]
content_output_features = model_outputs[num_style_layers:]
style_score = 0
content_score = 0
# Accumulate style losses from all layers
# Here, we equally weight each contribution of each loss layer
weight_per_style_layer = 1.0 / float(num_style_layers)
for target_style, comb_style in zip(gram_style_features, style_output_features):
style_score += weight_per_style_layer * get_style_loss(comb_style[0], target_style)
# Accumulate content losses from all layers
weight_per_content_layer = 1.0 / float(num_content_layers)
for target_content, comb_content in zip(content_features, content_output_features):
content_score += weight_per_content_layer* get_content_loss(comb_content[0], target_content)
style_score *= style_weight
content_score *= content_weight
# Get total loss
loss = style_score + content_score
return loss, style_score, content_score
```
Computing the gradients is then straightforward:
```
def compute_grads(cfg):
with tf.GradientTape() as tape:
all_loss = compute_loss(**cfg)
# Compute gradients wrt input image
total_loss = all_loss[0]
return tape.gradient(total_loss, cfg['init_image']), all_loss
```
### Optimization loop
We use the [Adam optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) to iteratively update the image:
```
import IPython.display
def run_style_transfer(content_path,
style_path,
num_iterations=1000,
content_weight=1e3,
style_weight=1e-2):
# We don't need to (or want to) train any layers of our model, so we set their
# trainable to false.
model = get_model()
for layer in model.layers:
layer.trainable = False
# Get the style and content feature representations (from our specified intermediate layers)
style_features, content_features = get_feature_representations(model, content_path, style_path)
gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]
# Set initial image
init_image = load_and_process_img(content_path)
init_image = tfe.Variable(init_image, dtype=tf.float32)
# Create our optimizer
opt = tf.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)
# For displaying intermediate images
iter_count = 1
# Store our best result
best_loss, best_img = float('inf'), None
# Create a nice config
loss_weights = (style_weight, content_weight)
cfg = {
'model': model,
'loss_weights': loss_weights,
'init_image': init_image,
'gram_style_features': gram_style_features,
'content_features': content_features
}
# For displaying
num_rows = 2
num_cols = 5
display_interval = num_iterations/(num_rows*num_cols)
start_time = time.time()
global_start = time.time()
norm_means = np.array([103.939, 116.779, 123.68])
min_vals = -norm_means
max_vals = 255 - norm_means
imgs = []
for i in range(num_iterations):
grads, all_loss = compute_grads(cfg)
loss, style_score, content_score = all_loss
"""
Apply_gradients
"""
clipped = tf.clip_by_value(init_image, min_vals, max_vals)
init_image.assign(clipped)
end_time = time.time()
if loss < best_loss:
# Update best loss and best image from total loss.
best_loss = loss
best_img = deprocess_img(init_image.numpy())
if i % display_interval== 0:
start_time = time.time()
# Use the .numpy() method to get the concrete numpy array
plot_img = init_image.numpy()
plot_img = deprocess_img(plot_img)
imgs.append(plot_img)
IPython.display.clear_output(wait=True)
IPython.display.display_png(Image.fromarray(plot_img)) # convert the NumPy array to an Image object
print('Iteration: {}'.format(i))
print('Total loss: {:.4e}, '
'style loss: {:.4e}, '
'content loss: {:.4e}, '
'time: {:.4f}s'.format(loss, style_score, content_score, time.time() - start_time))
print('Total time: {:.4f}s'.format(time.time() - global_start))
IPython.display.clear_output(wait=True)
plt.figure(figsize=(14,4))
for i,img in enumerate(imgs):
plt.subplot(num_rows,num_cols,i+1)
plt.imshow(img)
plt.xticks([])
plt.yticks([])
return best_img, best_loss
best, best_loss = run_style_transfer(content_path,
style_path, num_iterations=1000)
Image.fromarray(best)
```
## Visualize outputs
We "deprocess" the output image to undo the preprocessing that was applied to it.
```
def show_results(best_img, content_path, style_path, show_large_final=True):
plt.figure(figsize=(10, 5))
content = load_img(content_path)
style = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style, 'Style Image')
if show_large_final:
plt.figure(figsize=(10, 10))
plt.imshow(best_img)
plt.title('Output Image')
plt.show()
show_results(best, content_path, style_path)
```
## Try it on other images
Image of Tuebingen
Photo By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons
### Starry night + Tuebingen
```
best_starry_night, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')
show_results(best_starry_night, '/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')
```
### Pillars of Creation + Tuebingen
```
best_poc_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
show_results(best_poc_tubingen,
'/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
```
### Kandinsky Composition 7 + Tuebingen
```
best_kandinsky_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')
show_results(best_kandinsky_tubingen,
'/tmp/nst/Tuebingen_Neckarfront.jpg',
'/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')
```
### Pillars of Creation + Sea Turtle
```
best_poc_turtle, best_loss = run_style_transfer('/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
show_results(best_poc_turtle,
'/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',
'/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')
```
## Key Takeaways
### What we covered:
* We built several different loss functions and used backpropagation to transform the input image so as to minimize these losses.
* To do this we loaded a **pretrained model** and used its learned feature maps to describe the content and style representations of images.
* Our main loss functions essentially computed distances in terms of these different representations.
* We implemented this with a custom model and **eager execution**.
* We built our custom model with the Functional API.
* Eager execution lets us work with tensors dynamically, using natural Python control flow.
* We manipulated tensors directly, which makes debugging and working with tensors easier.
* We iteratively updated the image by applying the optimizer update rule using **tf.gradient**. The optimizer minimized the given loss with respect to the input image.
**[Image of Tuebingen](https://commons.wikimedia.org/wiki/File:Tuebingen_Neckarfront.jpg)**
Photo By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons
**[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)**
By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
# Report
1. Transform the photo of Tübingen into the style of van Gogh's Starry Night, with content_weight=1e3 and style_weight=1e-2.
2. Transform the photo of Tübingen into the style of van Gogh's Starry Night, with content_weight=1e3 and style_weight=1e-0.
3. Transform the photo of Tübingen into the style of van Gogh's Starry Night, with content_weight=1e3 and style_weight=1e-4.
4. Transform the photo of Tübingen into the style of van Gogh's Starry Night, with content_weight=1e1 and style_weight=1e-2.
5. Transform the photo of Tübingen into the style of van Gogh's Starry Night, with content_weight=1e5 and style_weight=1e-2.
Q) What are the roles of $\alpha$ (content_weight) and $\beta$ (style_weight)?
#### Note) File paths and names
> Tübingen: '/tmp/nst/Tuebingen_Neckarfront.jpg'
> Starry Night: '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg'
# Node2Vec representation learning with Stellargraph components
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
This example demonstrates how to apply components from the stellargraph library to perform representation learning via Node2Vec. This uses a Keras implementation of Node2Vec available in stellargraph instead of the reference implementation provided by ``gensim``. This implementation provides flexible interfaces to downstream tasks for end-to-end learning.
<a name="refs"></a>
**References**
[1] Node2Vec: Scalable Feature Learning for Networks. A. Grover, J. Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016. ([link](https://snap.stanford.edu/node2vec/))
[2] Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
[3] word2vec Parameter Learning Explained. X. Rong. arXiv preprint arXiv:1411.2738. 2014 Nov 11. ([link](https://arxiv.org/pdf/1411.2738.pdf))
## Introduction
Following word2vec [2,3], for each (``target``,``context``) node pair $(v_i,v_j)$ collected from random walks, we learn the representation for the target node $v_i$ by using it to predict the existence of context node $v_j$, with the following three-layer neural network.

Node $v_i$'s representation in the hidden layer is obtained by multiplying $v_i$'s one-hot representation in the input layer with the input-to-hidden weight matrix $W_{in}$, which is equivalent to looking up the $i$th row of the input-to-hidden weight matrix $W_{in}$. The existence probability of each node conditioned on node $v_i$ is output in the output layer, obtained by multiplying $v_i$'s hidden-layer representation with the hidden-to-output weight matrix $W_{out}$ followed by a softmax activation. To capture the ``target-context`` relation between $v_i$ and $v_j$, we need to maximize the probability $\mathrm{P}(v_j|v_i)$. However, computing $\mathrm{P}(v_j|v_i)$ is time consuming, as it involves the matrix multiplication between $v_i$'s hidden-layer representation and the hidden-to-output weight matrix $W_{out}$.
To speed up the computation, we adopt the negative sampling strategy [2,3]. For each (``target``, ``context``) node pair, we sample a negative node $v_k$, which is not $v_i$'s context. To obtain the output, instead of multiplying $v_i$'s hidden-layer representation with the hidden-to-output weight matrix $W_{out}$ followed by a softmax activation, we only calculate the dot product between $v_i$'s hidden-layer representation and the $j$th column as well as the $k$th column of the hidden-to-output weight matrix $W_{out}$, each followed by a sigmoid activation. According to [3], the original objective of maximizing $\mathrm{P}(v_j|v_i)$ can be approximated by minimizing the cross entropy between $v_j$ and $v_k$'s outputs and their ground-truth labels (1 for $v_j$ and 0 for $v_k$).
Following [2,3], we denote the rows of the input-to-hidden weight matrix $W_{in}$ as ``input_embeddings`` and the columns of the hidden-to-output weight matrix $W_{out}$ as ``output_embeddings``. To build the Node2Vec model, we need to look up the ``input_embeddings`` of target nodes and the ``output_embeddings`` of context nodes and calculate their inner product, followed by a sigmoid activation.
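The following NumPy sketch illustrates this scoring step with negative sampling; the embeddings are random and purely illustrative:
```
import numpy as np

# Illustrative only: random "embeddings" for a tiny graph of 10 nodes.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(10, 4))    # input_embeddings (rows of W_in)
W_out = rng.normal(size=(10, 4))   # output_embeddings (columns of W_out, stored row-wise here)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

i, j, k = 0, 3, 7                          # target, context, negative sample
pos_score = sigmoid(W_in[i] @ W_out[j])    # pushed towards 1 during training
neg_score = sigmoid(W_in[i] @ W_out[k])    # pushed towards 0 during training
loss = -np.log(pos_score) - np.log(1 - neg_score)   # binary cross-entropy
print(pos_score, neg_score, loss)
```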
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
%pip install -q stellargraph[demos]==1.3.0b
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.3.0b")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import os
import networkx as nx
import numpy as np
import pandas as pd
from tensorflow import keras
from stellargraph import StellarGraph
from stellargraph.data import BiasedRandomWalk
from stellargraph.data import UnsupervisedSampler
from stellargraph.mapper import Node2VecLinkGenerator, Node2VecNodeGenerator
from stellargraph.layer import Node2Vec, link_classification
from stellargraph import datasets
from IPython.display import display, HTML
%matplotlib inline
```
### Dataset
For clarity, we use only the largest connected component, ignoring isolated nodes and subgraphs; having these in the data does not prevent the algorithm from running and producing valid results.
```
dataset = datasets.Cora()
display(HTML(dataset.description))
G, subjects = dataset.load(largest_connected_component_only=True)
print(G.info())
```
### The Node2Vec algorithm
The Node2Vec algorithm introduced in [[1]](#refs) is a 2-step representation learning algorithm. The two steps are:
1. Use random walks to generate sentences from a graph. A sentence is a list of node ids. The set of all sentences makes a corpus.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node id is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [[2]](#refs) is used for calculating the embedding vectors.
In this implementation, we train the Node2Vec algorithm in the following two steps:
1. Generate a set of (`target`, `context`) node pairs by starting a biased random walk of fixed length at each node. The starting nodes are taken as the target nodes and the following nodes in the biased random walks are taken as context nodes. For each (`target`, `context`) node pair, we generate 1 negative node pair.
2. Train the Node2Vec algorithm through minimizing cross-entropy loss for `target-context` pair prediction, with the predictive value obtained by performing the dot product of the 'input embedding' of the target node and the 'output embedding' of the context node, followed by a sigmoid activation.
Specify the optional parameter values: the number of walks to take per node and the length of each walk. Here, to keep the running time manageable, we set `walk_number` and `walk_length` to 100 and 5, respectively. Larger values can be used to achieve better performance.
```
walk_number = 100
walk_length = 5
```
Create the biased random walker to perform context node sampling, with the specified parameters.
```
walker = BiasedRandomWalk(
G,
n=walk_number,
length=walk_length,
p=0.5, # defines probability, 1/p, of returning to source node
q=2.0, # defines probability, 1/q, for moving to a node away from the source node
)
```
Create the UnsupervisedSampler instance with the biased random walker.
```
unsupervised_samples = UnsupervisedSampler(G, nodes=list(G.nodes()), walker=walker)
```
Set the batch size and the number of epochs.
```
batch_size = 50
epochs = 2
```
Define a Node2Vec training generator, which generates a batch of (index of target node, index of context node, label of node pair) samples per iteration.
```
generator = Node2VecLinkGenerator(G, batch_size)
```
Build the Node2Vec model, with the dimension of learned node representations set to 128.
```
emb_size = 128
node2vec = Node2Vec(emb_size, generator=generator)
x_inp, x_out = node2vec.in_out_tensors()
```
Use the link_classification function to generate the prediction, with the 'dot' edge embedding generation method and the 'sigmoid' activation, which actually performs the dot product of the ``input embedding`` of the target node and the ``output embedding`` of the context node followed by a sigmoid activation.
```
prediction = link_classification(
output_dim=1, output_act="sigmoid", edge_embedding_method="dot"
)(x_out)
```
Stack the Node2Vec encoder and prediction layer into a Keras model. Our generator will produce batches of positive and negative context pairs as inputs to the model. Minimizing the binary crossentropy between the outputs and the provided ground truth is much like a regular binary classification task.
```
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.binary_crossentropy,
metrics=[keras.metrics.binary_accuracy],
)
```
Train the model.
```
history = model.fit(
generator.flow(unsupervised_samples),
epochs=epochs,
verbose=1,
use_multiprocessing=False,
workers=4,
shuffle=True,
)
```
## Visualise Node Embeddings
Build the node based model for predicting node representations from node ids and the learned parameters. Below a Keras model is constructed, with `x_inp[0]` as input and `x_out[0]` as output. Note that this model's weights are the same as those of the corresponding node encoder in the previously trained node pair classifier.
```
x_inp_src = x_inp[0]
x_out_src = x_out[0]
embedding_model = keras.Model(inputs=x_inp_src, outputs=x_out_src)
```
Get the node embeddings from node ids.
```
node_gen = Node2VecNodeGenerator(G, batch_size).flow(subjects.index)
node_embeddings = embedding_model.predict(node_gen, workers=4, verbose=1)
```
Transform the embeddings to 2d space for visualisation.
```
transform = TSNE # PCA
trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)
# draw the embedding points, coloring them by the target label (paper subject)
alpha = 0.7
label_map = {l: i for i, l in enumerate(np.unique(subjects))}
node_colours = [label_map[target] for target in subjects]
plt.figure(figsize=(7, 7))
plt.axes().set(aspect="equal")
plt.scatter(
node_embeddings_2d[:, 0],
node_embeddings_2d[:, 1],
c=node_colours,
cmap="jet",
alpha=alpha,
)
plt.title("{} visualization of node embeddings".format(transform.__name__))
plt.show()
```
### Downstream task
The node embeddings calculated using Node2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the subject of a paper in Cora), community detection (clustering of nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of citation links between papers).
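As an illustration, the sketch below trains a simple classifier on the embeddings computed above to predict the paper subject (assuming `node_embeddings` and `subjects` from the previous cells; the split ratio and classifier choice are arbitrary):
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative downstream task: node classification from the learned embeddings.
X_train, X_test, y_train, y_test = train_test_split(
    node_embeddings, subjects, train_size=0.1, stratify=subjects, random_state=42
)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```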
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/embeddings/keras-node2vec-embeddings.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
## [Bag of Words Meets Bags of Popcorn | Kaggle](https://www.kaggle.com/c/word2vec-nlp-tutorial#part-3-more-fun-with-word-vectors)
# Tutorial Parts 3 and 4
* [DeepLearningMovies/KaggleWord2VecUtility.py at master · wendykan/DeepLearningMovies](https://github.com/wendykan/DeepLearningMovies/blob/master/KaggleWord2VecUtility.py)
* Based on the GitHub tutorial linked from Kaggle; the original Python 2 source was partially modified for Python 3.
### First attempt (average feature vectors)
- Average the word vectors using the code from Tutorial 2.
### Second attempt (K-means)
- Word2Vec creates clusters of semantically related words, so we can exploit the similarity of words within a cluster.
- Grouping vectors in this way is known as "vector quantization".
- To do this, we first need to find the centers of the word clusters using a clustering algorithm such as K-means.
- We cluster with K-means (unsupervised learning) and then predict whether a review is positive or not with a random forest (supervised learning).
```
import pandas as pd
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from bs4 import BeautifulSoup
import re
import time
from nltk.corpus import stopwords
import nltk.data
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
model = Word2Vec.load('300features_40minwords_10text')
model
# Representing words as numbers
# The Word2Vec model consists of a feature vector for each word in the
# vocabulary, stored in a NumPy array called 'syn0'.
# The number of rows in syn0 is the number of words in the model's vocabulary;
# the number of columns is the size of the feature vector set in Part 2.
type(model.wv.syn0)
# Rows of syn0 = number of words in the vocabulary
# Columns = feature vector size set in Part 2
model.wv.syn0.shape
# Access an individual word vector
model.wv['flower'].shape
model.wv['flower'][:10]
```
## Grouping the data with K-means clustering
* [K-means algorithm - Wikipedia (Korean)](https://ko.wikipedia.org/wiki/K-%ED%8F%89%EA%B7%A0_%EC%95%8C%EA%B3%A0%EB%A6%AC%EC%A6%98)
- Clustering is an unsupervised learning technique.
- Clustering groups samples into several groups based on a notion such as similarity.
- The goal of clustering is to group samples (n-dimensional real-valued vectors) so that they are internally similar but have little in common across groups.
- If the range of a particular dimension differs greatly from the others, the data should be rescaled before clustering.
1. Randomly select k vectors as the initial centroids (center points).
2. Assign each sample to its nearest centroid.
3. Recompute the positions of the centroids.
4. Repeat steps 2 and 3 until the centroids no longer move.
Reference: [Book] 모두의 데이터 과학 (with Python)
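A minimal NumPy sketch of these four steps is shown below (illustrative only; in the code that follows we rely on scikit-learn's `KMeans`):
```
import numpy as np

def simple_kmeans(X, k, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Randomly pick k samples as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. Assign every sample to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. Recompute each centroid as the mean of its assigned samples
        #    (empty clusters are not handled in this sketch).
        new_centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
        # 4. Stop when the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```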
```
# Run k-means on the word vectors and print out a few clusters.
start = time.time() # start time
# Set the number of clusters "k" to 1/5 of the vocabulary size, i.e. an average of 5 words per cluster.
word_vectors = model.wv.syn0 # feature vectors of the vocabulary
num_clusters = word_vectors.shape[0] / 5
num_clusters = int(num_clusters)
# Define and fit the K-means model.
kmeans_clustering = KMeans( n_clusters = num_clusters )
idx = kmeans_clustering.fit_predict( word_vectors )
# Subtract the start time from the end time to get the elapsed time.
end = time.time()
elapsed = end - start
print("Time taken for K Means clustering: ", elapsed, "seconds.")
# Build a word/index dictionary mapping each vocabulary word to its cluster number.
idx = list(idx)
names = model.wv.index2word
word_centroid_map = {names[i]: idx[i] for i in range(len(names))}
# word_centroid_map = dict(zip( model.wv.index2word, idx ))
# Print the first 10 clusters
for cluster in range(0,10):
# Print the cluster number
print("\nCluster {}".format(cluster))
# Print the cluster number and the words it contains
words = []
for i in range(0,len(list(word_centroid_map.values()))):
if( list(word_centroid_map.values())[i] == cluster ):
words.append(list(word_centroid_map.keys())[i])
print(words)
"""
판다스로 데이터프레임 형태의 데이터로 읽어온다.
QUOTE_MINIMAL (0), QUOTE_ALL (1),
QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
그리고 이전 튜토리얼에서 했던 것처럼 clean_train_reviews 와
clean_test_reviews 로 텍스트를 정제한다.
"""
train = pd.read_csv('data/labeledTrainData.tsv',
header=0, delimiter="\t", quoting=3)
test = pd.read_csv('data/testData.tsv',
header=0, delimiter="\t", quoting=3)
# unlabeled_train = pd.read_csv( 'data/unlabeledTrainData.tsv', header=0, delimiter="\t", quoting=3 )
from KaggleWord2VecUtility import KaggleWord2VecUtility
# Clean the training reviews.
clean_train_reviews = []
for review in train["review"]:
clean_train_reviews.append(
KaggleWord2VecUtility.review_to_wordlist( review, \
remove_stopwords=True ))
# Clean the test reviews.
clean_test_reviews = []
for review in test["review"]:
clean_test_reviews.append(
KaggleWord2VecUtility.review_to_wordlist( review, \
remove_stopwords=True ))
# Create bags of centroids
# Pre-allocate the training set bag of centroids, for speed.
train_centroids = np.zeros((train["review"].size, num_clusters), \
dtype="float32" )
train_centroids[:5]
# A bag of centroids counts, for each cluster (centroid), how many of the review's words fall into it.
def create_bag_of_centroids( wordlist, word_centroid_map ):
# The number of clusters equals the highest cluster index in the word/centroid map.
num_centroids = max( word_centroid_map.values() ) + 1
# Pre-allocate the bag of centroids vector, for speed.
bag_of_centroids = np.zeros( num_centroids, dtype="float32" )
# Loop over the words; if a word is in word_centroid_map,
# increment the count for the corresponding cluster.
for word in wordlist:
if word in word_centroid_map:
index = word_centroid_map[word]
bag_of_centroids[index] += 1
# Return the bag of centroids.
return bag_of_centroids
# Convert the training reviews into bags of centroids.
counter = 0
for review in clean_train_reviews:
train_centroids[counter] = create_bag_of_centroids( review, \
word_centroid_map )
counter += 1
# Repeat the same process for the test reviews.
test_centroids = np.zeros(( test["review"].size, num_clusters), \
dtype="float32" )
counter = 0
for review in clean_test_reviews:
test_centroids[counter] = create_bag_of_centroids( review, \
word_centroid_map )
counter += 1
# Train a random forest and predict
forest = RandomForestClassifier(n_estimators = 100)
# Fit on the training data labels and then predict.
# This takes a while, so use %time to measure the elapsed time.
print("Fitting a random forest to labeled training data...")
%time forest = forest.fit(train_centroids, train["sentiment"])
from sklearn.model_selection import cross_val_score
%time score = np.mean(cross_val_score(\
forest, train_centroids, train['sentiment'], cv=10,\
scoring='roc_auc'))
%time result = forest.predict(test_centroids)
score
# Save the results to csv
output = pd.DataFrame(data={"id":test["id"], "sentiment":result})
output.to_csv("data/submit_BagOfCentroids_{0:.5f}.csv".format(score), index=False, quoting=3)
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.countplot(train['sentiment'], ax=axes[0])
sns.countplot(output['sentiment'], ax=axes[1])
output_sentiment = output['sentiment'].value_counts()
print(output_sentiment[0] - output_sentiment[1])
output_sentiment
# Kaggle score 0.84908
print(330/528)
```
### Why does Bag of Words give better results in this tutorial?
Averaging the vectors and using centroids loses word order, which makes these approaches very similar in spirit to Bag of Words. Since the performance is similar (within the range of standard error), Tutorials 1, 2, and 3 produce comparable results.
First, training Word2Vec on more text improves performance. Google's results are based on word vectors learned from a corpus of more than a billion words; our labeled and unlabeled training sets together contain only about eighteen million words. Conveniently, Word2Vec provides functions for loading pretrained models output by Google's original C tool, so it is also possible to train a model in C and then import it into Python.
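For reference, loading such a pretrained model into gensim might look like the following sketch (the GoogleNews binary, about 3.5 GB, has to be downloaded separately):
```
from gensim.models import KeyedVectors

# Sketch only: path to the pretrained Google News vectors on your machine.
google_model = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)
print(google_model.most_similar("movie", topn=5))
```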
Second, in the published literature, distributed word vector techniques have been shown to outperform Bag of Words models. In that work, an algorithm called Paragraph Vector is applied to the IMDB dataset to produce some of the best results to date. Paragraph Vector is partly better than the approaches tried here because it preserves word-order information, whereas vector averaging and clustering lose word order.
* Further study: Stanford NLP lectures: [Lecture 1 | Natural Language Processing with Deep Learning - YouTube](https://www.youtube.com/watch?v=OQQ-W_63UgQ&list=PL3FW7Lu3i5Jsnh1rnUwq_TcylNr7EkRe6)
```
# Uncomment and run this cell if you're on Colab or Kaggle
# !git clone https://github.com/nlp-with-transformers/notebooks.git
# %cd notebooks
# from install import *
# install_requirements(is_chapter10=True)
# hide
from utils import *
setup_chapter()
```
# Training Transformers from Scratch
> **Note:** In this chapter a large dataset and the script to train a large language model on a distributed infrastructure are built. As such not all the steps in this notebook are executable on platforms such as Colab or Kaggle. Either downscale the steps at critical points or use this notebook as an inspiration when building a script for distributed training.
## Large Datasets and Where to Find Them
### Challenges of Building a Large-Scale Corpus
```
#hide_output
from transformers import pipeline, set_seed
generation_gpt = pipeline("text-generation", model="openai-gpt")
generation_gpt2 = pipeline("text-generation", model="gpt2")
def model_size(model):
return sum(t.numel() for t in model.parameters())
print(f"GPT size: {model_size(generation_gpt.model)/1000**2:.1f}M parameters")
print(f"GPT2 size: {model_size(generation_gpt2.model)/1000**2:.1f}M parameters")
# hide
set_seed(1)
def enum_pipeline_ouputs(pipe, prompt, num_return_sequences):
out = pipe(prompt, num_return_sequences=num_return_sequences,
clean_up_tokenization_spaces=True)
return "\n".join(f"{i+1}." + s["generated_text"] for i, s in enumerate(out))
prompt = "\nWhen they came back"
print("GPT completions:\n" + enum_pipeline_ouputs(generation_gpt, prompt, 3))
print("")
print("GPT-2 completions:\n" + enum_pipeline_ouputs(generation_gpt2, prompt, 3))
```
### Building a Custom Code Dataset
#### Creating a dataset with Google BigQuery
#sidebar To Filter the Noise or Not?
### Working with Large Datasets
#### Memory mapping
> **Note:** The following code block assumes that you have downloaded the BigQuery dataset to a folder called `codeparrot`. We suggest skipping this step since it will unpack the compressed files and require ~180GB of disk space. This code is just for demonstration purposes and you can just continue below with the streamed dataset which will not consume that much disk space.
```
#hide_output
from datasets import load_dataset, DownloadConfig
download_config = DownloadConfig(delete_extracted=True)
dataset = load_dataset("./codeparrot", split="train",
download_config=download_config)
import psutil, os
print(f"Number of python files code in dataset : {len(dataset)}")
ds_size = sum(os.stat(f["filename"]).st_size for f in dataset.cache_files)
# os.stat.st_size is expressed in bytes, so we convert to GB
print(f"Dataset size (cache file) : {ds_size / 2**30:.2f} GB")
# Process.memory_info is expressed in bytes, so we convert to MB
print(f"RAM used: {psutil.Process(os.getpid()).memory_info().rss >> 20} MB")
```
#### Streaming
```
# hide_output
streamed_dataset = load_dataset('./codeparrot', split="train", streaming=True)
iterator = iter(streamed_dataset)
print(dataset[0] == next(iterator))
print(dataset[1] == next(iterator))
remote_dataset = load_dataset('transformersbook/codeparrot', split="train",
streaming=True)
```
### Adding Datasets to the Hugging Face Hub
## Building a Tokenizer
```
# hide_output
from transformers import AutoTokenizer
def tok_list(tokenizer, string):
input_ids = tokenizer(string, add_special_tokens=False)["input_ids"]
return [tokenizer.decode(tok) for tok in input_ids]
tokenizer_T5 = AutoTokenizer.from_pretrained("t5-base")
tokenizer_camembert = AutoTokenizer.from_pretrained("camembert-base")
print(f'T5 tokens for "sex": {tok_list(tokenizer_T5,"sex")}')
print(f'CamemBERT tokens for "being": {tok_list(tokenizer_camembert,"being")}')
```
### The Tokenizer Model
### Measuring Tokenizer Performance
### A Tokenizer for Python
```
from transformers import AutoTokenizer
python_code = r"""def say_hello():
print("Hello, World!")
# Print it
say_hello()
"""
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer(python_code).tokens())
print(tokenizer.backend_tokenizer.normalizer)
print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(python_code))
a, e = u"a", u"€"
byte = ord(a.encode("utf-8"))
print(f'`{a}` is encoded as `{a.encode("utf-8")}` with a single byte: {byte}')
byte = [ord(chr(i)) for i in e.encode("utf-8")]
print(f'`{e}` is encoded as `{e.encode("utf-8")}` with three bytes: {byte}')
from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode
byte_to_unicode_map = bytes_to_unicode()
unicode_to_byte_map = dict((v, k) for k, v in byte_to_unicode_map.items())
base_vocab = list(unicode_to_byte_map.keys())
print(f'Size of our base vocabulary: {len(base_vocab)}')
print(f'First element: `{base_vocab[0]}`, last element: `{base_vocab[-1]}`')
# hide_input
#id unicode_mapping
#caption Examples of character mappings in BPE
#hide_input
import pandas as pd
from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode
byte_to_unicode_map = bytes_to_unicode()
unicode_to_byte_map = dict((v, k) for k, v in byte_to_unicode_map.items())
base_vocab = list(unicode_to_byte_map.keys())
examples = [
['Regular characters', '`a` and `?`', f'{ord("a")} and {ord("?")}' , f'`{byte_to_unicode_map[ord("a")]}` and `{byte_to_unicode_map[ord("?")]}`'],
['Nonprintable control character (carriage return)', '`U+000D`', f'13', f'`{byte_to_unicode_map[13]}`'],
['A space', '` `', f'{ord(" ")}', f'`{byte_to_unicode_map[ord(" ")]}`'],
['A nonbreakable space', '`\\xa0`', '160', f'`{byte_to_unicode_map[ord(chr(160))]}`'],
['A newline character', '`\\n`', '10', f'`{byte_to_unicode_map[ord(chr(10))]}`'],
]
pd.DataFrame(examples, columns = ['Description', 'Character', 'Bytes', 'Mapped bytes'])
print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(python_code))
print(f"Size of the vocabulary: {len(tokenizer)}")
print(tokenizer(python_code).tokens())
```
### Training a Tokenizer
```
tokens = sorted(tokenizer.vocab.items(), key=lambda x: len(x[0]), reverse=True)
print([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[:8]]);
tokens = sorted(tokenizer.vocab.items(), key=lambda x: x[1], reverse=True)
print([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[:12]]);
#hide_output
from tqdm.auto import tqdm
length = 100000
dataset_name = 'transformersbook/codeparrot-train'
dataset = load_dataset(dataset_name, split="train", streaming=True)
iter_dataset = iter(dataset)
def batch_iterator(batch_size=10):
for _ in tqdm(range(0, length, batch_size)):
yield [next(iter_dataset)['content'] for _ in range(batch_size)]
new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(),
vocab_size=12500,
initial_alphabet=base_vocab)
tokens = sorted(new_tokenizer.vocab.items(), key=lambda x: x[1], reverse=False)
print([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[257:280]]);
print([f'{new_tokenizer.convert_tokens_to_string(t)}' for t,_ in tokens[-12:]]);
print(new_tokenizer(python_code).tokens())
import keyword
print(f'There are in total {len(keyword.kwlist)} Python keywords.')
for keyw in keyword.kwlist:
if keyw not in new_tokenizer.vocab:
print(f'No, keyword `{keyw}` is not in the vocabulary')
# hide_output
length = 200000
new_tokenizer_larger = tokenizer.train_new_from_iterator(batch_iterator(),
vocab_size=32768, initial_alphabet=base_vocab)
tokens = sorted(new_tokenizer_larger.vocab.items(), key=lambda x: x[1],
reverse=False)
print([f'{tokenizer.convert_tokens_to_string(t)}' for t, _ in tokens[-12:]]);
print(new_tokenizer_larger(python_code).tokens())
for keyw in keyword.kwlist:
if keyw not in new_tokenizer_larger.vocab:
print(f'No, keyword `{keyw}` is not in the vocabulary')
```
### Saving a Custom Tokenizer on the Hub
```
#hide_output
model_ckpt = "codeparrot"
org = "transformersbook"
new_tokenizer_larger.push_to_hub(model_ckpt, organization=org)
reloaded_tokenizer = AutoTokenizer.from_pretrained(org + "/" + model_ckpt)
print(reloaded_tokenizer(python_code).tokens())
#hide_output
new_tokenizer.push_to_hub(model_ckpt+ "-small-vocabulary", organization=org)
```
## Training a Model from Scratch
### A Tale of Pretraining Objectives
<img alt="Code snippet" caption="An example of a Python function that could be found in our dataset" src="images/chapter10_code-snippet.png" id="code-snippet"/>
#### Causal language modeling
<img alt="CLM pretraining" caption="In causal language modeling, the future tokens are masked and the model has to predict them; typically a decoder model such as GPT is used for such a task" src="images/chapter10_pretraining-clm.png" id="pretraining-clm"/>
#### Masked language modeling
<img alt="MLM pretraining" caption="In masked language modeling some of the input tokens are either masked or replaced, and the model's task is to predict the original tokens; this is the architecture underlying the encoder branch of transformer models" src="images/chapter10_pretraining-mlm.png" id="pretraining-mlm"/>
#### Sequence-to-sequence training
<img alt="Seq2seq pretraining" caption="Using an encoder-decoder architecture for a sequence-to-sequence task where the inputs are split into comment/code pairs using heuristics: the model gets one element as input and needs to generate the other one" src="images/chapter10_pretraining-seq2seq.png" id="pretraining-seq2seq"/>
### Initializing the Model
> **NOTE**: In the following code block, a large GPT-2 checkpoint is loaded into memory. On platforms like Colab and Kaggle, this can cause the instance to crash due to insufficient RAM or GPU memory. You can still run the example if you use the small checkpoint by replacing the configuration with `config = AutoConfig.from_pretrained("gpt2", vocab_size=len(tokenizer))`.
```
#hide_output
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(org + "/" + model_ckpt)
config = AutoConfig.from_pretrained("gpt2-xl", vocab_size=len(tokenizer))
model = AutoModelForCausalLM.from_config(config)
print(f'GPT-2 (xl) size: {model_size(model)/1000**2:.1f}M parameters')
#hide_output
model.save_pretrained("models/" + model_ckpt, push_to_hub=True,
organization=org)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config_small = AutoConfig.from_pretrained("gpt2", vocab_size=len(tokenizer))
model_small = AutoModelForCausalLM.from_config(config_small)
print(f'GPT-2 size: {model_size(model_small)/1000**2:.1f}M parameters')
#hide_output
model_small.save_pretrained("models/" + model_ckpt + "-small", push_to_hub=True,
organization=org)
```
### Implementing the Dataloader
<img alt="Preprocessing for CLM" caption="Preparing sequences of varying length for causal language modeling by concatenating several tokenized examples with an EOS token before chunking them" src="images/chapter10_preprocessing-clm.png" id="preprocessing-clm"/>
```
#hide_output
examples, total_characters, total_tokens = 500, 0, 0
dataset = load_dataset('transformersbook/codeparrot-train', split='train',
streaming=True)
for _, example in tqdm(zip(range(examples), iter(dataset)), total=examples):
total_characters += len(example['content'])
total_tokens += len(tokenizer(example['content']).tokens())
characters_per_token = total_characters / total_tokens
print(characters_per_token)
import torch
from torch.utils.data import IterableDataset
class ConstantLengthDataset(IterableDataset):
def __init__(self, tokenizer, dataset, seq_length=1024,
num_of_sequences=1024, chars_per_token=3.6):
self.tokenizer = tokenizer
self.concat_token_id = tokenizer.eos_token_id
self.dataset = dataset
self.seq_length = seq_length
self.input_characters = seq_length * chars_per_token * num_of_sequences
def __iter__(self):
iterator = iter(self.dataset)
more_examples = True
while more_examples:
buffer, buffer_len = [], 0
while True:
if buffer_len >= self.input_characters:
m=f"Buffer full: {buffer_len}>={self.input_characters:.0f}"
print(m)
break
try:
m=f"Fill buffer: {buffer_len}<{self.input_characters:.0f}"
print(m)
buffer.append(next(iterator)["content"])
buffer_len += len(buffer[-1])
except StopIteration:
iterator = iter(self.dataset)
all_token_ids = []
tokenized_inputs = self.tokenizer(buffer, truncation=False)
for tokenized_input in tokenized_inputs['input_ids']:
all_token_ids.extend(tokenized_input + [self.concat_token_id])
for i in range(0, len(all_token_ids), self.seq_length):
input_ids = all_token_ids[i : i + self.seq_length]
if len(input_ids) == self.seq_length:
yield torch.tensor(input_ids)
shuffled_dataset = dataset.shuffle(buffer_size=100)
constant_length_dataset = ConstantLengthDataset(tokenizer, shuffled_dataset,
num_of_sequences=10)
dataset_iterator = iter(constant_length_dataset)
lengths = [len(b) for _, b in zip(range(5), dataset_iterator)]
print(f"Lengths of the sequences: {lengths}")
```
### Defining the Training Loop
```
from argparse import Namespace
# Commented parameters correspond to the small model
config = {"train_batch_size": 2, # 12
"valid_batch_size": 2, # 12
"weight_decay": 0.1,
"shuffle_buffer": 1000,
"learning_rate": 2e-4, # 5e-4
"lr_scheduler_type": "cosine",
"num_warmup_steps": 750, # 2000
"gradient_accumulation_steps": 16, # 1
"max_train_steps": 50000, # 150000
"max_eval_steps": -1,
"seq_length": 1024,
"seed": 1,
"save_checkpoint_steps": 50000} # 15000
args = Namespace(**config)
from torch.utils.tensorboard import SummaryWriter
import logging
import wandb
def setup_logging(project_name):
logger = logging.getLogger(__name__)
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, handlers=[
logging.FileHandler(f"log/debug_{accelerator.process_index}.log"),
logging.StreamHandler()])
if accelerator.is_main_process: # We only want to set up logging once
wandb.init(project=project_name, config=args)
run_name = wandb.run.name
tb_writer = SummaryWriter()
tb_writer.add_hparams(vars(args), {'0': 0})
logger.setLevel(logging.INFO)
datasets.utils.logging.set_verbosity_debug()
transformers.utils.logging.set_verbosity_info()
else:
tb_writer = None
run_name = ''
logger.setLevel(logging.ERROR)
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
return logger, tb_writer, run_name
def log_metrics(step, metrics):
logger.info(f"Step {step}: {metrics}")
if accelerator.is_main_process:
wandb.log(metrics)
[tb_writer.add_scalar(k, v, step) for k, v in metrics.items()]
#hide_output
from torch.utils.data.dataloader import DataLoader
def create_dataloaders(dataset_name):
train_data = load_dataset(dataset_name+'-train', split="train",
streaming=True)
train_data = train_data.shuffle(buffer_size=args.shuffle_buffer,
seed=args.seed)
valid_data = load_dataset(dataset_name+'-valid', split="validation",
streaming=True)
train_dataset = ConstantLengthDataset(tokenizer, train_data,
seq_length=args.seq_length)
valid_dataset = ConstantLengthDataset(tokenizer, valid_data,
seq_length=args.seq_length)
train_dataloader=DataLoader(train_dataset, batch_size=args.train_batch_size)
eval_dataloader=DataLoader(valid_dataset, batch_size=args.valid_batch_size)
return train_dataloader, eval_dataloader
def get_grouped_params(model, no_decay=["bias", "LayerNorm.weight"]):
params_with_wd, params_without_wd = [], []
for n, p in model.named_parameters():
if any(nd in n for nd in no_decay):
params_without_wd.append(p)
else:
params_with_wd.append(p)
return [{'params': params_with_wd, 'weight_decay': args.weight_decay},
{'params': params_without_wd, 'weight_decay': 0.0}]
def evaluate():
model.eval()
losses = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(batch, labels=batch)
loss = outputs.loss.repeat(args.valid_batch_size)
losses.append(accelerator.gather(loss))
if args.max_eval_steps > 0 and step >= args.max_eval_steps: break
loss = torch.mean(torch.cat(losses))
try:
perplexity = torch.exp(loss)
except OverflowError:
perplexity = torch.tensor(float("inf"))
return loss.item(), perplexity.item()
set_seed(args.seed)
# Accelerator
accelerator = Accelerator()
samples_per_step = accelerator.state.num_processes * args.train_batch_size
# Logging
logger, tb_writer, run_name = setup_logging(project_name.split("/")[1])
logger.info(accelerator.state)
# Load model and tokenizer
if accelerator.is_main_process:
hf_repo = Repository("./", clone_from=project_name, revision=run_name)
model = AutoModelForCausalLM.from_pretrained("./", gradient_checkpointing=True)
tokenizer = AutoTokenizer.from_pretrained("./")
# Load dataset and dataloader
train_dataloader, eval_dataloader = create_dataloaders(dataset_name)
# Prepare the optimizer and learning rate scheduler
optimizer = AdamW(get_grouped_params(model), lr=args.learning_rate)
lr_scheduler = get_scheduler(name=args.lr_scheduler_type, optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,)
def get_lr():
return optimizer.param_groups[0]['lr']
# Prepare everything with our `accelerator` (order of args is not important)
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader)
# Train model
model.train()
completed_steps = 0
for step, batch in enumerate(train_dataloader, start=1):
loss = model(batch, labels=batch).loss
log_metrics(step, {'lr': get_lr(), 'samples': step*samples_per_step,
'steps': completed_steps, 'loss/train': loss.item()})
loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
if step % args.gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
completed_steps += 1
if step % args.save_checkpoint_steps == 0:
logger.info('Evaluating and saving model checkpoint')
eval_loss, perplexity = evaluate()
log_metrics(step, {'loss/eval': eval_loss, 'perplexity': perplexity})
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
if accelerator.is_main_process:
unwrapped_model.save_pretrained("./")
hf_repo.push_to_hub(commit_message=f'step {step}')
model.train()
if completed_steps >= args.max_train_steps:
break
# Evaluate and save the last checkpoint
logger.info('Evaluating and saving model after training')
eval_loss, perplexity = evaluate()
log_metrics(step, {'loss/eval': eval_loss, 'perplexity': perplexity})
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
if accelerator.is_main_process:
unwrapped_model.save_pretrained("./")
hf_repo.push_to_hub(commit_message=f'final model')
```
<img alt="DDP" caption="Illustration of the processing steps in DDP with four GPUs" src="images/chapter10_ddp.png" id="ddp"/>
### The Training Run
## Results and Analysis
```
#hide_output
from transformers import pipeline, set_seed
model_ckpt = 'transformersbook/codeparrot-small'
generation = pipeline('text-generation', model=model_ckpt, device=0)
import re
from transformers import set_seed
def first_block(string):
return re.split('\nclass|\ndef|\n#|\n@|\nprint|\nif', string)[0].rstrip()
def complete_code(pipe, prompt, max_length=64, num_completions=4, seed=1):
set_seed(seed)
gen_kwargs = {"temperature":0.4, "top_p":0.95, "top_k":0, "num_beams":1,
"do_sample":True,}
code_gens = generation(prompt, num_return_sequences=num_completions,
max_length=max_length, **gen_kwargs)
code_strings = []
for code_gen in code_gens:
generated_code = first_block(code_gen['generated_text'][len(prompt):])
code_strings.append(generated_code)
print(('\n'+'='*80 + '\n').join(code_strings))
prompt = '''def area_of_rectangle(a: float, b: float):
"""Return the area of the rectangle."""'''
complete_code(generation, prompt)
prompt = '''def get_urls_from_html(html):
"""Get all embedded URLs in a HTML string."""'''
complete_code(generation, prompt)
import requests
def get_urls_from_html(html):
return [url for url in re.findall(r'<a href="(.*?)"', html) if url]
print(" | ".join(get_urls_from_html(requests.get('https://hf.co/').text)))
```
> **NOTE**: In the following code block, a large GPT-2 checkpoint is loaded into memory. On platforms like Colab and Kaggle, this can cause the instance to crash due to insufficient RAM or GPU memory. You can still run the example if you replace the large model with the small one by using `model_ckpt = "transformersbook/codeparrot-small"`.
```
model_ckpt = 'transformersbook/codeparrot'
generation = pipeline('text-generation', model=model_ckpt, device=0)
prompt = '''# a function in native python:
def mean(a):
return sum(a)/len(a)
# the same function using numpy:
import numpy as np
def mean(a):'''
complete_code(generation, prompt, max_length=64)
prompt = '''X = np.random.randn(100, 100)
y = np.random.randint(0, 1, 100)
# fit random forest classifier with 20 estimators'''
complete_code(generation, prompt, max_length=96)
```
## Conclusion
# Part 4: Projects and Automated ML Pipeline
This part of the MLRun getting-started tutorial walks you through the steps for working with projects, source control (git), and automating the ML pipeline.
MLRun Project is a container for all your work on a particular activity: all the associated code, functions,
jobs/workflows and artifacts. Projects can be mapped to `git` repositories to enable versioning, collaboration, and CI/CD.
You can create project definitions using the SDK or a yaml file and store those in MLRun DB, file, or archive.
Once the project is loaded you can run jobs/workflows which refer to any project element by name, allowing separation between configuration and code. See the [Projects, Automation & CI/CD](../projects/overview.md) section for details.
Projects contain `workflows` that execute the registered functions in a sequence/graph (DAG), and which can reference project parameters, secrets and artifacts by name. MLRun currently supports two workflow engines, `local` (for simple tasks) and [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/pipelines-quickstart/) (for more complex/advanced tasks). MLRun also supports a real-time workflow engine (see [MLRun serving graphs](../serving/serving-graph.md)).
> **Note**: The Iguazio Data Science Platform has a default (pre-deployed) shared Kubeflow Pipelines service (`pipelines`).
An ML Engineer can gather the different functions created by the Data Engineer and Data Scientist and create this automated pipeline.
The tutorial consists of the following steps:
1. [Setting up Your Project](#gs-tutorial-4-step-setting-up-project)
2. [Updating Project and Function Definitions](#gs-tutorial-4-step-import-functions)
3. [Defining and Saving a Pipeline Workflow](#gs-tutorial-4-step-pipeline-workflow-define-n-save)
4. [Registering the Workflow](#gs-tutorial-4-step-register-workflow)
5. [Running A Pipeline](#gs-tutorial-4-step-run-pipeline)
6. [Viewing the Pipeline on the Dashboard (UI)](#gs-tutorial-4-step-ui-pipeline-view)
7. [Invoking the Model](#gs-tutorial-4-step-invoke-model)
By the end of this tutorial you'll learn how to:
- Create an operational pipeline using previously defined functions.
- Run the pipeline and track the pipeline results.
<a id="gs-tutorial-4-prerequisites"></a>
## Prerequisites
The following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.
Therefore, make sure to first run parts [1](01-mlrun-basics.ipynb)—[3](03-model-serving.ipynb) of the tutorial.
<a id="gs-tutorial-4-step-setting-up-project"></a>
## Step 1: Setting Up Your Project
To run a pipeline, you first need to create a Python project object and import the required functions for its execution.
Create a project by using one of:
- the `new_project` MLRun method
- the `get_or_create_project` method: loads a project from the MLRun DB or the archive/context if it exists, or creates a new project if it doesn't exist.
Both methods have the following parameters:
- **`name`** (required) — the project name.
- **`context`** — the path to a local project directory (the project's context directory).
The project directory contains a project-configuration file (default: **project.yaml**) that defines the project, and additional generated Python code.
The project file is created when you save your project (using the `save` MLRun project method or when saving your first function within the project).
- **`init_git`** — set to `True` to perform Git initialization of the project directory (`context`) in case it's not initialized.
> **Note:** It's customary to store project code and definitions in a Git repository.
The following code gets or creates a user project named "getting-started-<username>".
> **Note:** Platform projects are currently shared among all users of the parent tenant, to facilitate collaboration. Therefore:
>
> - Set `user_project` to `True` if you want to create a project unique to your user.
> You can easily change the default project name for this tutorial by changing the definition of the `project_name_base` variable in the following code.
> - Don't include in your project proprietary information that you don't want to expose to other users.
> Note that while projects are a useful tool, you can easily develop and run code in the platform without using projects.
```
import mlrun
# Set the base project name
project_name_base = 'getting-started'
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name_base, context="./", user_project=True, init_git=True)
print(f'Project name: {project.metadata.name}')
```
<a id="gs-tutorial-4-step-import-functions"></a>
## Step 2: Updating Project and Function Definitions
You must save the definitions for the functions used in the project so that you can automatically convert code to functions, import external functions when you load new versions of MLRun code, or run automated CI/CD workflows. In addition, you might want to set other project attributes such as global parameters, secrets, and data.
The code can be stored in Python files, notebooks, external repositories, packaged containers, etc. Use the `project.set_function()` method to register the code in the project. The definitions are saved to the project object as well as in a YAML file in the root of the project.
Functions can also be imported from MLRun marketplace (using the `hub://` schema).
This tutorial uses the functions:
- `prep-data` — the first function, which ingests the Iris data set (in Notebook 01)
- `describe` — generates statistics on the data set (from the marketplace)
- `train-iris` — the model-training function (in Notebook 02)
- `test-classifier` — the model-testing function (from the marketplace)
- `mlrun-model` — the model-serving function (in Notebook 03)
> Note: `set_function` uses the `code_to_function` and `import_function` methods under the hood (used in the previous notebooks), but in addition it saves the function configurations in the project spec for use in automated workflows and CI/CD.
Add the function definitions to the project along with parameters and data artifacts, and save the project.
<a id="gs-tutorial-4-view-project-functions"></a>
```
project.set_function('01-mlrun-basics.ipynb', 'prep-data', kind='job', image='mlrun/mlrun')
project.set_function('02-model-training.ipynb', 'train', kind='job', image='mlrun/mlrun', handler='train_iris')
project.set_function('hub://describe', 'describe')
project.set_function('hub://test_classifier', 'test')
project.set_function('hub://v2_model_server', 'serving')
# set project level parameters and save
project.spec.params = {'label_column': 'label'}
project.save()
```
<br>When you save the project it stores the project definitions in the `project.yaml`. This means that you can load the project from the source control (GIT) and run it with a single command or API call.
The project YAML for this project can be printed using:
```
print(project.to_yaml())
```
### Saving and Loading Projects from GIT
After you save the project and its elements (functions, workflows, artifacts, etc.) you can commit all the changes to a GIT repository. Use the standard GIT tools or use the MLRun `project` methods such as `pull`, `push`, `remote`, which call the Git API for you.
Projects can then be loaded from Git using the MLRun `load_project` method, for example:
`project = mlrun.load_project("./myproj", "git://github.com/mlrun/project-demo.git", name=project_name)`
or using the MLRun CLI:
`mlrun project -n myproj -u "git://github.com/mlrun/project-demo.git" ./myproj`
Read the [Projects, Automation & CI/CD](../projects/overview.md) section for more details.
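Putting the two together, a sketch of loading a project from Git and running its `main` workflow (the repository URL and paths below are placeholders, not a real repository):
```
import mlrun

# Sketch only: replace the URL with your own repository.
project = mlrun.load_project("./myproj", "git://github.com/<your-org>/<your-repo>.git",
                             name="getting-started")
run_id = project.run("main", arguments={}, dirty=True, watch=True)
```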
<a id="gs-tutorial-4-kubeflow-pipelines"></a>
### Using Kubeflow Pipelines
You're now ready to create a full ML pipeline.
This is done by using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/) —
an open-source framework for building and deploying portable, scalable machine-learning workflows based on Docker containers.
MLRun leverages this framework to take your existing code and deploy it as steps in the pipeline.
> **Note:** When using the Iguazio Data Science Platform, Kubeflow Pipelines is available as a default (pre-deployed) shared platform service.
<a id="gs-tutorial-4-step-pipeline-workflow-define-n-save"></a>
## Step 3: Defining and Saving a Pipeline Workflow
A pipeline is created by running an MLRun **"workflow"**.
The following code defines a workflow and writes it to a file in your local directory, with the file name **workflow.py**.
The workflow describes a directed acyclic graph (DAG) for execution using Kubeflow Pipelines, and depicts the connections between the functions and the data as part of an end-to-end pipeline.
The workflow file has two parts: initialization of the function objects, and definition of a pipeline DSL (domain-specific language) for connecting the function inputs and outputs.
Examine the code to see how function objects are initialized and used (by name) within the workflow.
The defined pipeline includes the following steps:
- Ingest the Iris flower data set (`ingest`).
- Train the model (`train`).
- Test the model with its test data set.
- Deploy the model as a real-time serverless function (`deploy`).
> **Note**: A pipeline can also include continuous build integration and deployment (CI/CD) steps, such as building container images and deploying models.
```
%%writefile './workflow.py'
from kfp import dsl
from mlrun import run_function, deploy_function
DATASET = 'cleaned_data'
MODEL = 'iris'
LABELS = "label"
# Create a Kubeflow Pipelines pipeline
@dsl.pipeline(
name="Getting-started-tutorial",
description="This tutorial is designed to demonstrate some of the main "
"capabilities of the Iguazio Data Science Platform.\n"
"The tutorial uses the Iris flower data set."
)
def kfpipeline(source_url):
# Ingest the data set
ingest = run_function(
'prep-data',
handler='prep_data',
inputs={'source_url': source_url},
params={'label_column': LABELS},
outputs=[DATASET])
# Train a model
train = run_function(
"train",
params={"label_column": LABELS},
inputs={"dataset": ingest.outputs[DATASET]},
outputs=['my_model', 'test_set'])
# Test and visualize the model
test = run_function(
"test",
params={"label_column": LABELS},
inputs={"models_path": train.outputs['my_model'],
"test_set": train.outputs['test_set']})
# Deploy the model as a serverless function
deploy = deploy_function("serving", models={f"{MODEL}_v1": train.outputs['my_model']})
```
<a id="gs-tutorial-4-step-register-workflow"></a>
## Step 4: Registering the Workflow
Use the `set_workflow` MLRun project method to register your workflow with MLRun.
The following code sets the `name` parameter to the selected workflow name ("main") and the `code` parameter to the name of the workflow file that is found in your project directory (**workflow.py**).
```
# Register the workflow file as "main"
project.set_workflow('main', 'workflow.py')
```
<a id="gs-tutorial-4-step-run-pipeline"></a>
## Step 5: Running A Pipeline
First run the following code to save your project:
```
project.save()
```
Use the `run` MLRun project method to execute your workflow pipeline with Kubeflow Pipelines.
The tutorial code sets the following method parameters (for the full parameter list, see the [MLRun documentation](../api/mlrun.run.html#mlrun.run.run_pipeline) or the embedded help):
- **`name`** — the workflow name (in this case, "main" — see the previous step).
- **`arguments`** — A dictionary of Kubeflow Pipelines arguments (parameters).
The tutorial code passes the data set's `source_url` in the arguments dictionary of the run call below; you can edit the code to add more arguments.
- **`artifact_path`** — a path or URL that identifies a location for storing the workflow artifacts.
You can use `{{workflow.uid}}` in the path to signify the ID of the current workflow run iteration.
The tutorial code sets the artifacts path to a **<workflow ID>** directory (`{{workflow.uid}}`) in a **pipeline** directory under the projects container (**/v3io/projects/getting-started-tutorial-project name/pipeline/<workflow ID>**).
- **`dirty`** — set to `True` to allow running the workflow also when the project's Git repository is dirty (i.e., contains uncommitted changes).
(When the notebook that contains the execution code is in the same Git directory as the executed workflow, the directory will always be dirty during the execution.)
- **`watch`** — set to `True` to wait for the pipeline to complete and output the execution graph as it updates.
The `run` method returns the ID of the executed workflow, which the code stores in a `run_id` variable.
You can use this ID to track the progress of your workflow, as demonstrated in the following sections.
> **Note**: You can also run the workflow from a command-line shell by using the `mlrun` CLI.
> The following CLI command defines a similar execution logic as that of the `run` call in the tutorial:
> ```
> mlrun project /User/getting-started-tutorial/conf -r main -p "$V3IO_HOME_URL/getting-started-tutorial/pipeline/{{workflow.uid}}/"
> ```
```
source_url = mlrun.get_sample_path("data/iris/iris.data.raw.csv")
import os
pipeline_path = mlrun.mlconf.artifact_path
run_id = project.run(
'main',
arguments={'source_url' : source_url},
artifact_path=os.path.join(pipeline_path, "pipeline", '{{workflow.uid}}'),
dirty=True,
watch=True)
```
<a id="gs-tutorial-4-step-ui-pipeline-view"></a>
## Step 6: Viewing the Pipeline on the Dashboard (UI)
In the **Projects > Jobs and Workflows > Monitor Workflows** tab, press the workflow name to view a graph of the workflow. Press any step to open another pane with full details of the step: either the job's overview, inputs, artifacts, etc.; or the deploy / build function's overview, code, and log.
After the pipeline execution completes, you should be able to view the pipeline and see its functions:
- `prep-data`
- `train`
- `test`
- `deploy-serving`
The graph is refreshed while the pipeline is running.
<img src="../_static/images/job_pipeline.png" alt="pipeline" width="700"/>
<a id="gs-tutorial-4-step-invoke-model"></a>
## Step 7: Invoking the Model
Now that your model is deployed using the pipeline, you can invoke it as usual:
```
serving_func = project.func('serving')
my_data = {'inputs': [[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}
serving_func.invoke('/v2/models/iris_v1/infer', my_data)
```
You can also make an HTTP call directly:
```
import requests
predict_url = f'http://{serving_func.status.address}/v2/models/iris_v1/predict'
# Pass the dict with requests' json= argument so it is serialized once and the
# correct Content-Type header is set (json.dumps here would double-encode the body)
resp = requests.put(predict_url, json=my_data)
print(resp.json())
```
<a id="gs-tutorial-4-done"></a>
## Done!
Congratulations! You've completed the getting started tutorial.
You might also want to explore the following demos:
- For an example of distributed training of an image-classification pipeline using TensorFlow (versions 1 or 2), Keras, and Horovod, see the [**image-classification with distributed training demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/image-classification-with-distributed-training).
- To learn more about deploying live endpoints and concept drift, see the [**network-operations (NetOps) demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/network-operations).
- To learn how to deploy your model with streaming information, see the [**model-deployment pipeline demo**](https://github.com/mlrun/demos/tree/release/v0.6.x-latest/model-deployment-pipeline).
For additional information and guidelines, see the MLRun [**How-To Guides and Demos**](../howto/index.md).
# SGT ($\beta \neq 0 $) calculation for fluids mixtures with SAFT-$\gamma$-Mie
In this notebook, the SGT ($\beta \neq 0 $) calculations for fluid mixtures with ```saftgammamie``` EoS are illustrated.
When using $\beta \neq 0 $, the cross-influence parameters are computed as $c_{ij} = (1-\beta_{ij})\sqrt{c_{ii}c_{jj}}$.
First, all the needed modules are imported.
- numpy: numerical interface and work with arrays
- matplotlib: to plot results
- sgtpy: package with SAFT-$\gamma$-Mie EoS and SGT functions.
```
import numpy as np
import matplotlib.pyplot as plt
from sgtpy import component, mixture, saftgammamie
```
Now, pure components are configured and created with the ```component``` function. To use SGT it is required to set the influence parameter (```cii```) for the pure fluids. Then, a mixture is created with them using the ```mixture``` function or by adding (`+`) pure components. The interaction parameters are set up with the ```mixture.saftgammamie``` method. Finally, the ```eos``` object is created with the ```saftgammamie``` function.
The ```eos``` object includes all the necessary methods to compute phase equilibria and interfacial properties using SAFT-$\gamma$-Mie EoS.
For this notebook, the calculations are exemplified for the mixture of ethanol + water and the mixture of hexane + ethanol.
```
ethanol = component(GC={'CH3':1, 'CH2OH':1}, cii=4.1388468864244875e-20)
water = component(GC={'H2O':1}, cii=1.6033244745871344e-20)
# creating mixture with mixture class function
mix1 = mixture(ethanol, water)
# or creating mixture by adding pure components
mix1 = ethanol + water
mix1.saftgammamie()
eos1 = saftgammamie(mix1)
```
Now, it is required to compute the phase equilibria (VLE, LLE or VLLE). See Notebooks 5 to 10 for more information about phase equilibria computation.
In this example, the bubble point of the mixture of ethanol and water at $x_1=0.2$ and 298.15K is computed.
```
from sgtpy.equilibrium import bubblePy
T = 298.15 # K
# liquid composition
x = np.array([0.2, 0.8])
# initial guesses
P0 = 1e4 # Pa
y0 = np.array([0.8, 0.2])
sol = bubblePy(y0, P0, x, T, eos1, full_output=True)
y, P = sol.Y, sol.P
vl, vv = sol.v1, sol.v2
rhol = x/vl
rhov = y/vv
```
In order to set the $\beta$ correction, it is necessary to create a symmetric matrix of shape (`nc, nc`) and then pass it to the ```eos.beta_sgt``` method of the eos. The $\beta_{ij}$ correction is computed as follows:
$$ \beta_{ij} = \beta_{ij,0} + \beta_{ij,1} \cdot T + \beta_{ij,2} \cdot T^2 + \frac{\beta_{ij,3}}{T} $$
Alternatively, you can modify just the pair $ij$ using the `eos.set_betaijsgt` method. In both methods, by default only the $\beta_{ij,0}$ is required. The temperature dependent parameters are optional, if they are not provided they are assumed to be zero.
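As a hedged sketch (not from the original notebook), a temperature-dependent correction for the pair $i=0$, $j=1$ could be set roughly as follows; the keyword names `beta1`, `beta2`, and `beta3` mirror the polynomial above and are assumptions about the method signature, so check the sgtpy documentation:
```
# Hypothetical sketch: temperature-dependent beta correction for the pair (0, 1)
# beta_ij = beta0 + beta1*T + beta2*T**2 + beta3/T
# The beta1/beta2/beta3 keyword names are assumed here
eos1.set_betaijsgt(i=0, j=1, beta0=0.2, beta1=0., beta2=0., beta3=0.)
```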
The ```sgt_mix_beta0``` function is used to study the interfacial behavior with SGT and $\beta=0$. As shown in Notebook 12, the Liang method can compute the density paths correctly.
```
from sgtpy.sgt import sgt_mix_beta0
bij = 0.0
beta = np.array([[0, bij], [bij, 0]])
eos1.beta_sgt(beta)
# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)
eos1.set_betaijsgt(i=0, j=1, beta0=bij)
soll = sgt_mix_beta0(rhov, rhol, T, P, eos1, n=300, method='liang', full_output=True)
```
When using $\beta \neq 0$ two options are available to solve SGT.
- ```sgt_mix```: solves SGT system as a boundary value problem using orthogonal collocation (increasing interfacial length).
- ```msgt_mix```: solves a stabilized SGT system as a boundary value problem using orthogonal collocation (fixed interfacial length).
```
from sgtpy.sgt import sgt_mix
bij = 0.2
beta = np.array([[0, bij], [bij, 0]])
eos1.beta_sgt(beta)
# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)
eos1.set_betaijsgt(i=0, j=1, beta0=bij)
solbeta = sgt_mix(rhov, rhol, T, P, eos1, full_output=True)
from sgtpy.sgt import msgt_mix
bij = 0.5
beta = np.array([[0, bij], [bij, 0]])
eos1.beta_sgt(beta)
# or by setting the beta correction by pair i=0 (ethanol), j=1 (water)
eos1.set_betaijsgt(i=0, j=1, beta0=bij)
msolbeta = msgt_mix(rhov, rhol, T, P, eos1, rho0 = solbeta, full_output=True)
```
The interfacial tension results are shown below.
```
print('Liang path Function: ', soll.tension, 'mN/m')
print('SGT BVP: ', solbeta.tension, 'mN/m')
print('Modified SGT BVP: ', msolbeta.tension, 'mN/m')
```
The density profiles are plotted below. It can be seen that using a $\beta$ correction smooths the density profiles.
```
rhobeta = solbeta.rho / 1000 # kmol/m3
mrhobeta = msolbeta.rho / 1000 # kmol/m3
rholiang = soll.rho / 1000 # kmol/m3
alphas = soll.alphas
path = soll.path
fig = plt.figure(figsize = (10, 4))
fig.subplots_adjust( wspace=0.3)
ax1 = fig.add_subplot(121)
ax1.plot(rholiang[0], rholiang[1], color = 'red')
ax1.plot(rhobeta[0], rhobeta[1], 's', color = 'blue')
ax1.plot(mrhobeta[0], mrhobeta[1], '--', color = 'black')
ax1.plot(rhov[0]/1000, rhov[1]/1000, 'o', color = 'k')
ax1.plot(rhol[0]/1000, rhol[1]/1000, 'o', color = 'k')
ax1.set_xlabel(r'$\rho_1$ / kmol m$^{-3}$')
ax1.set_ylabel(r'$\rho_2$ / kmol m$^{-3}$')
ax2 = fig.add_subplot(122)
ax2.plot(path/1000, alphas)
ax2.axhline(y = 0, linestyle = '--',color = 'r')
ax2.set_ylabel(r'$\alpha$')
ax2.set_xlabel(r'path function / 1000')
```
## Hexane - Ethanol
The interfacial behavior of this mixture is well known to be difficult to study, as it displays multiple stationary points in the inhomogeneous zone.
```
hexane = component(GC={'CH3':2, 'CH2':4}, cii=3.288396028761707e-19)
mix2 = mixture(hexane, ethanol)
mix2.saftgammamie()
eos2 = saftgammamie(mix2)
```
In this example, the bubble point of the mixture at $x_1=0.3$ and 298.15K is computed with the ```bubblePy``` function.
```
T = 298.15 # K
x = np.array([0.3, 0.7])
y0 = 1.*x
P0 = 8000. # Pa
sol = bubblePy(y0, P0, x, T, eos2, full_output=True)
y, P = sol.Y, sol.P
vl, vv = sol.v1, sol.v2
rhox = x/vl
rhoy = y/vv
sol
```
The ```sgt_mix_beta0``` function is used to study the interfacial behavior with SGT and $\beta=0$. As shown in Notebook 12, the Liang method can compute the density paths correctly.
```
soll2 = sgt_mix_beta0(rhoy, rhox, T, P, eos2, n=300, method='liang', full_output=True)
```
SGT is solved with $\beta = 0.2$ and $\beta = 0.5$ using the ```sgt_mix``` and ```msgt_mix``` function.
```
bij = 0.2
beta = np.array([[0, bij], [bij, 0]])
eos2.beta_sgt(beta)
# or by setting the beta correction by pair i=0 (hexane), j=1 (ethanol)
eos2.set_betaijsgt(i=0, j=1, beta0=bij)
solbeta = sgt_mix(rhoy, rhox, T, P, eos2, full_output=True)
bij = 0.5
beta = np.array([[0, bij], [bij, 0]])
eos2.beta_sgt(beta)
# or by setting the beta correction by pair i=0 (hexane), j=1 (ethanol)
eos2.set_betaijsgt(i=0, j=1, beta0=bij)
msolbeta = msgt_mix(rhoy, rhox, T, P, eos2, rho0=solbeta, full_output=True)
```
The interfacial tension results are shown below.
```
print('Liang path Function: ', soll2.tension, 'mN/m')
print('SGT BVP: ', solbeta.tension, 'mN/m')
print('Modified SGT BVP: ', msolbeta.tension, 'mN/m')
```
The density profiles are plotted below. It can be seen that using a $\beta$ correction smooths the density profiles and reduces the number of stationary points.
```
rhobeta = solbeta.rho / 1000 # kmol/m3
mrhobeta = msolbeta.rho / 1000 # kmol/m3
rholiang = soll2.rho / 1000 # kmol/m3
alphas = soll2.alphas
path = soll2.path
fig = plt.figure(figsize = (10, 4))
fig.subplots_adjust( wspace=0.3)
ax1 = fig.add_subplot(121)
ax1.plot(rholiang[0], rholiang[1], color = 'red')
ax1.plot(rhobeta[0], rhobeta[1], 's', color = 'blue')
ax1.plot(mrhobeta[0], mrhobeta[1], '--', color = 'black')
ax1.plot(rhoy[0]/1000, rhoy[1]/1000, 'o', color = 'k')
ax1.plot(rhox[0]/1000, rhox[1]/1000, 'o', color = 'k')
ax1.set_xlabel(r'$\rho_1$ / kmol m$^{-3}$')
ax1.set_ylabel(r'$\rho_2$ / kmol m$^{-3}$')
ax2 = fig.add_subplot(122)
ax2.plot(path/1000, alphas)
ax2.axhline(y = 0, linestyle = '--',color = 'r')
ax2.set_ylabel(r'$\alpha$')
ax2.set_xlabel(r'path function / 1000')
ax1.tick_params(direction='in')
ax2.tick_params(direction='in')
# fig.savefig('sgt_mix.pdf')
```
For further information about any of these functions just run: ```function?```
# Introduction: Writing Patent Abstracts with a Recurrent Neural Network
The purpose of this notebook is to develop a recurrent neural network using LSTM cells that can generate patent abstracts. We will look at using a _word level_ recurrent neural network and _embedding_ the vocab, both with pre-trained vectors and training our own embeddings. We will train the model by feeding in as the features a long sequence of words (for example 50 words) and then using the next word as the label. Over time, the network will (hopefully) learn to predict the next word in a given sequence and we can use the model predictions to generate entirely novel patent abstracts.
## Approach
The approach to solving this problem is:
1. Read in training data: thousands of "neural network" patents
2. Convert patents to integer sequences: `tokenization`
3. Create training dataset using next word following a sequence as label
4. Build a recurrent neural network using word embeddings and LSTM cells
5. Load in pre-trained embeddings
6. Train network to predict next word from sequence
7. Generate new abstracts by feeding network a seed sequence
8. Repeat steps 2 - 7 using pre-trained embeddings
9. Try different model architecture to see if performance improves
10. For fun, create a simple game where we must guess if the output is human or computer!
Each of these steps is relatively simple by itself, so don't be intimidated. We'll walk through the entire process, and by the end we'll have a working application of deep learning!
```
# Set up IPython to show all outputs from a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
import warnings
warnings.filterwarnings('ignore', category = RuntimeWarning)
RANDOM_STATE = 50
EPOCHS = 150
BATCH_SIZE = 2048
TRAINING_LENGTH = 50
TRAIN_FRACTION = 0.7
VERBOSE = 0
SAVE_MODEL = True
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
## Read in Data
Our data consists of patent abstracts gathered by searching for the term "neural networks" on the [patentsview query](http://www.patentsview.org/querydev) web interface. The data can be downloaded in a number of formats and can include a number of patent attributes (I only kept 4).
```
import pandas as pd
import numpy as np
# Read in data
data = pd.read_csv('../data/neural_network_patent_query.csv', parse_dates = ['patent_date'])
# Extract abstracts
original_abstracts = list(data['patent_abstract'])
len(original_abstracts)
data.head()
```
### Brief Data Exploration
This data is extremely clean, which means we don't need to do any manual munging. We can still make a few simple plots out of curiosity though!
```
data['patent_abstract'][100]
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
data['year-month'] = [pd.Timestamp(year=year, month=month, day=1) for year, month in zip(data['patent_date'].dt.year,
                                                                                          data['patent_date'].dt.month)]
monthly = data.groupby('year-month')['patent_number'].count().reset_index()
monthly.set_index('year-month')['patent_number'].plot(figsize = (16, 8))
plt.ylabel('Number of Patents'); plt.xlabel('Date');
plt.title('Neural Network Patents over Time');
monthly.groupby(monthly['year-month'].dt.year)['patent_number'].sum().plot.bar(color = 'red', edgecolor = 'k',
figsize = (12, 6))
plt.xlabel('Year'); plt.ylabel('Number of Patents'); plt.title('Neural Network Patents by Year');
```
The distribution of patents over time is interesting. I would expect 2018 to come out on top once the patents have been accepted.
## Data Cleaning
Our preprocessing is going to involve using a `Tokenizer` to convert the patents from sequences of words (strings) into sequences of integers. We'll get to that in a bit, but even with neural networks, having a clean dataset is paramount. The data quality is already high, but there are some idiosyncrasies of patents as well as general text improvements to make. For example, let's consider the following two sentences.
>'This is a short sentence (1) with one reference to an image. This next sentence, while non-sensical, does not have an image and has two commas.'
If we choose to remove all punctuation with the default Tokenizer settings, we get the following.
```
from keras.preprocessing.text import Tokenizer
example = 'This is a short sentence (1) with one reference to an image. This next sentence, while non-sensical, does not have an image and has two commas.'
tokenizer = Tokenizer(filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([example])
s = tokenizer.texts_to_sequences([example])[0]
' '.join(tokenizer.index_word[i] for i in s)
```
This removes all the punctuation, but a stray number from the image reference is now left in the sentence. If we choose to not remove the punctuation, the sentence looks better, but then we have some interesting words in the vocabulary.
```
tokenizer = Tokenizer(filters='"#$%&*+/:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([example])
s = tokenizer.texts_to_sequences([example])[0]
' '.join(tokenizer.index_word[i] for i in s)
tokenizer.word_index.keys()
```
Notice that `image` and `image.` are classified as distinct words. This is because the period is attached to one and not the other and the same with `sentence` and `sentence,`. To alleviate this issue, we can add spaces around the punctuation using regular expressions. We will also remove the image references.
```
import re
def format_patent(patent):
"""Add spaces around punctuation and remove references to images/citations."""
# Add spaces around punctuation
patent = re.sub(r'(?<=[^\s0-9])(?=[.,;?])', r' ', patent)
# Remove references to figures
patent = re.sub(r'\((\d+)\)', r'', patent)
# Remove double spaces
patent = re.sub(r'\s\s', ' ', patent)
return patent
f = format_patent(example)
f
```
Now when we do the tokenization, we get separate entries in the vocab for the punctuation, but _not_ for words with punctuation attached.
```
tokenizer = Tokenizer(filters='"#$%&*+/:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([f])
s = tokenizer.texts_to_sequences([f])[0]
' '.join(tokenizer.index_word[i] for i in s)
tokenizer.word_index.keys()
```
We no longer have the `image` and `image.` problem but we do have separate symbols for `.` and `,`. This means the network will be forced to learn a representation for these punctuation marks (they are also in the pre-trained embeddings). When we want to get back to the original sentence (without image references) we simply have to remove the spaces.
```
def remove_spaces(patent):
"""Remove spaces around punctuation"""
patent = re.sub(r'\s+([.,;?])', r'\1', patent)
return patent
remove_spaces(' '.join(tokenizer.index_word[i] for i in s))
```
We can apply this operation to all of the original abstracts.
```
formatted = []
# Iterate through all the original abstracts
for a in original_abstracts:
formatted.append(format_patent(a))
len(formatted)
```
# Convert Text to Sequences
A neural network cannot process words, so we must convert the patent abstracts into integers. This is done using the Keras utility `Tokenizer`. By default, this will convert all words to lowercase and remove punctuation. Therefore, our model will not be able to write complete sentences. However, this can be beneficial for a first model because it limits the size of the vocabulary and means that more of the words (converted into tokens) will have pre-trained embeddings.
Later, we will not remove the capitalization and punctuation when we train our own embeddings.
## Features and Labels
This function takes a few parameters including a training length which is the number of words we will feed into the network as features with the next word the label. For example, if we set `training_length = 50`, then the model will take in 50 words as features and the 51st word as the label.
For each abstract, we can make multiple training examples by slicing at different points. We can use the first 50 words as features with the 51st as a label, then the 2nd through 51st word as features and the 52nd as the label, then 3rd - 52nd with 53rd as label and so on. This gives us much more data to train on and the performance of the model is proportional to the amount of training data.
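As a toy illustration (not part of the original notebook) of this sliding-window labeling, assume a training length of 3 and a short integer sequence:
```
# Hypothetical toy example of the sliding-window feature/label split
toy_sequence = [11, 12, 13, 14, 15, 16]
toy_training_length = 3
toy_features, toy_labels = [], []
for i in range(toy_training_length, len(toy_sequence)):
    toy_features.append(toy_sequence[i - toy_training_length:i])  # e.g. [11, 12, 13]
    toy_labels.append(toy_sequence[i])                            # e.g. 14
# toy_features -> [[11, 12, 13], [12, 13, 14], [13, 14, 15]]
# toy_labels   -> [14, 15, 16]
```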
```
def make_sequences(texts, training_length = 50,
lower = True, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n'):
"""Turn a set of texts into sequences of integers"""
# Create the tokenizer object and train on texts
tokenizer = Tokenizer(lower=lower, filters=filters)
tokenizer.fit_on_texts(texts)
# Create look-up dictionaries and reverse look-ups
word_idx = tokenizer.word_index
idx_word = tokenizer.index_word
num_words = len(word_idx) + 1
word_counts = tokenizer.word_counts
print(f'There are {num_words} unique words.')
# Convert text to sequences of integers
sequences = tokenizer.texts_to_sequences(texts)
# Limit to sequences with more than training length tokens
seq_lengths = [len(x) for x in sequences]
over_idx = [i for i, l in enumerate(seq_lengths) if l > (training_length + 20)]
new_texts = []
new_sequences = []
# Only keep sequences with more than training length tokens
for i in over_idx:
new_texts.append(texts[i])
new_sequences.append(sequences[i])
training_seq = []
labels = []
# Iterate through the sequences of tokens
for seq in new_sequences:
# Create multiple training examples from each sequence
for i in range(training_length, len(seq)):
# Extract the features and label
extract = seq[i - training_length: i + 1]
# Set the features and label
training_seq.append(extract[:-1])
labels.append(extract[-1])
print(f'There are {len(training_seq)} training sequences.')
# Return everything needed for setting up the model
return word_idx, idx_word, num_words, word_counts, new_texts, new_sequences, training_seq, labels
```
Now let's see how our function generates data. For using pre-trained embeddings, we'll remove a fair amount of the punctuation and lowercase all letters but leave in periods and commas. This is because there are no capitalized words in the pre-trained embeddings but there is some punctuation. Our model will not learn how to capitalize words, but it may learn how to end a sentence and insert commas.
```
filters = '!"#$%&()*+/:<=>@[\\]^_`{|}~\t\n'
word_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted,
TRAINING_LENGTH,
lower = True,
filters = filters)
```
Each patent is now represented as a sequence of integers. Let's look at an example of a few features and the corresponding labels. The label is the next word in the sequence after the first 50 words.
```
n = 3
features[n][:10]
def find_answer(index):
"""Find label corresponding to features for index in training data"""
# Find features and label
feats = ' '.join(idx_word[i] for i in features[index])
answer = idx_word[labels[index]]
print('Features:', feats)
print('\nLabel: ', answer)
find_answer(n)
original_abstracts[0]
find_answer(100)
```
Our patents are no longer correct English, but, by removing capital letters, we do reduce the size of the vocabulary.
__Deciding which pre-processing steps to take is, in general, the most important aspect of a machine learning project.__
```
sorted(word_counts.items(), key = lambda x: x[1], reverse = True)[:15]
```
The most common words make sense in the context of the patents we are using and the general English language.
## Training Data
Next we need to take the features and labels and convert them into training and validation data. The following function does this by splitting the data - after random shuffling, because the features were made in sequential order - based on the `train_fraction` specified. All the inputs are converted into numpy arrays, which is the correct input format for a Keras neural network.
### Encoding of Labels
One important step is to convert the labels to one-hot encoded vectors, because our network will be trained using `categorical_crossentropy` and makes a prediction for each word in the vocabulary (we can train with the labels represented as simple integers, but I found performance was better and training faster when using a one-hot representation of the labels). This is done by creating an array of rows that are all zeros except for a 1 at the index of the word we want to predict - the label.
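As a toy illustration (not from the original notebook), one-hot encoding a single label for a 5-word vocabulary looks like this:
```
# Hypothetical toy example: one-hot encode the label index 2 for a vocab of 5 words
toy_label = 2
one_hot = np.zeros(5, dtype=np.int8)
one_hot[toy_label] = 1
one_hot   # -> array([0, 0, 1, 0, 0], dtype=int8)
```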
```
from sklearn.utils import shuffle
def create_train_valid(features, labels, num_words, train_fraction = TRAIN_FRACTION):
"""Create training and validation features and labels."""
# Randomly shuffle features and labels
features, labels = shuffle(features, labels, random_state = RANDOM_STATE)
# Decide on number of samples for training
train_end = int(train_fraction * len(labels))
train_features = np.array(features[:train_end])
valid_features = np.array(features[train_end:])
train_labels = labels[:train_end]
valid_labels = labels[train_end:]
# Convert to arrays
X_train, X_valid = np.array(train_features), np.array(valid_features)
# Using int8 for memory savings
y_train = np.zeros((len(train_labels), num_words), dtype = np.int8)
y_valid = np.zeros((len(valid_labels), num_words), dtype = np.int8)
# One hot encoding of labels
for example_index, word_index in enumerate(train_labels):
y_train[example_index, word_index] = 1
for example_index, word_index in enumerate(valid_labels):
y_valid[example_index, word_index] = 1
# Memory management
import gc
gc.enable()
del features, labels, train_features, valid_features, train_labels, valid_labels
gc.collect()
return X_train, X_valid, y_train, y_valid
X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)
X_train.shape
y_train.shape
```
We do want to be careful about using up too much memory. One-hot encoding the labels creates massive numpy arrays, so I took care to delete the unused objects from the workspace.
```
import sys
sys.getsizeof(y_train) / 1e9
def check_sizes(gb_min = 1):
for x in globals():
size = sys.getsizeof(eval(x))/1e9
if size > gb_min:
print(f'Object: {x:10}\tSize: {size} GB.')
check_sizes(gb_min = 1)
```
# Pre-Trained Embeddings
Rather than training our own word embeddings, a very expensive operation, we can use word embeddings that were trained on a large corpus of words. The hope is that these embeddings will generalize from the training corpus to our needs.
This code downloads 100-dimensional word embeddings if you don't already have them. There are a number of different pre-trained word embeddings you can find from [Stanford online](https://nlp.stanford.edu/data/).
```
import os
from keras.utils import get_file
# Vectors to use
glove_vectors = '/home/ubuntu/.keras/datasets/glove.6B.zip'
# Download word embeddings if they are not present
if not os.path.exists(glove_vectors):
glove_vectors = get_file('glove.6B.zip', 'http://nlp.stanford.edu/data/glove.6B.zip')
os.system(f'unzip {glove_vectors}')
# Load in unzipped file
glove_vectors = '/home/ubuntu/.keras/datasets/glove.6B.100d.txt'
glove = np.loadtxt(glove_vectors, dtype='str', comments=None)
glove.shape
```
Now we separate the words and the vectors.
```
vectors = glove[:, 1:].astype('float')
words = glove[:, 0]
del glove
vectors[100], words[100]
```
Next we want to keep only those words that appear in our vocabulary. Words that are in our vocabulary but don't have an embedding will be represented as all 0s (a shortcoming that we can address by training our own embeddings).
```
vectors.shape
word_lookup = {word: vector for word, vector in zip(words, vectors)}
embedding_matrix = np.zeros((num_words, vectors.shape[1]))
not_found = 0
for i, word in enumerate(word_idx.keys()):
# Look up the word embedding
vector = word_lookup.get(word, None)
# Record in matrix
if vector is not None:
embedding_matrix[i + 1, :] = vector
else:
not_found += 1
print(f'There were {not_found} words without pre-trained embeddings.')
import gc
gc.enable()
del vectors
gc.collect()
```
Each word is represented by 100 numbers, and a number of words can't be found in the pre-trained vectors. We can find the closest words to a given word in embedding space using the cosine distance. This requires first normalizing the vectors to have a magnitude of 1.
```
# Normalize and convert nan to 0
embedding_matrix = embedding_matrix / np.linalg.norm(embedding_matrix, axis = 1).reshape((-1, 1))
embedding_matrix = np.nan_to_num(embedding_matrix)
def find_closest(query, embedding_matrix, word_idx, idx_word, n = 10):
"""Find closest words to a query word in embeddings"""
idx = word_idx.get(query, None)
# Handle case where query is not in vocab
if idx is None:
print(f'{query} not found in vocab.')
return
else:
vec = embedding_matrix[idx]
# Handle case where word doesn't have an embedding
if np.all(vec == 0):
print(f'{query} has no pre-trained embedding.')
return
else:
# Calculate distance between vector and all others
dists = np.dot(embedding_matrix, vec)
# Sort indexes in reverse order
idxs = np.argsort(dists)[::-1][:n]
sorted_dists = dists[idxs]
closest = [idx_word[i] for i in idxs]
print(f'Query: {query}\n')
max_len = max([len(i) for i in closest])
# Print out the word and cosine distances
for word, dist in zip(closest, sorted_dists):
print(f'Word: {word:15} Cosine Similarity: {round(dist, 4)}')
find_closest('the', embedding_matrix, word_idx, idx_word)
find_closest('neural', embedding_matrix, word_idx, idx_word, 10)
find_closest('.', embedding_matrix, word_idx, idx_word, 10)
find_closest('wonder', embedding_matrix, word_idx, idx_word)
find_closest('dnn', embedding_matrix, word_idx, idx_word)
```
# Build Model
With data encoded as integers and an embedding matrix of pre-trained word vectors, we're ready to build the recurrent neural network. This model is relatively simple and uses an LSTM cell as the heart of the network. After converting the words into embeddings, we pass them through a single LSTM layer, then into a fully connected layer with `relu` activation before the final output layer with a `softmax` activation. The final layer produces a probability for every word in the vocab.
When training, these predictions are compared to the actual label using the `categorical_crossentropy` to calculate a loss. The parameters (weights) in the network are then updated using the Adam optimizer (a variant on Stochastic Gradient Descent) with gradients calculated through backpropagation. Fortunately, Keras handles all of this behind the scenes, so we just have to set up the network and then start the training. The most difficult part is figuring out the correct shapes for the inputs and outputs into the model.
```
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dense, Dropout, Embedding, Masking, Bidirectional
from keras.optimizers import Adam
from keras.utils import plot_model
def make_word_level_model(num_words, embedding_matrix, bi_directional = False,
trainable = False, lstm_cells = 128, lstm_layers = 1):
"""Make a word level recurrent neural network with option for pretrained embeddings
and varying numbers of LSTM cell layers."""
model = Sequential()
# Map words to an embedding
if not trainable:
model.add(Embedding(input_dim=num_words,
output_dim=embedding_matrix.shape[1],
weights = [embedding_matrix], trainable = False,
mask_zero = True))
model.add(Masking())
else:
model.add(Embedding(input_dim = num_words,
output_dim = embedding_matrix.shape[1],
weights = [embedding_matrix],
trainable = True))
# If want to add multiple LSTM layers
if lstm_layers > 1:
for i in range(lstm_layers - 1):
model.add(LSTM(128, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))
# Add final LSTM cell layer
if bi_directional:
model.add(Bidirectional(LSTM(lstm_cells, return_sequences = False, dropout = 0.1, recurrent_dropout=0.1)))
else:
model.add(LSTM(lstm_cells, return_sequences=False, dropout=0.1))
model.add(Dense(128, activation = 'relu'))
# Dropout for regularization
model.add(Dropout(0.5))
# Output layer
model.add(Dense(num_words, activation = 'softmax'))
# Compile the model
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy',
metrics = ['accuracy'])
return model
model = make_word_level_model(num_words, embedding_matrix = embedding_matrix, bi_directional = True,
trainable = False, lstm_layers = 1, lstm_cells = 64)
model.summary()
```
The model needs a loss to minimize (`categorical_crossentropy`) as well as a method for updating the weights using the gradients (`Adam`). We will also monitor accuracy, which is not suitable as a loss but gives us a more interpretable measure of the model's performance.
Using pre-trained embeddings means we have about half the parameters to train. However, this also means that the embeddings might not be the best for our data, and there are a number of words with no embeddings.
```
model_name = 'pre-trained-bi-directional-rnn'
model_dir = '../models/'
plot_model(model, to_file = f'{model_dir}{model_name}.png', show_shapes = True)
from IPython.display import Image
Image(f'{model_dir}{model_name}.png')
```
# Train Model
We can now train the model on our training examples. We'll make sure to use early stopping with a validation set to stop the training when the loss on the validation set is no longer decreasing. Also, we'll save the best model every time the validation loss decreases so we can then load in the best model to generate predictions.
### Callbacks
* Early Stopping: Stop training when validation loss no longer decreases
* Model Checkpoint: Save the best model on disk
```
from keras.callbacks import EarlyStopping, ModelCheckpoint
BATCH_SIZE = 2048
def make_callbacks(model_name, save = SAVE_MODEL):
"""Make list of callbacks for training"""
callbacks = [EarlyStopping(monitor = 'val_loss', patience = 5)]
if save:
callbacks.append(ModelCheckpoint(f'{model_dir}{model_name}.h5',
save_best_only = True, save_weights_only = False))
return callbacks
callbacks = make_callbacks(model_name)
def load_and_evaluate(model_name, return_model = False):
"""Load in a trained model and evaluate with log loss and accuracy"""
model = load_model(f'{model_dir}{model_name}.h5')
r = model.evaluate(X_valid, y_valid, batch_size = 2048, verbose = 1)
valid_crossentropy = r[0]
valid_accuracy = r[1]
print(f'Cross Entropy: {round(valid_crossentropy, 4)}')
print(f'Accuracy: {round(100 * valid_accuracy, 2)}%')
if return_model:
return model
```
__Depending on your machine, this may take several hours to run.__
```
history = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = 1,
callbacks=callbacks,
validation_data = (X_valid, y_valid))
model = load_and_evaluate(model_name, return_model = True)
model = make_word_level_model(num_words, embedding_matrix = embedding_matrix, bi_directional = False,
trainable = False, lstm_layers = 1, lstm_cells = 64)
model.summary()
model_name = 'pre-trained-nonbi-directional-rnn'
callbacks = make_callbacks(model_name)
history = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = 1,
callbacks=callbacks,
validation_data = (X_valid, y_valid))
model = load_and_evaluate(model_name, return_model = True)
```
The accuracy - both training and validation - increases over time and the loss decreases over time, which gives us an indication that our model is getting better with training.
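For reference, here is a minimal sketch (not in the original notebook) for visualizing those curves from the Keras `history` object returned by `model.fit`; the accuracy key is `'acc'` in older Keras versions and `'accuracy'` in newer ones, so the sketch checks for both:
```
def plot_history(history):
    """Plot training/validation loss and accuracy from a Keras History object."""
    h = history.history
    acc_key = 'acc' if 'acc' in h else 'accuracy'  # key name depends on the Keras version
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
    ax1.plot(h['loss'], label='train'); ax1.plot(h['val_loss'], label='validation')
    ax1.set_xlabel('Epoch'); ax1.set_ylabel('Loss'); ax1.legend();
    ax2.plot(h[acc_key], label='train'); ax2.plot(h['val_' + acc_key], label='validation')
    ax2.set_xlabel('Epoch'); ax2.set_ylabel('Accuracy'); ax2.legend();

plot_history(history)
```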
We can load back in the model so we don't need to repeat the training.
```
def load_and_evaluate(model_name, return_model = False):
"""Load in a trained model and evaluate with log loss and accuracy"""
model = load_model(f'{model_dir}{model_name}.h5')
r = model.evaluate(X_valid, y_valid, batch_size = 2048, verbose = 1)
valid_crossentropy = r[0]
valid_accuracy = r[1]
print(f'Cross Entropy: {round(valid_crossentropy, 4)}')
print(f'Accuracy: {round(100 * valid_accuracy, 2)}%')
if return_model:
return model
model = load_and_evaluate(model_name, return_model = True)
```
To check how the model compares to just using the word frequencies to make predictions, we can compute the accuracy if we were to use the most frequent word for every guess. We can also choose from a multinomial distribution using the word frequencies as probabilities.
```
np.random.seed(40)
# Number of all words
total_words = sum(word_counts.values())
# Compute frequency of each word in vocab
frequencies = [word_counts[word]/total_words for word in word_idx.keys()]
frequencies.insert(0, 0)
frequencies[1:10], list(word_idx.keys())[0:9]
```
The most common word is 'the'. Let's see the accuracy of guessing this for every validation example.
```
print(f'The accuracy is {round(100 * np.mean(np.argmax(y_valid, axis = 1) == 1), 4)}%.')
```
Now we make a guess for each of the sequences in the validation set using the frequencies as probabilities. This is in some sense informed, but the multinomial also has randomness.
```
random_guesses = []
# Make a prediction based on frequencies for each example in validation data
for i in range(len(y_valid)):
random_guesses.append(np.argmax(np.random.multinomial(1, frequencies, size = 1)[0]))
from collections import Counter
# Create a counter from the guesses
c = Counter(random_guesses)
# Iterate through the 10 most common guesses
for i in c.most_common(10):
word = idx_word[i[0]]
word_count = word_counts[word]
print(f'Word: {word} \tCount: {word_count} \tPercentage: {round(100 * word_count / total_words, 2)}% \tPredicted: {i[1]}')
accuracy = np.mean(random_guesses == np.argmax(y_valid, axis = 1))
print(f'Random guessing accuracy: {100 * round(accuracy, 4)}%')
```
We can see that our model easily outperforms both guessing the most common word - 7.76% accuracy - as well as using relative word frequencies to guess the next word - 1.46% accuracy. Therefore, we can say that our model has learned something!
# Generating Output
Now for the fun part: we get to use our model to generate new abstracts. To do this, we feed the network a seed sequence, have it make a prediction, add the predicted word to the sequence, and make another prediction for the next word. We continue this for the number of words that we want. We compare the generated output to the actual abstract to see if we can tell the difference!
```
from IPython.display import HTML
def header(text, color = 'black'):
raw_html = f'<h1 style="color: {color};"><center>' + str(text) + '</center></h1>'
return raw_html
def box(text):
raw_html = '<div style="border:1px inset black;padding:1em;font-size: 20px;">'+str(text)+'</div>'
return raw_html
def addContent(old_html, raw_html):
old_html += raw_html
return old_html
import random
def generate_output(model, sequences, training_length = 50, new_words = 50, diversity = 1,
return_output = False, n_gen = 1):
"""Generate `new_words` words of output from a trained model and format into HTML."""
# Choose a random sequence
seq = random.choice(sequences)
# Choose a random starting point
seed_idx = random.randint(0, len(seq) - training_length - 10)
# Ending index for seed
end_idx = seed_idx + training_length
gen_list = []
for n in range(n_gen):
# Extract the seed sequence
seed = seq[seed_idx:end_idx]
original_sequence = [idx_word[i] for i in seed]
generated = seed[:] + ['#']
# Find the actual entire sequence
actual = generated[:] + seq[end_idx:end_idx + new_words]
# Keep adding new words
for i in range(new_words):
# Make a prediction from the seed
preds = model.predict(np.array(seed).reshape(1, -1))[0].astype(np.float64)
# Diversify
preds = np.log(preds) / diversity
exp_preds = np.exp(preds)
# Softmax
preds = exp_preds / sum(exp_preds)
# Choose the next word
probas = np.random.multinomial(1, preds, 1)[0]
next_idx = np.argmax(probas)
# New seed adds on old word
seed = seed[1:] + [next_idx]
generated.append(next_idx)
# Showing generated and actual abstract
n = []
for i in generated:
n.append(idx_word.get(i, '< --- >'))
gen_list.append(n)
a = []
for i in actual:
a.append(idx_word.get(i, '< --- >'))
a = a[training_length:]
gen_list = [gen[training_length:training_length + len(a)] for gen in gen_list]
if return_output:
return original_sequence, gen_list, a
# HTML formatting
seed_html = ''
seed_html = addContent(seed_html, header('Seed Sequence', color = 'darkblue'))
seed_html = addContent(seed_html, box(remove_spaces(' '.join(original_sequence))))
gen_html = ''
gen_html = addContent(gen_html, header('RNN Generated', color = 'darkred'))
gen_html = addContent(gen_html, box(remove_spaces(' '.join(gen_list[0]))))
a_html = ''
a_html = addContent(a_html, header('Actual', color = 'darkgreen'))
a_html = addContent(a_html, box(remove_spaces(' '.join(a))))
return seed_html, gen_html, a_html
```
The `diversity` parameter determines how much randomness is added to the predictions. If we just use the most likely word for each prediction, the output sometimes gets stuck in loops. The diversity means the predicted text has a little more variation.
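As a standalone sketch (not part of the original notebook) of how the diversity value reshapes a probability vector before sampling:
```
# Hypothetical illustration of temperature ('diversity') sampling
probs = np.array([0.6, 0.3, 0.1])            # model prediction over a tiny 3-word vocab
for diversity in [0.5, 1.0, 2.0]:
    scaled = np.exp(np.log(probs) / diversity)
    scaled = scaled / scaled.sum()            # renormalize to a valid distribution
    print(diversity, scaled.round(3))
# Low diversity sharpens the distribution (closer to greedy); high diversity flattens it.
```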
```
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 1)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
```
Increasing the diversity seems to increase the plausibility of the output. However, that could be because the patents themselves don't sound that realistic. This is especially true when we remove the punctuation. We'll fix that in the next section by keeping the punctuation and training our own embeddings.
# Training Own Embeddings
If we aren't happy with the output, especially the lack of punctuation, we can try training our own embeddings. This means the model will adapt the embeddings by itself to get better at the problem of predicting the next word. The final embeddings should place words that are more similar closer together in embedding space. The advantage of training our own embeddings is that they might be more relevant to the task. However, the downside is that training will take longer because the number of parameters significantly increases.
```
def clear_memory():
import gc
gc.enable()
for i in ['model', 'X', 'y', 'word_idx', 'idx_word', 'X_train', 'X_valid', 'y_train', 'y_valid', 'embedding_matrix',
'words', 'vectors', 'labels', 'random_guesses', 'training_seq', 'word_counts', 'data', 'frequencies']:
if i in globals():
del globals()[i]
gc.collect()
clear_memory()
```
Now when we create the training data, we do not remove the punctuation or convert the words to lowercase.
```
TRAINING_LENGTH = 50
filters = '!"%;[\\]^_`{|}~\t\n'
word_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted,
TRAINING_LENGTH,
lower = False,
filters = filters)
embedding_matrix = np.zeros((num_words, len(word_lookup['the'])))
not_found = 0
for i, word in enumerate(word_idx.keys()):
# Look up the word embedding
vector = word_lookup.get(word, None)
# Record in matrix
if vector is not None:
embedding_matrix[i + 1, :] = vector
else:
not_found += 1
print(f'There were {not_found} words without pre-trained embeddings.')
embedding_matrix.shape
# Split into training and validation
X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)
X_train.shape, y_train.shape
check_sizes(gb_min = 1)
```
Let's create a model with 100 dimensional embeddings, input sequences of length 50, and 1 LSTM layer as before.
```
model = make_word_level_model(num_words, embedding_matrix, trainable = True, bi_directional = True,
lstm_layers = 1, lstm_cells = 64)
model.summary()
model_name = 'training-rnn-bi-directional'
callbacks = make_callbacks(model_name)
model.compile(optimizer = Adam(), loss = 'categorical_crossentropy', metrics = ['accuracy'])
history = model.fit(X_train, y_train, batch_size = BATCH_SIZE, verbose = VERBOSE, epochs = EPOCHS, callbacks=callbacks,
validation_data = (X_valid, y_valid))
import json
with open('training-rnn.json', 'w') as f:
f.write(json.dumps(word_idx))
```
As before we load in the model and have it generate output.
```
model_dir = '../models/'
from keras.models import load_model
model = load_and_evaluate(model_name, return_model=True)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
```
The most realistic output seems to occur when the diversity is between 0.5 and 1.0. Sometimes it's difficult to tell the generated abstracts from the actual ones, a trial we'll run a little later!
## Inspect Embeddings
We can take a look at our trained embeddings to figure out the closest words in the embedding space. These embeddings are trained for our task, which means they may differ slightly from the pre-trained versions.
```
model.summary()
def get_embeddings(model):
embedding_layer = model.get_layer(index = 0)
embedding_matrix = embedding_layer.get_weights()[0]
embedding_matrix = embedding_matrix / np.linalg.norm(embedding_matrix, axis = 1).reshape((-1, 1))
embedding_matrix = np.nan_to_num(embedding_matrix)
return embedding_matrix
embedding_matrix = get_embeddings(model)
embedding_matrix.shape
find_closest('the', embedding_matrix, word_idx, idx_word)
find_closest('neural', embedding_matrix, word_idx, idx_word)
find_closest('computer', embedding_matrix, word_idx, idx_word)
```
# Change Parameters of Network
Next, we can try to generate more accurate predictions by altering the network parameters. Primarily, we will increase the number of LSTM layers to 2. The first LSTM layer returns the sequences - the entire output for each input sequence instead of only the final one - before passing it on to the second. Training may take a little longer, but performance could also improve. There's no guarantee this model is better because we could just end up overfitting on the training data. There is no downside to trying though.
```
model = make_word_level_model(num_words, embedding_matrix, trainable = True, lstm_layers = 2)
model.summary()
model_name = 'training-rnn-2_layers'
callbacks = make_callbacks(model_name)
history = model.fit(X_train, y_train, batch_size = BATCH_SIZE, verbose = VERBOSE, epochs = EPOCHS, callbacks=callbacks,
validation_data = (X_valid, y_valid))
model = load_and_evaluate(model_name, return_model = True)
embedding_matrix = get_embeddings(model)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
```
# Change Training Length
Another option to try to improve the model is to change the length of the training sequences. The idea here is that using more previous words will give the network more context for predicting the next word. However, it could also be that including more words _hurts_ the model because some of them are irrelevant!
```
clear_memory()
TRAINING_LENGTH = 100
filters = '!"%;[\\]^_`{|}~\t\n'
word_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted,
TRAINING_LENGTH,
lower = False,
filters = filters)
X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)
X_train.shape, y_train.shape
check_sizes()
model = make_word_level_model(num_words, embedding_matrix, trainable = True)
model.summary()
model_name = 'training-len100'
callbacks = make_callbacks(model_name)
history = model.fit(X_train, y_train, epochs = EPOCHS, callbacks=callbacks, batch_size = BATCH_SIZE, verbose = VERBOSE,
validation_data = (X_valid, y_valid))
model = load_and_evaluate(model_name, return_model=True)
embedding_matrix = get_embeddings(model)
word_lookup = {word: embedding_matrix[i] for i, word in zip(idx_word.keys(), idx_word.values())}
len(word_lookup)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 1.5)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
```
# Reduce Training Length
```
clear_memory()
TRAINING_LENGTH = 20
filters = '!"%[\\]^_`{|}~\t\n'
word_idx, idx_word, num_words, word_counts, abstracts, sequences, features, labels = make_sequences(formatted,
TRAINING_LENGTH,
lower = False,
filters = filters)
embedding_matrix = np.zeros((num_words, len(word_lookup['the'])))
not_found = 0
for i, word in enumerate(word_idx.keys()):
# Look up the word embedding
vector = word_lookup.get(word, None)
# Record in matrix
if vector is not None:
embedding_matrix[i + 1, :] = vector
else:
not_found += 1
print(f'There were {not_found} words without pre-trained embeddings.')
X_train, X_valid, y_train, y_valid = create_train_valid(features, labels, num_words)
X_train.shape, y_train.shape
check_sizes()
model = make_word_level_model(num_words, embedding_matrix, trainable = True, lstm_layers = 1)
model_name = 'training-len20'
callbacks = make_callbacks(model_name)
history = model.fit(X_train, y_train, epochs = EPOCHS, batch_size = BATCH_SIZE, verbose = VERBOSE,
callbacks=callbacks,
validation_data = (X_valid, y_valid))
model = load_and_evaluate(model_name, return_model = True)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.75)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
seed_html, gen_html, a_html = generate_output(model, sequences, TRAINING_LENGTH, diversity = 0.8)
HTML(seed_html)
HTML(gen_html)
HTML(a_html)
```
# Is the Output from a Human or a Machine?
```
def guess_human(model, sequences, training_length=50, new_words=50):
"""Produce 2 RNN sequences and play a game to compare them to the actual.
Diversity is randomly set between 0.5 and 1.25"""
diversity = np.random.uniform(0.5, 1.25)
sequence, gen_list, actual = generate_output(model, sequences, training_length,
diversity=diversity, return_output=True, n_gen = 2)
gen_0, gen_1 = gen_list
output = {'sequence': remove_spaces(' '.join(sequence)),
'c0': remove_spaces(' '.join(gen_0)),
'c1': remove_spaces(' '.join(gen_1)),
'h': remove_spaces(' '.join(actual))}
print(f"Seed Sequence: {output['sequence']}\n")
choices = ['h', 'c0', 'c1']
selected = []
i = 0
while len(selected) < 3:
choice = random.choice(choices)
selected.append(choice)
print('\n')
print(f'Option {i + 1} {output[choice]}')
choices.remove(selected[-1])
i += 1
print('\n')
guess = int(input('Enter option you think is human (1-3): ')) - 1
print('\n')
if guess == np.where(np.array(selected) == 'h')[0][0]:
print('Correct')
print('Correct Ordering', selected)
else:
print('Incorrect')
print('Correct Ordering', selected)
print('Diversity', round(diversity, 2))
guess_human(model, sequences)
guess_human(model, sequences)
```
# Conclusions
In this notebook, we saw how to build a recurrent neural network and used it to generate patent abstracts. Although the output is not always believable, this project gives us practice handling text sequences with neural networks. Deep learning has some advantages compared to traditional machine learning, especially in areas of computer vision and natural language processing. Hopefully you are now confident harnessing these powerful techniques to solve your own text problems!
This project covered a number of steps for working with text data including:
1. Cleaning data using regular expressions
2. Preparing data for neural network
* Converting text strings to integers (tokenization)
* Encoding labels using one-hot encoding
* Building training and validation set
3. Building a recurrent neural network using LSTM cells
4. Using pre-trained word embeddings and training our own embeddings
5. Adjusting model parameters to improve performance
6. Inspecting model results
Although we didn't cover the theory in depth, we did see the implementation, which means we now have a framework to fit the concepts we study. Technical topics are best learned through practice, and this project gave us a great opportunity to explore the frontiers of natural language processing with deep learning.
# Appendix I: Training with A Data Generator
```
def data_gen(sequences, labels, batch_size, num_words):
"""Yield batches for training"""
i = 0
while True:
# Reset once all examples have been used
if i + batch_size > len(labels):
i = 0
X = np.array(sequences[i: i + batch_size])
# Create array of zeros for labels
y = np.zeros((BATCH_SIZE, num_words))
# Extract integer labels
ys = labels[i: i + batch_size]
# Convert to one hot representation
for example_num, word_num in enumerate(ys):
y[example_num, word_num] = 1
yield X, y
i += batch_size
gc.collect()
def create_train_valid_gen(features, labels, batch_size, num_words):
"""Create training and validation generators for training"""
# Randomly shuffle features and labels
features, labels = shuffle(features, labels, random_state = RANDOM_STATE)
# Decide on number of samples for training
train_end = int(0.7 * len(labels))
train_features = np.array(features[:train_end])
valid_features = np.array(features[train_end:])
train_labels = labels[:train_end]
valid_labels = labels[train_end:]
# Make training and validation generators
train_gen = data_gen(train_features, train_labels, batch_size, num_words)
valid_gen = data_gen(valid_features, valid_labels, batch_size, num_words)
return train_gen, valid_gen, train_end
BATCH_SIZE = 2048
train_gen, valid_gen, train_len = create_train_valid_gen(features, labels, BATCH_SIZE, num_words)
X, y = next(train_gen)
train_steps = train_len // BATCH_SIZE
valid_steps = (len(labels) - train_len) // BATCH_SIZE
X.shape
y.shape
train_steps
valid_steps
history = model.fit_generator(train_gen, steps_per_epoch= train_steps, epochs = 2,
callbacks=None,
validation_data = valid_gen,
validation_steps = valid_steps)
```
# Appendix II: Using a Keras Sequence for Training
```
from keras.utils import Sequence
class textSequence(Sequence):
"""Keras Sequence for training with a generator."""
def __init__(self, x_set, y_set, batch_size, num_words):
self.x, self.y = x_set, y_set
self.batch_size = batch_size
self.num_words = num_words
def __len__(self):
return int(np.ceil(len(self.x) / float(self.batch_size)))
def __getitem__(self, idx):
batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
X = np.array(batch_x)
y = np.zeros((len(batch_y), self.num_words))
for example_idx, word_idx in enumerate(batch_y):
y[example_idx, word_idx] = 1
return X, y
# Decide on number of samples for training
train_end = int(TRAIN_FRACTION * len(labels))
train_features = np.array(features[:train_end])
valid_features = np.array(features[train_end:])
train_labels = labels[:train_end]
valid_labels = labels[train_end:]
train_sequence = textSequence(train_features, train_labels, 2048, num_words)
valid_sequence = textSequence(valid_features, valid_labels, 2048, num_words)
history = model.fit_generator(train_sequence, epochs = 2,
callbacks=None,
validation_data = valid_sequence,
workers = 20)
```
## Kaggle Advance House Price Prediction Using PyTorch
* https://docs.fast.ai/tabular.html
* https://www.fast.ai/2018/04/29/categorical-embeddings/
* https://yashuseth.blog/2018/07/22/pytorch-neural-network-for-tabular-data-with-categorical-embeddings/
```
import pandas as pd
```
### Importing the Dataset
```
df=pd.read_csv('houseprice.csv',usecols=["SalePrice", "MSSubClass", "MSZoning", "LotFrontage", "LotArea",
"Street", "YearBuilt", "LotShape", "1stFlrSF", "2ndFlrSF"]).dropna()
df.shape
df.head()
df.info()
```
### Unique Values in the Columns
```
for i in df.columns:
print("Column name {} and unique values are {}".format(i,len(df[i].unique())))
```
### Derived Features
```
import datetime
datetime.datetime.now().year
df['Total Years']=datetime.datetime.now().year-df['YearBuilt']
df.head()
df.drop("YearBuilt",axis=1,inplace=True)
df.columns
```
### Creating my Categorical Features
```
cat_features=["MSSubClass", "MSZoning", "Street", "LotShape"]
out_feature="SalePrice"
df["MSSubClass"].unique()
```
### Converting the categorical feature
```
from sklearn.preprocessing import LabelEncoder
lbl_encoders={}
lbl_encoders["MSSubClass"]=LabelEncoder()
lbl_encoders["MSSubClass"].fit_transform(df["MSSubClass"])
lbl_encoders
from sklearn.preprocessing import LabelEncoder
lbl_encoders={}
for feature in cat_features:
lbl_encoders[feature]=LabelEncoder()
df[feature]=lbl_encoders[feature].fit_transform(df[feature])
df.head()
```
### Stacking and Converting Into Tensors
```
import numpy as np
cat_features=np.stack([df['MSSubClass'],df['MSZoning'],df['Street'],df['LotShape']],1)
cat_features
```
### Convert numpy to Tensors
**Note: categorical features must be integer tensors (`torch.int64`), not floats, because embedding layers index by integer category.**
```
import torch
cat_features=torch.tensor(cat_features,dtype=torch.int64)
cat_features
```
### Creating continuous variables
```
cont_features=[]
for i in df.columns:
if i in ["MSSubClass", "MSZoning", "Street", "LotShape","SalePrice"]:
pass
else:
cont_features.append(i)
cont_features
```
### Stacking continuous variables to a tensor
```
cont_values=np.stack([df[i].values for i in cont_features],axis=1)
cont_values=torch.tensor(cont_values,dtype=torch.float)
cont_values
cont_values.dtype
```
### Dependent Feature
```
y=torch.tensor(df['SalePrice'].values,dtype=torch.float).reshape(-1,1)
y
df.info()
cat_features.shape,cont_values.shape,y.shape
len(df['MSSubClass'].unique())
```
## Embedding Size For Categorical columns
```
cat_dims=[len(df[col].unique()) for col in ["MSSubClass", "MSZoning", "Street", "LotShape"]]
cat_dims
```
### Dimension of Output from the Embedding Layer
* The output (embedding) dimension is chosen based on the number of categories in the input feature
* A common rule of thumb is min(50, (number of categories + 1) // 2)
* **The embedding dimension is capped at 50, regardless of how many categories a feature has**
```
embedding_dim= [(x, min(50, (x + 1) // 2)) for x in cat_dims]
embedding_dim
```
## Creating an Embedding Layer inside the Neural Network
* `nn.ModuleList` is used because we need a separate embedding layer for each of the 4 categorical features.
* `nn.Embedding` creates each embedding layer inside the list comprehension.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
embed_representation=nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim])
embed_representation
cat_features
cat_featuresz=cat_features[:4]
cat_featuresz
pd.set_option('display.max_rows', 500)
embedding_val=[]
for i,e in enumerate(embed_representation):
embedding_val.append(e(cat_features[:,i]))
embedding_val
len(embedding_val[0][0])
```
### Stacking the embedded values column wise
```
z = torch.cat(embedding_val, 1)
z
```
### Implement dropout - Regularization Method (Prevents Overfitting)
```
# 40% values are dropped out.
dropout=nn.Dropout(.4)
final_embed=dropout(z)
final_embed
```
## Create a Feed Forward Neural Network
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class FeedForwardNN(nn.Module):
def __init__(self, embedding_dim, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(inp,out) for inp,out in embedding_dim])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((out for inp,out in embedding_dim))
# Input feature = Embedding Layers + Continuous Variables
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
len(cont_features)
torch.manual_seed(100)
model=FeedForwardNN(embedding_dim,len(cont_features),1,[100,50],p=0.4)
```
* ReLU is used as the activation in the hidden layers; the output layer has no activation function because this is a regression problem.
```
model
```
### Define Loss And Optimizer
```
model.parameters
# Later converted to Root Mean Squared Error
loss_function=nn.MSELoss()
optimizer=torch.optim.Adam(model.parameters(),lr=0.01)
df.shape
cont_values
cont_values.shape
# Note: batch_size here is the total number of rows used for the train/test split (training below is full-batch)
batch_size=1200
test_size=int(batch_size*0.15)
train_categorical=cat_features[:batch_size-test_size]
test_categorical=cat_features[batch_size-test_size:batch_size]
train_cont=cont_values[:batch_size-test_size]
test_cont=cont_values[batch_size-test_size:batch_size]
y_train=y[:batch_size-test_size]
y_test=y[batch_size-test_size:batch_size]
len(train_categorical),len(test_categorical),len(train_cont),len(test_cont),len(y_train),len(y_test)
epochs=5000
final_losses=[]
for i in range(epochs):
i=i+1
y_pred=model(train_categorical,train_cont)
# RMSE
loss=torch.sqrt(loss_function(y_pred,y_train))
    final_losses.append(loss.item())
if i%10==1:
print("Epoch number: {} and the loss : {}".format(i,loss.item()))
optimizer.zero_grad()
loss.backward()
optimizer.step()
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(range(epochs), final_losses)
plt.ylabel('RMSE Loss')
plt.xlabel('Epoch')
```
### Validate the Test Data
```
y_pred=""
with torch.no_grad():
y_pred=model(test_categorical,test_cont)
loss=torch.sqrt(loss_function(y_pred,y_test))
print('RMSE: {}'.format(loss))
data_verify=pd.DataFrame(y_test.tolist(),columns=["Test"])
data_verify
data_predicted=pd.DataFrame(y_pred.tolist(),columns=["Prediction"])
data_predicted
final_output=pd.concat([data_verify,data_predicted],axis=1)
final_output['Difference']=final_output['Test']-final_output['Prediction']
final_output.head()
```
## Save the model
```
torch.save(model,'HousePrice.pt')
torch.save(model.state_dict(),'HouseWeights.pt')
```
### Loading the saved Model
```
embs_size=[(15, 8), (5, 3), (2, 1), (4, 2)]
model1=FeedForwardNN(embs_size,5,1,[100,50],p=0.4)
model1.load_state_dict(torch.load('HouseWeights.pt'))
model1.eval()
```
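As a quick sanity check on the reloaded weights (a minimal sketch; it assumes the `test_categorical`, `test_cont`, `y_test`, and `loss_function` objects from the earlier cells are still in scope), we can run `model1` on the held-out test tensors. Small differences from the earlier RMSE are expected because `model1` is in `eval()` mode, which changes the dropout and batch-norm behaviour.
```
# Verify the reloaded model gives a test RMSE close to the one computed before saving
with torch.no_grad():
    y_pred_reloaded=model1(test_categorical,test_cont)
    rmse_reloaded=torch.sqrt(loss_function(y_pred_reloaded,y_test))
print('RMSE from reloaded model: {}'.format(rmse_reloaded.item()))
```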
<a href="https://colab.research.google.com/github/williamsdoug/CTG_RP/blob/master/CTG_RP_Train_Model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Generate Datasets and Train Model
```
#! rm -R images
! ls
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import config_local
from config_local import *
import numpy as np
import matplotlib.pyplot as plt
import gc
from fastai.vision import *
from fastai.metrics import error_rate
import torch
from torch import nn
import collections
import pprint
import random
from compute_metadata import get_splits, generate_label_file, generate_lists
from generate_recurrence_images import generate_rp_images, gen_recurrence_params
```
## Code
## Config
```
np.random.seed(1234)
random.seed(1234)
# Configure Recurrent Plot Parameters
POLICY='early_valid' # 'best_quality', 'early_valid', 'late_valid'
rp_params = gen_recurrence_params(dimensions=[2], time_delays=[1], percentages=[1,3, 10], use_clip_vals=[False])
rp_params
tfms=[]
size=64
bs=64
workers=4
path = Path() / 'images'
```
## Generate Recurrence Images
```
generate_rp_images(RECORDINGS_DIR, images_dir=IMAGES_DIR, rp_params=rp_params[:1],
policy=POLICY,
show_signal=False, show_image=True, verbose=True, cmap='binary',
limit=3,
)
generate_rp_images(RECORDINGS_DIR, images_dir=IMAGES_DIR, rp_params=rp_params,
policy=POLICY,
show_signal=False, show_image=False, verbose=True, cmap='binary',
)
#!ls images
```
## Generate Train and Valid Label Files
```
train_valid_groups_full = get_splits(image_dir='images', image_file='rp_images_index.json',
exclude=['_clipped'],
thresh = 7.15)
# Create valid_x.csv files for each split
for i in range(len(train_valid_groups_full)):
generate_lists(train_valid_groups_full[i], train_file='train_{}.csv'.format(i),
valid_file='valid_{}.csv'.format(i))
!ls images/*.csv
train = ImageList.from_csv(path, 'train_0.csv')
valid = ImageList.from_csv(path, 'valid_0.csv')
lls = ItemLists(path, train, valid).label_from_df(cols=1).transform(tfms, size=size)
#db = lls.databunch(bs=bs, num_workers=workers)#.normalize(binary_image_stats)
db = lls.databunch(bs=bs, num_workers=workers)
my_stats = db.batch_stats()
db = lls.databunch(bs=bs, num_workers=workers).normalize(my_stats)
db.batch_stats()
```
### Examine Results
```
print('nClass: {} classes: {}'.format(db.c, db.classes))
db
im = train.get(-1)
print(len(train), im.size)
im.show()
```
## Model
```
trial_model = nn.Sequential(
nn.Sequential(
nn.Conv2d(3,8,5), # 60 × 60 × 8
nn.ReLU(),
nn.AvgPool2d(3, stride=2), # 29 × 29 × 8
#nn.Dropout(p=0.25),
nn.Conv2d(8,8,5), # 25 × 25 × 8
nn.ReLU(),
nn.AvgPool2d(3, stride=2), # 12 × 12 × 8
Flatten() # 1152
),
# removed model head to compute flatten size
)
trial_learn = Learner(db, trial_model, loss_func = nn.CrossEntropyLoss(), metrics=accuracy)
trial_learn.summary()
del trial_model
trial_learn.destroy()
gc.collect()
mymodel = nn.Sequential(
nn.Sequential(
nn.Conv2d(3,8,5), # 60 × 60 × 8
nn.ReLU(),
nn.AvgPool2d(3, stride=2), # 29 × 29 × 8
#nn.Dropout(p=0.25),
nn.Conv2d(8,8,5), # 25 × 25 × 8
nn.ReLU(),
nn.AvgPool2d(3, stride=2), # 12 × 12 × 8
Flatten() # 1152
),
nn.Sequential(
# nn.Dropout(p=0.25),
nn.Linear(1152, 144),
nn.ReLU(),
nn.Dropout(p=0.8),
nn.Linear(144, db.c)
)
)
learn = Learner(db, mymodel, loss_func = nn.CrossEntropyLoss(), metrics=accuracy)
learn.summary()
learn.save('initial')
```
# Train Model
```
learn.fit_one_cycle(1, 1e-6) # learn.fit_one_cycle(1, 0.01)
# learn.save('save-1')
learn.lr_find(end_lr=1)
learn.recorder.plot()
learn.load('initial')
learn.fit_one_cycle(100, 3e-3) # learn.fit_one_cycle(1, 0.01)
learn.load('initial')
learn.fit_one_cycle(100, 1e-2) # learn.fit_one_cycle(1, 0.01)
learn.load('initial')
learn.fit_one_cycle(100, 1e-3) # learn.fit_one_cycle(1, 0.01)
learn.load('initial')
learn.fit_one_cycle(100, 1e-4) # learn.fit_one_cycle(1, 0.01)
#train an additional 100 epochs
learn.fit_one_cycle(100, 1e-4) # learn.fit_one_cycle(1, 0.01)
gc.collect()
```
# Targeting Direct Marketing with Amazon SageMaker XGBoost
_**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_
## Background
Direct marketing, either through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to target only the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem.
This notebook presents an example problem to predict if a customer will enroll for a term deposit at a bank, after one or more phone calls. The steps include:
* Preparing your Amazon SageMaker notebook
* Downloading data from the internet into Amazon SageMaker
* Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms
* Estimating a model using the Gradient Boosting algorithm
* Evaluating the effectiveness of the model
* Setting the model up to make on-going predictions
---
## Preparation
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).
```
# Define IAM role
import boto3
import sagemaker
import re
from sagemaker import get_execution_role
region = boto3.Session().region_name
session = sagemaker.Session()
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-xgboost-dm'
role = get_execution_role()
```
Now let's bring in the Python libraries that we'll use throughout the analysis
```
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker # Amazon SageMaker's Python SDK provides many helper functions
from sagemaker.predictor import csv_serializer # Converts strings for HTTP POST requests on inference
! python -m pip install smdebug
```
---
## Data
Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data s3 bucket.
\[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
```
!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
!conda install -y -c conda-forge unzip
!unzip -o bank-additional.zip
```
Now let's read this into a Pandas data frame and take a look.
```
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_rows',10)
data
```
Let's talk about the data. At a high level, we can see:
* We have a little over 40K customer records, and 20 features for each customer
* The features are mixed; some numeric, some categorical
* The data appears to be sorted, at least by `time` and `contact`, maybe more
_**Specifics on each of the features:**_
*Demographics:*
* `age`: Customer's age (numeric)
* `job`: Type of job (categorical: 'admin.', 'services', ...)
* `marital`: Marital status (categorical: 'married', 'single', ...)
* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)
*Past customer events:*
* `default`: Has credit in default? (categorical: 'no', 'unknown', ...)
* `housing`: Has housing loan? (categorical: 'no', 'yes', ...)
* `loan`: Has personal loan? (categorical: 'no', 'yes', ...)
*Past direct marketing contacts:*
* `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)
* `month`: Last contact month of year (categorical: 'may', 'nov', ...)
* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)
* `duration`: Last contact duration, in seconds (numeric). Important note: If duration = 0 then `y` = 'no'.
*Campaign information:*
* `campaign`: Number of contacts performed during this campaign and for this client (numeric, includes last contact)
* `pdays`: Number of days that passed by after the client was last contacted from a previous campaign (numeric)
* `previous`: Number of contacts performed before this campaign and for this client (numeric)
* `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent','success', ...)
*External environment factors:*
* `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)
* `cons.price.idx`: Consumer price index - monthly indicator (numeric)
* `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)
* `euribor3m`: Euribor 3 month rate - daily indicator (numeric)
* `nr.employed`: Number of employees - quarterly indicator (numeric)
*Target variable:*
* `y`: Has the client subscribed a term deposit? (binary: 'yes','no')
### Transformation
Cleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly and is one of the more subjective aspects in the process. Several common techniques include:
* Handling missing values: Some machine learning algorithms are capable of handling missing values, but most would rather not. Options include:
* Removing observations with missing values: This works well if only a very small fraction of observations have incomplete information.
* Removing features with missing values: This works well if there are a small number of features which have a large number of missing values.
* Imputing missing values: Entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.
* Converting categorical to numeric: The most common method is one hot encoding, which for each feature maps every distinct value of that column to its own feature which takes a value of 1 when the categorical feature is equal to that value, and 0 otherwise.
* Oddly distributed data: Although for non-linear models like Gradient Boosted Trees, this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. In some cases, simply taking the natural log of the features is sufficient to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model when one hot encoded.
* Handling more complicated data types: Manipulating images, text, or data at varying grains is left for other notebook templates.
Luckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well at handling sparse or oddly distributed data. Therefore, let's keep pre-processing simple.
```
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators
```
Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, then will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting weather far enough out to plan inventory on umbrellas is probably just as difficult as forecasting umbrella sales without knowledge of the weather. So, including this in your model may give you a false sense of precision.
Following this logic, let's remove the economic features and `duration` from our data as they would need to be forecasted with high precision to use as inputs in future predictions.
Even if we were to use values of the economic indicators from the previous quarter, this value is likely not as relevant for prospects contacted early in the next quarter as those contacted later on.
```
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
```
When building a model whose primary goal is to predict a target value on new data, it is important to understand overfitting. Supervised learning models are designed to minimize error between their predictions of the target value and actuals, in the data they are given. This last part is key, as frequently in their quest for greater accuracy, machine learning models bias themselves toward picking up on minor idiosyncrasies within the data they are shown. These idiosyncrasies then don't repeat themselves in subsequent data, meaning those predictions can actually be made less accurate, at the expense of more accurate predictions in the training phase.
The most common way of preventing this is to build models with the concept that a model shouldn't only be judged on its fit to the data it was trained on, but also on "new" data. There are several different ways of operationalizing this, holdout validation, cross-validation, leave-one-out validation, etc. For our purposes, we'll simply randomly split the data into 3 uneven groups. The model will be trained on 70% of data, it will then be evaluated on 20% of data to give us an estimate of the accuracy we hope to have on "new" data, and 10% will be held back as a final testing dataset which will be used later on.
```
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
```
Amazon SageMaker's XGBoost container expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive it's easiest to do this after the train|validation|test split rather than before. This avoids any misalignment issues due to random reordering.
```
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
```
Now we'll copy the file to S3 for Amazon SageMaker's managed training to pickup.
```
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
```
---
## Training
Now we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm.
There are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate further on gradient boosted trees and how they differ from similar algorithms.
`xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework.
First we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
container = sagemaker.image_uris.retrieve(region=boto3.Session().region_name, framework='xgboost', version='1.0-1')
```
Then, because we're training with the CSV file format, we'll create `s3_input`s that our training function can use as a pointer to the files in S3, which also specify that the content type is CSV.
```
s3_input_train = sagemaker.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.TrainingInput(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
base_job_name = "demo-smdebug-xgboost-regression"
bucket_path='s3://{}/{}/output'.format(bucket, prefix)
```
### Enabling Debugger in Estimator object
#### DebuggerHookConfig
Enabling Amazon SageMaker Debugger in a training job can be accomplished by adding its configuration to the Estimator object constructor:
```python
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig
estimator = Estimator(
...,
debugger_hook_config = DebuggerHookConfig(
s3_output_path="s3://{bucket_name}/{location_in_bucket}", # Required
collection_configs=[
CollectionConfig(
name="metrics",
parameters={
"save_interval": "10"
}
)
]
)
)
```
Here, the `DebuggerHookConfig` object instructs `Estimator` what data we are interested in.
Two parameters are provided in the example:
- `s3_output_path`: it points to S3 bucket/path where we intend to store our debugging tensors.
The amount of data saved depends on multiple factors; the major ones are the training job, data set, model, and frequency of saving tensors.
This bucket should be in your AWS account, and you should have full access control over it.
**Important Note**: this s3 bucket should be originally created in the same region where your training job will be running, otherwise you might run into problems with cross region access.
- `collection_configs`: it enumerates named collections of tensors we want to save.
Collections are a convenient way to organize relevant tensors under the same umbrella to make it easy to navigate them during analysis.
In this particular example, you are instructing Amazon SageMaker Debugger that you are interested in a single collection named `metrics`.
We also instructed Amazon SageMaker Debugger to save the metrics every 10 iterations.
See [Collection](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md#collection) documentation for all parameters that are supported by Collections and DebuggerConfig documentation for more details about all parameters DebuggerConfig supports.
#### Rules
Enabling Rules in a training job can be accomplished by adding the `rules` configuration to the Estimator object constructor.
- `rules`: This new parameter will accept a list of rules you wish to evaluate against the tensors output by this training job.
For rules, Amazon SageMaker Debugger supports two types:
- SageMaker Rules: These are rules specially curated by the data science and engineering teams in Amazon SageMaker which you can opt to evaluate against your training job.
- Custom Rules: You can optionally choose to write your own rule as a Python source file and have it evaluated against your training job.
For Amazon SageMaker Debugger to evaluate a custom rule, you have to provide the S3 location of the rule source and the evaluator image.
In this example, you will use Amazon SageMaker's LossNotDecreasing rule, which helps you identify if you are running into a situation where the training loss is not going down.
```python
from sagemaker.debugger import rule_configs, Rule
estimator = Estimator(
...,
rules=[
Rule.sagemaker(
rule_configs.loss_not_decreasing(),
rule_parameters={
"collection_names": "metrics",
"num_steps": "10",
},
),
],
)
```
- `rule_parameters`: In this parameter, you provide the runtime values of the parameter in your constructor.
You can still choose to pass in other values which may be necessary for your rule to be evaluated.
In this example, you will use Amazon SageMaker's LossNotDecreasing rule to monitor the `metrics` collection.
The rule will alert you if the tensors in `metrics` have not decreased for more than 10 steps.
First we'll need to specify training parameters to the estimator. This includes:
1. The `xgboost` algorithm container
1. The IAM role to use
1. Training instance type and count
1. S3 location for output data
1. Algorithm hyperparameters
And then a `.fit()` function which specifies:
1. S3 location for output data. In this case we have both a training and validation set which are passed in.
```
from sagemaker.debugger import rule_configs, Rule, DebuggerHookConfig, CollectionConfig
from sagemaker.estimator import Estimator
sess = sagemaker.Session()
save_interval = 5
xgboost_estimator = Estimator(
role=role,
base_job_name=base_job_name,
instance_count=1,
instance_type='ml.m5.4xlarge',
image_uri=container,
max_run=1800,
sagemaker_session=sess,
debugger_hook_config=DebuggerHookConfig(
s3_output_path=bucket_path, # Required
collection_configs=[
CollectionConfig(
name="metrics",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="predictions",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="feature_importance",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="average_shap",
parameters={
"save_interval": str(save_interval)
}
)
],
)
)
xgboost_estimator.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgboost_estimator.fit(
{"train": s3_input_train, "validation": s3_input_validation},
# This is a fire and forget event. By setting wait=False, you submit the job to run in the background.
# Amazon SageMaker starts one training job and release control to next cells in the notebook.
# Follow this notebook to see status of the training job.
wait=False
)
```
### Result
As a result of the above command, Amazon SageMaker starts one training job and one rule job for you. The first one is the job that produces the tensors to be analyzed. The second one analyzes the tensors to check if `train-rmse` and `validation-rmse` are not decreasing at any point during training.
Check the status of the training job below.
After your training job is started, Amazon SageMaker starts a rule-execution job to run the LossNotDecreasing rule.
**Note that the next cell blocks until the rule execution job ends. You can stop it at any point to proceed to the rest of the notebook. Once it says Rule Evaluation Status is Started, and shows the `RuleEvaluationJobArn`, you can look at the status of the rule being monitored.**
```
import time
from time import gmtime, strftime
# Below command will give the status of training job
job_name = xgboost_estimator.latest_training_job.name
client = xgboost_estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
print('Training job name: ' + job_name)
print(description['TrainingJobStatus'])
if description['TrainingJobStatus'] != 'Completed':
while description['SecondaryStatus'] not in ['Training', 'Completed']:
description = client.describe_training_job(TrainingJobName=job_name)
primary_status = description['TrainingJobStatus']
secondary_status = description['SecondaryStatus']
print("{}: {}, {}".format(strftime('%X', gmtime()), primary_status, secondary_status))
time.sleep(15)
```
## Data Analysis - Manual
Now that you've trained the system, analyze the data.
Here, you focus on after-the-fact analysis.
You import a basic analysis library, which defines the concept of a trial, representing a single training run.
```
from smdebug.trials import create_trial
description = client.describe_training_job(TrainingJobName=job_name)
s3_output_path = xgboost_estimator.latest_job_debugger_artifacts_path()
# This is where we create a Trial object that allows access to saved tensors.
trial = create_trial(s3_output_path)
```
You can list all the tensors that you know something about. Each one of these names is the name of a tensor. The name is a combination of the feature name, which in these cases, is auto-assigned by XGBoost, and whether it's an evaluation metric, feature importance, or SHAP value.
```
trial.tensor_names()
```
For each tensor, you can ask for the steps at which data was saved (every five steps in this case) and fetch the stored values.
```
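# The steps at which this tensor was saved (every `save_interval` iterations)
trial.tensor("predictions").steps()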
trial.tensor("predictions").values()
```
You can obtain each tensor at each step as a NumPy array.
```
type(trial.tensor("predictions").value(10))
```
### Performance metrics
You can also create a simple function that visualizes the training and validation errors as the training progresses.
Each error metric should get smaller over time as the model converges to a good solution.
Remember that this is an interactive analysis. You are showing these tensors to give an idea of the data.
```
import matplotlib.pyplot as plt
import seaborn as sns
import re
def get_data(trial, tname):
"""
    For the given tensor name, walks through all the iterations
for which you have data and fetches the values.
Returns the set of steps and the values.
"""
tensor = trial.tensor(tname)
steps = tensor.steps()
vals = [tensor.value(s) for s in steps]
return steps, vals
def plot_collection(trial, collection_name, regex='.*', figsize=(8, 6)):
"""
Takes a `trial` and a collection name, and
plots all tensors that match the given regex.
"""
fig, ax = plt.subplots(figsize=figsize)
sns.despine()
tensors = trial.collection(collection_name).tensor_names
for tensor_name in sorted(tensors):
if re.match(regex, tensor_name):
steps, data = get_data(trial, tensor_name)
ax.plot(steps, data, label=tensor_name)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel('Iteration')
plot_collection(trial, "metrics")
```
### Feature importances
You can also visualize the feature priorities as determined by
[xgboost.get_score()](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.get_score).
If you instructed Estimator to log the `feature_importance` collection, all five importance types supported by `xgboost.get_score()` will be available in the collection.
```
def plot_feature_importance(trial, importance_type="weight"):
SUPPORTED_IMPORTANCE_TYPES = ["weight", "gain", "cover", "total_gain", "total_cover"]
if importance_type not in SUPPORTED_IMPORTANCE_TYPES:
raise ValueError(f"{importance_type} is not one of the supported importance types.")
plot_collection(
trial,
"feature_importance",
regex=f"feature_importance/{importance_type}/.*")
plot_feature_importance(trial)
plot_feature_importance(trial, importance_type="cover")
```
### SHAP
[SHAP](https://github.com/slundberg/shap) (SHapley Additive exPlanations) is
another approach to explain the output of machine learning models.
SHAP values represent a feature's contribution to a change in the model output.
You instructed Estimator to log the average SHAP values in this example so the SHAP values (as calculated by [xgboost.predict(pred_contribs=True)](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.predict)) will be available in the `average_shap` collection.
```
plot_collection(trial,"average_shap")
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
## Introduction
Machine learning literature makes heavy use of probabilistic graphical models
and bayesian statistics. In fact, state of the art (SOTA) architectures, such as
[variational autoencoders][vae-blog] (VAE) or [generative adversarial
networks][gan-blog] (GAN), are intrinsically stochastic by nature. To
wholesomely understand research in this field not only do we need a broad
knowledge of mathematics, probability, and optimization but we somehow need
intuition about how these concepts are applied to real world problems. For
example, one of the most common applications of deep learning techniques is
vision. We may want to classify images or generate new ones. Most SOTA
techniques pose these problems in a probabilistic framework. We frequently see
things like $p(\mathbf{x}|\mathbf{z})$ where $\mathbf{x}$ is an image and
$\mathbf{z}$ is a latent variable. What do we mean by the probability of an
image? What is a latent variable, and why is it necessary[^Bishop2006] to pose
the problems this way?
Short answer, it is necessary due to the inherent uncertainty of our universe.
In this case, uncertainty in image acquisition can be introduced via many
sources, such as the recording apparatus, the finite precision of our
measurements, as well as the intrinsic stochasticity of the process being
measured. Perhaps the most important source of uncertainty we will consider is
due to there being sources of variability that are themselves unobserved.
Probability theory provides us with a framework to reason in the presence of
uncertainty and information theory allows us to quantify uncertainty. As we
elluded earlier the field of machine learning makes heavy use of both, and
this is no coincidence.
## Representations
How do we describe a face? The word "face" is a symbol and this symbol means
different things to different people. Yet, there is enough commonality between
our interpretations that we are able to effectively communicate with one
another using the word. How is that? What are the underlying features of faces
that we all hold common? Why is a simple smiley face clip art so obviously
perceived as a face? To make it more concrete, why are two simple ellipses
decorated underneath by a short curve so clearly a face, while an eye lid,
lower lip, one ear and a nostril, not?
**Insert Image of Faces**
*Left: Most would likely agree, this is clearly a face. Middle:
With nearly all of the details removed, a mere two circles and
curve are enough to create what the author still recognizes
as a face. Right: Does this look like a face to you? An ear,
nostril, eyelid, and lip do not seem to convey a face as clearly
as the eyes and the mouth do. We will quantify this demonstration
shortly.*
Features, or representations, are built on the idea that characteristics of the
symbol "face" are not a property of any one face. Rather, they only arise from
the myriad of things we use the symbol to represent. In other words, a
particular face is not ascribed meaning by the word "face" - the word "face"
derives meaning from the many faces it represents. This suggests that facial
characteristics can be described through the statistical properties of all
faces. Loosely speaking, these underlying statistical characteristics are what
the machine learning field often calls latent variables.
## Probability of an Image
Most images are contaminated with noise that must be addressed. At the
highest level, we have noise being added to the data by the imaging device. The
next level of uncertainty comes as a consequence of discretization.
Images in reality are continuous but in the process of imaging we only measure
certain points along the face. Consider for example a military satellite
tracking a vehicle. If one wishes to predict the future location of the van,
the prediction is limited to be within one of the discrete cells that make up
its measurements. However, the true location of the van could be anywhere
within that grid cell. There is also intrinsic stochasticity at the atomic
level that we ignore. The fluctuations taking place at that scale are assumed
to be averaged out in our observations.
The unobserved sources of variability will be our primary focus. Before we
address that, let us lay down some preliminary concepts. We are going to assume
that there exists some true unknown process that determines what faces look
like. A dataset of faces can then be considered as a sample of this process at
various points throughout its life. This suggests that these snapshots are
outputs of the underlying data generating process. Considering the many
sources of uncertainty outlined above, it is natural to describe this process
as a probability distribution. There will be many ways to interpret the data as
a probability, but we will begin by considering any one image to be the result
of a data generating distribution, $P_{data}(\mathbf{x})$. Here $\mathbf{x}$ is considered to be
an image of a face with $n$ pixels. So $P_{data}$ is a joint distribution over
each pixel of the frame with a probability density function (pdf),
$p_{data}(x_1,x_2,\dots,x_n)$.
To build intuition about what $p_{data}(\mathbf{x})$ is and how it relates to
the assumed data generating process, we will explore a simple example. Take an
image with only 2 pixels... [$x_1$,$x_2$] where both $x_1$ and $x_2$ are in
[0,1]. Each image can be considered as a two dimensional point, in
$\mathbb{R}^2$. All possible images would occupy a square in the 2 dimensional
plane. An example of what this might look like can be seen in Figure
\ref{fig:images_in_2dspace} on page \pageref{fig:images_in_2dspace}.
```
x1 = np.random.uniform(size=500)
x2 = np.random.uniform(size=500)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x1,x2, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25)
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('images_in_2dspace.pdf')
```
Any one point inside the unit square would represent an image. For example the image associated with the point $(0.25,0.85)$ is shown below.
```
im = [(0.25, 0.85)]
plt.imshow(im, cmap='gray',vmin=0,vmax=1)
plt.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off'
)
plt.xticks([])
plt.yticks([])
plt.xlabel('Pixel 1 = 0.25 Pixel 2 = 0.85')
plt.savefig('sample_2dspace_image.pdf')
```
Now consider the case where there is some
process correlating the two variables. This
would be similar to there being some rules behind
the structure of faces. We know that this must be
the case because if it weren't then faces would
be created randomly and we would not see the
patterns that we do. In
this case, the pixels would be correlated in
some manner due to the mechanism driving the
construction of faces. In this simple case,
let's consider a direct correlation of the
form $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$
where $\epsilon$ is a noise term coming from
a low variability normal distribution
$\epsilon \sim N(0,\frac{1}{10})$. We see
in Figure \ref{fig:structured_images_in_2dspace}
on page \pageref{fig:structured_images_in_2dspace}
that in this case, the images plotted
in two dimensions resulting from this
relationship form a distinct pattern.
```
x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5
x2 = np.linspace(0,1,200)
eps = np.random.normal(scale=0.1, size=200)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x2,x1(x2)+eps, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25); plt.axes().set_aspect('equal')
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('structured_images_in_2dspace.pdf')
```
We will refer to the structure suggested by
the two dimensional points as the 'manifold'.
This is a common practice when analyzing images.
A 28 by 28 dimensional image will be a point in
784 dimensional space. If we are examining
images with structure, various images of the
number 2 for example, then it turns out that
these images will form a manifold in 784
dimensional space. In most cases, as is the
case in our contrived example, this manifold
exists in a lower dimensional space than that
of the images themselves. The goal is to 'learn'
this manifold. In our simple case we can describe
the manifold as a function of only 1 variable
$$f(t) = <t,\frac{1}{2} \cos(2\pi t)+\frac{1}{2}>$$
This is what we would call the underlying data
generating process. In practice we usually
describe the manifold in terms of a probability
distribution. We will refer to the data
generating distribution in our example as
$p_{test}(x_1, x_2)$. Why did we choose a
probability to describe the manifold created
by the data generating process? How might this
probability be interpreted?
Learning the actual distribution turns out to
be a difficult task. Here we will use a
common non parametric technique for describing
distributions, the histogram. Looking at a
histogram of the images, or two dimensional points,
will give us insight into the structure of the
distribution from which they came. Notice here
though that the histogram merely describes the
distribution, we do not know what it is.
```
from matplotlib.colors import LogNorm
x2 = np.random.uniform(size=100000)
eps = np.random.normal(scale=0.1, size=100000)
hist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())
plt.xlim(0.0,1.0); plt.ylim(-0.3,1.3); plt.axes().set_aspect('equal')
plt.xlabel('Pixel 2'); plt.ylabel('Pixel 1')
plt.colorbar();
plt.savefig('histogram_of_structured_images.pdf')
```
As our intuition might have suggested, the data
generating distribution looks very similar to
the structure suggested by the two dimensional
images plotted above. There is high probability
very near the actual curve
$x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}$
and low probability as we move away. We imposed
the uncertainty via the Gaussian noise term
$\epsilon$. However, in real data the
uncertainty can be due to the myriad sources
outlined above. In these cases a complex
probability distribution isn't an arbitrary
choice for representing the data, it becomes
necessary.
Hopefully we're now beginning to understand how
to interpret $p_{test}(x_1, x_2)$. One might say
$p_{test}$ measures how likely a certain
configuration of $x_1$ and $x_2$ is to have
arisen from the data generating process $f(t)$.
Therefore if one can learn the data generating
distribution, then they have a descriptive
measure of the true underlying data generating
process. This intuition extends to the
$p_{data}(x)$ for faces that was presented
above. A sample from the LFW dataset is shown in
Figure \ref{fig:Agnelo_Queiroz_0001} on page
\pageref{fig:Agnelo_Queiroz_0001}.
# Astronomy 8824 - Numerical and Statistical Methods in Astrophysics
## Statistical Methods Topic I. High Level Backround
These notes are for the course Astronomy 8824: Numerical and Statistical Methods in Astrophysics. It is based on notes from David Weinberg with modifications and additions by Paul Martini.
David's original notes are available from his website: http://www.astronomy.ohio-state.edu/~dhw/A8824/index.html
#### Background reading:
- Statistics, Data Mining, and Machine Learning in Astronomy, Chapter 3 (see David's [Reader's Guide](http://www.astronomy.ohio-state.edu/~dhw/A8824/ivezic_guide.pdf))
```
import math
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import optimize
# matplotlib settings
SMALL_SIZE = 14
MEDIUM_SIZE = 16
BIGGER_SIZE = 18
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('lines', linewidth=2)
plt.rc('axes', linewidth=2)
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
```
LaTex macros hidden here --
$\newcommand{\expect}[1]{{\left\langle #1 \right\rangle}}$
$\newcommand{\intinf}{\int_{-\infty}^{\infty}}$
$\newcommand{\xbar}{\overline{x}}$
### Statistical Tasks in Astrophysics
Four common statistical tasks:
1. Parameter estimation
2. Comparison of hypotheses
3. Absolute evaluation of a hypothesis
4. Forecasting of errors
Another task, slightly less common: Prediction of values from a model fit to some set of data, when the parameters of the model are uncertain.
### Simple Example: Data points with error bars
**Parameter estimation:** What are slope and amplitude of a power-law fit?
What are the uncertainties in the parameters?
When you fit a power-law model to data, you _assume_ that power-law description is valid.
**Hypothesis comparison:** Is a double power-law better than a single power-law?
Hypothesis comparisons are trickier when the number of parameters is different, since one must decide whether the fit to the data is _sufficiently_ better given the extra freedom in the more complex model.
A simpler comparison would be single power-law vs. two constant plateaus with a break at a specified location, both with two parameters.
**Absolute evaluation:** Are the data consistent with a power-law?
Absolute assessments of this sort are generally much more problematic than hypothesis comparisons.
**Forecasting of errors:** How many more measurements, or what reduction of uncertainties in the measurements, would allow single and double power-law models to be clearly distinguished?
Need to specify goals, and assumptions about the data. This is a common need for observing proposals, grant proposals, satellite proposals etc.
### Complicated example: CMB power spectrum with errors.
**Parameter estimation:** In a "vanilla" $\Lambda$CDM model, what are the best values of $\Omega_m$, $\Omega_b$, $h$, $n$, and $\tau$?
One often wants to combine CMB with other data to break degeneracies and get better constraints.
**Hypothesis comparisons:** Are data consistent with $\Omega_m=1$? Do they favor inclusion of space curvature, or gravity waves?
This typically involves comparison of models with different numbers of parameters.
**Absolute assessment:** Can the restricted, "vanilla" $\Lambda$CDM model be rejected?
**Forecasting:** What constraints or tests could be achieved with a new experiment?
This kind of analysis played a key role in the design and approval of WMAP, Planck, DESI, and other major cosmological surveys.
There is presently a lot of work along these lines for future cosmological surveys and CMB experiments.
### PDF, Mean, and Variance
If $p(x)$ is the **probability distribution function** (pdf) of a **random variable** $x$, then $p(x) dx$ is the probability that $x$ lies in a small interval $dx$.
The **expectation value** of a random variable $x$ is $\expect{x} = \intinf xp(x)dx = \mu$. The expectation value of $x$ is equal to the (arithmetic) mean. It is sometimes also written $\mu = E(x)$.
The expectation value of a function $y(x)$ is $\expect{y(x)} = \intinf y(x) p(x) dx.$
The variance is $V(x)=\expect{(x-\mu)^2} \equiv \sigma^2$.
The standard deviation is $\sigma = \sqrt{\sigma^2}$. This is also called the dispersion.
#### Useful variance relation
$$
V(x)=\expect{(x-\mu)^2} = \int (x - \mu)^2 p(x) dx
$$
$$
= \int (x^2 - 2\mu x + \mu^2) p(x) dx = \int x^2 p(x) dx - 2 \mu \int x p(x) dx + \mu^2 \int p(x) dx
$$
$$
= \expect{x^2} - 2 \expect{x}^2 + \expect{x}^2
$$
This reduces to the useful result that $V(x) = \expect{x^2} - \expect{x}^2$.
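A quick numerical check of this identity (a minimal sketch; any distribution with finite variance would do, here an exponential with scale 2):
```
# Verify V(x) = <x^2> - <x>^2 on a large random sample
np.random.seed(8824)
x = np.random.exponential(scale=2.0, size=100000)
print(np.var(x), np.mean(x**2) - np.mean(x)**2)   # both should be close to scale^2 = 4
```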
#### Sum of the variances
For _independent_ random variables $y_1$, $y_2$, ... $y_N$ (drawn from the same distribution or different distributions), the variance of the sum is the sum of the variances:
$$
V(y_1+y_2+...y_N) = \sum_{i=1,N} V(y_i).
$$
This can be proved by induction.
If random variables $x$ and $y$ are independent, then $p(x,y) = p(x)p(y)$ and
$$
{\rm Cov}(x,y) \equiv \expect{(x-\mu_x)(y-\mu_y)}=0.
$$
The second statement can be proved from the first.
#### Demonstration
$$
Var(y_1 + y_2) = \expect{(y_1 + y_2)^2} - \expect{y_1+y_2}^2
$$
$$
= \expect{y_1^2 + 2 y_1 y_2 + y_2^2} - \expect{y_1+y_2}^2
$$
Then looking at just the first term:
$$
\expect{y_1^2 + 2 y_1 y_2 + y_2^2} = \int y_1^2 p(y_1) p(y_2) dy_1 dy_2 + 2 \int y_1 y_2 p(y_1) p(y_2) dy_1 dy_2 + \int y_2^2 p(y_1) p(y_2) dy_1 dy_2
$$
Note that the integral $\int p(y_1) dy_1 = 1$ by definition, so we can simplify the above to:
$$
= \expect{y_1^2} + 2 \expect{y_1 y_2} + \expect{y_2^2}
$$
Now looking at the second term:
$$
\expect{y_1+y_2}^2 = \left[ \int (y_1 + y_2) p(y_1) p(y_2) dy_1 dy_2 \right]^2
$$
$$
= \expect{y_1}^2 + 2 \expect{y_1} \expect{y_2} + \expect{y_2}^2
$$
Now combining these two:
$$
Var(y_1 + y_2) = \expect{y_1^2} + 2 \expect{y_1 y_2} + \expect{y_2^2} - \expect{y_1}^2 - 2 \expect{y_1} \expect{y_2} - \expect{y_2}^2
$$
$$
= \expect{y_1^2} + \expect{y_2^2} - \expect{y_1}^2 - \expect{y_2}^2
$$
Which is equivalent to:
$$
Var(y_1 + y_2) = Var(y_1) + Var(y_2)
$$
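A numerical illustration of this result (a minimal sketch with two independent samples drawn from different distributions):
```
# For independent variables, the variance of the sum equals the sum of the variances
np.random.seed(8824)
y1 = np.random.normal(0.0, 2.0, size=200000)    # variance 4
y2 = np.random.uniform(0.0, 1.0, size=200000)   # variance 1/12
print(np.var(y1 + y2), np.var(y1) + np.var(y2))  # both ~ 4 + 1/12
```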
#### Linearity of Expectation
This is often invoked more generally as a statement about the _Linearity of Expectation_.
$$
\expect{x + y} = \int (x + y) p(x) p(y) dx dy = \int x p(x) p(y) dx dy + \int y p(x) p(y) dx dy = \expect{x} + \expect{y}
$$
### Covariance
Covariance is a measure of the _joint probability_ of 2 random variables. It describes how they change together.
It is commonly written as:
$$
Cov(y_1, y_2) = \expect{ (y_1 - \expect{y_1} ) (y_2 - \expect{y_2}) } = \expect{ (y_1 - \mu_1) (y_2 - \mu_2) }
$$
This can also be written as:
$$
Cov(y_1, y_2) = \expect{y_1 y_2 - \expect{y_1} y_2 - y_1 \expect{y_2} + \expect{y_1} \expect{y_2} }
$$
using the linearity of expectation
$$
= \expect{y_1 y_2} - \expect{y_1}\expect{y_2} - \expect{y_1}\expect{y_2} + \expect{y_1} \expect{y_2}
$$
or
$$
Cov(y_1, y_2) = \expect{y_1 y_2} - \expect{y_1} \expect{y_2}
$$
Note that if $y_1$ and $y_2$ are independent variables,
$$
\expect{y_1 y_2} = \int y_1 y_2 p(y_1) p(y_2) dy_1 dy_2 = \int y_1 p(y_1) dy_1 \int y_2 p(y_2) dy_2 = \expect{y_1} \expect{y_2}
$$
and therefore $Cov(y_1, y_2) = 0$.
### Covariance Example
```
np.random.seed(1216)
sig_x = 2
sig_y = 1
sig_xy = 0
mean = np.array([0, 0], dtype=float)
cov = np.array( [[sig_x, sig_xy], [sig_xy, sig_y]], dtype=float)
x = np.random.multivariate_normal(mean, cov, size=1000)
fig, axarr = plt.subplots(1, 2, figsize=(14,7))
axarr[0].plot(x.T[0], x.T[1], 'k.')
axarr[0].set_xlabel(r"$x_1$")
axarr[0].set_ylabel(r"$x_2$")
axarr[0].set_xlim(-5, 5)
axarr[0].set_ylim(-5, 5)
axarr[0].text(-4, 4, r"$\sigma_{xy} = 0.0$")
sig_x = 2
sig_y = 1
sig_xy = 0.5
mean = np.array([0, 0], dtype=float)
cov = np.array( [[sig_x, sig_xy], [sig_xy, sig_y]], dtype=float)
x = np.random.multivariate_normal(mean, cov, size=1000)
axarr[1].plot(x.T[0], x.T[1], 'k.')
axarr[1].set_xlim(-5, 5)
axarr[1].set_ylim(-5, 5)
axarr[1].axhline(0, color='k', linestyle=':')  # reference line at y = 0
axarr[1].set_xlabel("$x_1$")
axarr[1].text(-4, 4, r"$\sigma_{xy} = 0.5$")
```
### Estimators
An estimator is a mathematical function of data that estimates a quantity of interest. An important distinction to keep in mind is that between "population statistics" (the underlying distribution) and "sample statistics" (the measurements of the population).
Ideally one wants an estimator to be
- _unbiased_ -- even with a small amount of data, the expectation value of estimator is equal to the quantity being estimated
- _efficient_ -- makes good use of the data, giving a low variance about the true value of the quantity
- _robust_ -- isn't easily thrown off by data that violate your assumptions about the pdf, e.g., by non-Gaussian tails of the error distribution
- _consistent_ -- in the limit of lots of data, it converges to the true value
These four desiderata sometimes pull in different directions.
Suppose we have $N$ independent data points (the sample) drawn from an unknown distribution $p(x)$ (the population).
#### The mean estimator
The obvious estimator for the mean of the distribution is the sample mean, $\xbar={1\over N}\sum x_i$. The expectation value for the sample mean is:
$$
\expect{\xbar} = \expect{\frac{1}{N} \sum x_i} =
\frac{1}{N} \sum \expect{x_i} = \mu.
$$
Thus, the sample mean is an _unbiased_ estimator of $\mu$.
#### Variance of the mean estimator
The variance of this estimator is
$$
\expect{(\xbar-\mu)^2} = V\left(\frac{1}{N} \sum x_i\right) =
{1 \over N^2} V\left(\sum x_i\right) =
{1 \over N^2} \sum V(x_i) =
{1 \over N^2} \times N\sigma^2 = {\sigma^2 \over N},
$$
where $\sigma^2$ is the variance of the underlying distribution.
We have used the fact that $\expect{\xbar}=\mu$, and we have used the assumed independence of the $x_i$ to go from the variance of a sum to a sum of variances.
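A Monte Carlo check of the $\sigma^2/N$ scaling (a minimal sketch): draw many samples of size $N$, compute the sample mean of each, and compare the scatter of those means to $\sigma/\sqrt{N}$.
```
# Standard deviation of the sample mean vs. the sigma/sqrt(N) prediction
np.random.seed(8824)
N, Ntrials, sigma = 25, 20000, 3.0
samples = np.random.normal(0.0, sigma, size=(Ntrials, N))
xbar = samples.mean(axis=1)
print(np.std(xbar), sigma/np.sqrt(N))   # both ~ 0.6
```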
#### Other mean estimators
An alternative estimator for the mean is the value of the third sample member, $x_3$.
Since $\expect{x_3} = \mu$, this estimator is unbiased, but $V(x_3) = \sigma^2$, so this estimate is noisier than the sample mean by $\sqrt{N}$.
A more reasonable estimator is the sample _median_, though this is a biased estimator if $p(x)$ is asymmetric about the mean.
If $p(x)$ is Gaussian, then the variance of the sample median is ${\pi \over 2}{\sigma^2 \over N}$, so it is a less _efficient_ estimator than the sample mean.
However, if $p(x)$ has long non-Gaussian tails, then the median may be a much _more_ efficient estimator of the true mean (i.e., giving a more accurate answer for a fixed number of data points), since it is not sensitive to rare large or small values.
Estimators that are insensitive to the extremes of a distribution are often called _robust_ estimators.
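A quick numerical sketch (not part of the notes; the sample size, number of trials, and distributions below are arbitrary choices) makes this efficiency/robustness trade-off concrete:
```
import numpy as np

rng = np.random.default_rng(0)
N, trials = 100, 2000

# Gaussian data: the sample mean scatters less; the median is ~sqrt(pi/2) noisier
gauss = rng.normal(0.0, 1.0, size=(trials, N))
print(gauss.mean(axis=1).std(), np.median(gauss, axis=1).std())

# Heavy-tailed data (Student's t with 2 dof): the median is far more efficient and robust
heavy = rng.standard_t(2, size=(trials, N))
print(heavy.mean(axis=1).std(), np.median(heavy, axis=1).std())
```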
#### Variance estimator
The obvious estimator for the variance of the distribution is the sample variance
$$
s^2 = \frac{1}{N} \sum (x_i-\xbar)^2 = \frac{1}{N} \sum x_i^2 - \xbar^2.
$$
However, a short derivation shows that the sample variance is biased low:
$$
\expect{s^2} = {N-1 \over N}\sigma^2.
$$
This is because we had to use the sample mean rather than the true mean, which on average drives down the variance.
An unbiased estimator is therefore
$$
\hat{\sigma}^2 = {1\over N-1} \sum (x_i-\xbar)^2.
$$
If you compute the mean of a sample, or of data values in a bin, the estimated _standard deviation of the mean_ is
$$
\hat{\sigma}_\mu = \left[{1 \over N(N-1)}\sum (x_i-\xbar)^2\right]^{1/2}.
$$
Note that this is smaller by a factor of $N^{-1/2}$ than the estimate of the dispersion within the bin. You should always be clear about which quantity (dispersion or standard deviation of the mean) you are plotting.
If $p(x)$ is Gaussian, then the distribution of $\xbar/\sigma$ is a Gaussian of width $N^{-1/2}$. However, the distribution of $\xbar/\hat{\sigma}$ is broader (a Student's $t$ distribution).
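The $1/N$ versus $1/(N-1)$ distinction is easy to check numerically; here is a small sketch (sample size, number of trials, and $\sigma$ are arbitrary):
```
import numpy as np

rng = np.random.default_rng(1)
N, trials, sigma = 5, 100000, 1.0
x = rng.normal(0.0, sigma, size=(trials, N))

# ddof=0 divides by N and is biased low by (N-1)/N; ddof=1 divides by N-1 and is unbiased
print(np.var(x, axis=1, ddof=0).mean())   # close to (N-1)/N * sigma^2 = 0.8
print(np.var(x, axis=1, ddof=1).mean())   # close to sigma^2 = 1.0
```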
### Snap-judging Error Bars
What is wrong with this plot?
```
Npts = 20
x = np.linspace(0, 5, Npts)
m = 2
b = 3
y = m*x + b
sig_y = np.random.normal(0, 1, Npts)
fx = y + sig_y
err_y = 3*np.ones(len(x)) # + 2.*np.ones(len(x))
plt.figure(figsize=(10,5))
plt.errorbar(x, fx, yerr=err_y, fmt='bo', capsize=4, label="Data")
plt.plot(x, y, 'k:', label="Relation")
plt.ylabel("Y")
plt.xlabel("X")
plt.legend(loc='upper left')
```
### Bayesian vs. Frequentist Statistics
Suppose we have measured the mean mass of a sample of G stars, by some method, and say: at the 68\% confidence level the mean mass of G stars is $a \pm b$. What does this statement mean?
Bayesian answer: There is some true mean mass $\alpha$ of G stars, and there is a 68\% probability that $a-b \leq \alpha \leq a+b$.
More pedantically: The hypothesis that the true mean mass $\alpha$ of G stars lies in the range $a-b$ to $a+b$ has a 68\% probability of being true.
The **probability of the hypothesis is a real-numbered expression of the degree of belief we should have in the hypothesis**, and it obeys the axioms of probability theory.
In "classical" or "frequentist" statistics, a probability is a statement about the frequency of outcomes in many repeated trials. With this restricted definition, **one can't refer to the probability
of a hypothesis -- it is either true or false**. One can refer to the probability of data if a hypothesis is true, where probability means the fraction of time the data would have come out the way it did in many repeated trials.
Frequentist answer: The statement means something like: if $\alpha = a$, we would have expected to obtain a sample mean in the range $a\pm b$ 68\% of the time.
##### This is the fundamental conceptual difference between Bayesian and frequentist statistics.
**Bayesian:** Evaluate the probability of a hypothesis in light of data (and prior information). Parameter values or probability of truth of a hypothesis are random variables, _data are not_ (though they are drawn from a pdf).
**Frequentist:** Evaluate the probability of obtaining the data --- more precisely, the fraction of times a given _statistic_ (such as the sample mean) applied to the data would come out the way it did in many repeated trials --- given the hypothesis, or parameter values. A probability is a statement about the frequency of outcomes in many repeated trials. Data are random variables, parameter values or truth of hypotheses are not.
#### Summary of the differences
| Bayesian | Frequentist |
| :-: | :-: |
| Evaluate the probability of a hypothesis, given the data | Evaluate the probability of obtaining the data |
| Parameters and probability of truth are random variables | Data are random variables |
| Data are not random variables | Parameters and probability of truth are not random variables |
| Need to specify alternatives to evaluate hypotheses | Statistical tests implicitly account for alternatives |
David's opinion: The Bayesian formulation corresponds better to the way scientists actually think about probability, hypotheses, and data. It provides a better conceptual basis for figuring out what to do in a case where a
standard recipe does not neatly apply. But frequentist methods sometimes seem easier to apply, and they clearly capture _some_ of our intuition about probability.
Bottom line: One should be a Bayesian in principle, but maybe not always
in practice.
### Probability Axioms and Bayes' Theorem
Probabilities are real numbers $0 \leq p \leq 1$ obeying the axioms
$$
p(A|C) + p(\overline{A}|C) = 1.
$$
$$
p(AB|C) = p(A|BC)\,p(B|C).
$$
$\overline{A}$ means "not $A$"
$AB$ means "$A$ and $B$" and is thus equivalent to $BA$.
From this equivalence we see that
$$
p(AB|C) = p(A|BC)p(B|C)=p(BA|C)=p(B|AC)p(A|C).
$$
Equating the second and fourth expressions above, we arrive at **Bayes' Theorem**
$$
p(A|BC) = p(A|C) {p(B|AC) \over p(B|C)}.
$$
### Bayesian Inference
In application to scientific inference, this theorem is usually written
$$
p(H|DI) = p(H|I) {p(D|HI) \over p(D|I)},
$$
where
$H$ = hypothesis, which might be a statement about a parameter value, e.g., the population mean lies in the range $x \rightarrow x+dx$.
$D$ = data
$I$ = background information, which may be minimally informative or highly
informative.
$p(H|I)$ = **prior probability**, i.e., before data are considered
$p(D|HI)$ = **likelihood** of data given $H$ and $I$
$p(D|I)$ = **global likelihood**
$p(H|DI)$ = **posterior probability**, the probability of the hypothesis
after consideration of the data
Bayes' Theorem tells us how to update our estimate of the probability of a hypothesis in light of new data.
It can be applied sequentially, with the posterior probability from one experiment becoming the prior for the next, as more data become available.
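As a toy illustration of this sequential updating (a coin-flip example on a grid of hypotheses; the flips below are invented):
```
import numpy as np

theta = np.linspace(0, 1, 201)   # grid of hypotheses H: p(heads) = theta
prior = np.ones_like(theta)      # flat prior p(H|I)
prior /= np.trapz(prior, theta)

for flip in [1, 1, 0, 1, 0, 1, 1, 0]:                    # data D, one flip at a time
    likelihood = theta**flip * (1 - theta)**(1 - flip)   # p(D|HI)
    posterior = prior * likelihood
    posterior /= np.trapz(posterior, theta)              # normalize by the global likelihood p(D|I)
    prior = posterior                                    # the posterior becomes the next prior

print(theta[np.argmax(posterior)])   # most probable value of theta after all flips
```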
Calculation of the likelihood $p(D|HI)$ is sometimes straightforward, sometimes difficult. The background information
$I$ may specify assumptions such as a Gaussian error distribution or independence of the data points.
Important aspect of Bayesian approach: only the actual data enter, not hypothetical data that could have been taken.
_All the evidence of the data is contained in the likelihood._
### Global Likelihood and Absolute Assessment
The global likelihood of the data, $p(D|I)$, is the sum (or integral) of the likelihood over "all" hypotheses. This can be a slippery concept.
Often $P(D|I)$ doesn't matter: in comparing hypotheses or parameter values, it cancels out.
When needed, it can often be found by requiring that $p(H|DI)$ integrate (or sum) to one, as it must if it is a true probability.
The Bayesian approach forces specification of alternatives to evaluate hypotheses.
Frequentist assessment tends to do this implicitly via the choice of statistical test.
### Criticism of Bayesian approach
The incorporation of priors makes Bayesian methods seem subjective, and it is the main source of criticism of the Bayesian approach.
If the data are compelling and the prior is broad, then the prior doesn't have much effect on the posterior. But if the data are weak, or the prior is narrow, then it can have a big effect.
Sometimes there are well defined ways of assigning an "uninformative" prior, but sometimes there is genuine ambiguity.
Bayesian methods sometimes seem like a lot of work to get to a straightforward answer.
In particular, we sometimes want to carry out an "absolute" hypothesis test without having to enumerate all alternative hypotheses.
### Criticism of frequentist approach
The frequentist approach doesn't correspond as well to scientific intuition. We want to talk about the probability of hypotheses or parameter values.
The choice of which statistical test to apply is often arbitrary. There is not a clear way to go from the result of a test to an actual scientific inference about parameter values or validity of a hypothesis.
Bayesians argue (and I agree) that frequentist methods obtain the appearance of objectivity only by sweeping priors under the rug, making assumptions implicit rather than explicit.
Frequentist approach relies on hypothetical data as well as actual data obtained. Choice of hypothetical data sets is often ambiguous, e.g., in the "stopping" problem.
Sometimes we _do_ have good prior information. It is straightforward to incorporate this in a Bayesian approach, while it is not in the frequentist approach.
Frequentist methods are poorly equipped to handle "nuisance parameters," which in the Bayesian approach are easily handled by marginalization.
For example, the marginal distribution of a parameter $x$
$$
p(x) = \int p(x, a, b, c)\, da\,db\,dc
$$
can only exist if $x$ is a random variable.
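As a small numerical sketch of marginalization (the grid and the toy joint posterior below are invented purely for illustration):
```
import numpy as np

# Toy joint posterior p(x, a | D) on a grid, where `a` is a nuisance parameter
x = np.linspace(-3, 3, 121)
a = np.linspace(0.5, 2.0, 81)
X, A = np.meshgrid(x, a, indexing="ij")
joint = np.exp(-0.5 * (X / A) ** 2) / A              # unnormalized p(x, a | D)
joint /= np.trapz(np.trapz(joint, a, axis=1), x)     # normalize on the grid

# Marginalize over the nuisance parameter: p(x|D) = integral of p(x, a|D) da
p_x = np.trapz(joint, a, axis=1)
```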
| true |
code
| 0.722166 | null | null | null | null |
|
# Earthquakes
In this notebook we'll try to model the intensity of earthquakes, basically replicating one of the examples in [this](http://user.it.uu.se/~thosc112/dahlin2014-lic.pdf) paper. To that end, let's first grab the data we need from USGS. We then filter the data to only include earthquakes of magnitude 7.0 or higher on the Richter scale.
```
from requests import get
from datetime import datetime
from json import loads
import pandas as pd
url = "https://earthquake.usgs.gov/fdsnws/event/1/query.geojson?minsig=600"
resp = get(url, params={"starttime": datetime(1900, 1, 1), "endtime": datetime(2021, 1, 1)})
json = resp.json()
data = pd.DataFrame.from_dict((i["properties"] for i in json["features"]), orient="columns")
data.set_index("time", inplace=True)
data.index = pd.to_datetime(data.index, unit="ms")
data = data.where(data["mag"] >= 7.0).sort_index()
by_year = data.groupby(data.index.year)["mag"].count()
by_year.plot(figsize=(16, 9), color="gray")
```
Next, we'll set up the model for the data. We'll use the same one as Dahlin uses, i.e.
$$
\begin{cases}
d \log{\lambda_t} = \kappa (\mu - \log{\lambda_t})\,dt + \sigma\, dW_t, \\
Y_t \sim \mathcal{P}\left(\lambda_t\right),
\end{cases}
$$
where $\mathcal{P}(x)$ denotes a Poisson distribution with rate $x$.
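Before fitting anything, it can help to build some intuition for the latent dynamics. Below is a minimal simulation sketch of this model with made-up parameter values (an Euler-Maruyama step for the log-intensity followed by Poisson draws); it is only for illustration, not part of the inference. The actual `pyfilter` model definition follows.
```
import numpy as np

# Hypothetical parameter values, chosen only for illustration
kappa, mu, sigma = 0.1, np.log(15.0), 0.3
dt, T = 1.0, 120  # yearly steps

rng = np.random.default_rng(0)
log_lam = np.empty(T)
log_lam[0] = mu
for t in range(1, T):
    # Euler-Maruyama step for d log(lambda) = kappa * (mu - log(lambda)) * dt + sigma * dW
    log_lam[t] = log_lam[t - 1] + kappa * (mu - log_lam[t - 1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Poisson observations given the latent yearly rate
counts = rng.poisson(np.exp(log_lam))
print(counts[:10])
```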
```
from pyfilter.timeseries import models as m, GeneralObservable, StateSpaceModel
from pyfilter.distributions import Prior
from torch.distributions import Poisson, Normal, Exponential, LogNormal
import torch
class EarthquakeObservable(GeneralObservable):
def build_density(self, x):
return Poisson(rate=x.values.exp(), validate_args=False)
priors = Prior(Exponential, rate=5.0), Prior(Normal, loc=0.0, scale=1.0), Prior(LogNormal, loc=0.0, scale=1.0)
initial_state_mean = Prior(Normal, loc=0.0, scale=1.0)
latent = m.OrnsteinUhlenbeck(*priors, initial_state_mean=initial_state_mean, dt=1.0, ndim=1)
obs = EarthquakeObservable(torch.Size([]), ())
ssm = StateSpaceModel(latent, obs)
```
Next, we'll perform the inference. For this model we'll use PMMH together with a gradient-based proposal, corresponding to PMH1 of the thesis referenced earlier.
```
from pyfilter.inference.batch.mcmc import PMMH, proposals as p
from pyfilter.filters.particle import APF
as_tensor = torch.from_numpy(by_year.values).int()
filt = APF(ssm, 500, record_states=True)
alg = PMMH(filt, 3000, num_chains=6, proposal=p.GradientBasedProposal(scale=5e-2)).cuda()
state = alg.fit(as_tensor.cuda())
```
Plot one smoothed realization.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16, 9))
smoothed = filt.smooth(state.filter_state.states).mean((1, 2)).exp().cpu().numpy()[1:]
ax.plot(by_year.index, smoothed, color="gray", label="Rate")
ax2 = ax.twinx()
by_year.plot(ax=ax2, color="salmon", alpha=0.75, label="Earthquakes")
fig.legend()
```
And finally plot the posterior distributions of the parameters.
```
from pyfilter.inference.utils import params_to_tensor
from arviz import plot_trace
parameters = state.samples.values().transpose(1, 0).cpu().numpy()
# fig, ax = plt.subplots(parameters.shape[-1], figsize=(16, 9))
plot_trace(parameters)
```
| true |
code
| 0.620593 | null | null | null | null |
|
[](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/03_getting-started-with-transformers.ipynb)
💡 **Welcome!**
We’ve assembled a toolkit that university instructors and organizers can use to easily prepare labs, homework, or classes. The content is designed in a self-contained way such that it can easily be incorporated into the existing curriculum. This content is free and uses widely known Open Source technologies (`transformers`, `gradio`, etc).
Alternatively, you can request that someone on the Hugging Face team run the tutorials for your class via the [ML demo.cratization tour](https://huggingface2.notion.site/ML-Demo-cratization-tour-with-66847a294abd4e9785e85663f5239652) initiative!
You can find all the tutorials and resources we’ve assembled [here](https://huggingface2.notion.site/Education-Toolkit-7b4a9a9d65ee4a6eb16178ec2a4f3599).
# Tutorial: Getting Started with Transformers
**Learning goals:** The goal of this tutorial is to learn how:
1. Transformer neural networks can be used to tackle a wide range of tasks in natural language processing and beyond.
2. Transfer learning allows one to adapt Transformers to specific tasks.
3. The `pipeline()` function from the `transformers` library can be used to run inference with models from the [Hugging Face Hub](https://huggingface.co/models).
This tutorial is based on the first chapter of our O'Reilly book [_Natural Language Processing with Transformers_](https://transformersbook.com/) - check it out if you want to dive deeper into the topic!
**Duration**: 30-45 minutes
**Prerequisites:** Knowledge of Python and basic familiarity with machine learning
**Author**: [Lewis Tunstall](https://twitter.com/_lewtun) (feel free to ping me with any questions about this tutorial)
All of these steps can be done for free! All you need is an Internet browser and a place where you can write Python 👩💻
## 0. Why Transformers?
Deep learning is currently undergoing a period of rapid progress across a wide variety of domains, including:
* 📖 Natural language processing
* 👀 Computer vision
* 🔊 Audio
* 🧬 Biology
* and many more!
The main driver of these breakthroughs is the **Transformer** -- a novel **neural network** developed by Google researchers in 2017. In short, if you’re into deep learning, you need Transformers!
Here are a few examples of what Transformers can do:
* 💻 They can **generate code** as in products like [GitHub Copilot](https://copilot.github.com/), which is based on OpenAI's family of [GPT models](https://huggingface.co/gpt2?text=My+name+is+Clara+and+I+am).
* ❓ They can be used to **improve search engines**, like [Google did](https://www.blog.google/products/search/search-language-understanding-bert/) with a Transformer called [BERT](https://huggingface.co/bert-base-uncased).
* 🗣️ They can **process speech in multiple languages** to perform speech recognition, speech translation, and language identification. For example, Facebook's [XLS-R model](https://huggingface.co/spaces/facebook/XLS-R-2B-22-16) can automatically transcribe audio in one language to another!
Training these models **from scratch** involves **a lot of resources**: you need large amounts of compute and data, and days of training time 😱.
Fortunately, you don't need to do this in most cases! Thanks to a technique known as **transfer learning**, it is possible to adapt a model that has been trained from scratch (usually called a **pretrained model**) to a variety of downstream tasks. This process is called **fine-tuning** and can typically be carried out with a single GPU and a dataset of the size that you're likely to find in your university or company.
The models that we'll be looking at in this tutorial are all examples of fine-tuned models, and you can learn more about the transfer learning process in the video below:
```
from IPython.display import YouTubeVideo
YouTubeVideo('BqqfQnyjmgg')
```
Now, Transformers are the coolest kids in town, but how can we use them? If only there was a library that could help us ... oh wait, there is! The [Hugging Face Transformers library](https://github.com/huggingface/transformers) provides a unified API across dozens of Transformer architectures, as well as the means to train models and run inference with them. So to get started, let's install the library with the following command:
```
%%capture
%pip install transformers[sentencepiece]
```
Now that we've installed the library, let's take a look at some applications!
## 1. Pipelines for Transformers
The fastest way to learn what Transformers can do is via the `pipeline()` function. This function loads a model from the Hugging Face Hub and takes care of all the preprocessing and postprocessing steps that are needed to convert inputs into predictions:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/pipeline.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=800>
In the next few sections we'll see how these steps are combined for different applications. If you want to learn more about what is happening under the hood, then check out the video below:
```
YouTubeVideo('1pedAIvTWXk')
```
## 2. Text classification
Let's start with one of the most common tasks in NLP: text classification. We need a snippet of text for our models to analyze, so let's use the following (fictitious!) customer feedback about a certain online order:
```
text = """Dear Amazon, last week I ordered an Optimus Prime action figure \
from your online store in Germany. Unfortunately, when I opened the package, \
I discovered to my horror that I had been sent an action figure of Megatron \
instead! As a lifelong enemy of the Decepticons, I hope you can understand my \
dilemma. To resolve the issue, I demand an exchange of Megatron for the \
Optimus Prime figure I ordered. Enclosed are copies of my records concerning \
this purchase. I expect to hear from you soon. Sincerely, Bumblebee."""
```
While we're at it, let's create a simple wrapper so that we can pretty print out texts:
```
import textwrap
wrapper = textwrap.TextWrapper(width=80, break_long_words=False, break_on_hyphens=False)
print(wrapper.fill(text))
```
Now suppose that we'd like to predict the _sentiment_ of this text, i.e. whether the feedback is positive or negative. This is a special type of text classification that is often used in industry to aggregate customer feedback across products or services. The example below shows how a Transformer like BERT converts the inputs into atomic chunks called **tokens** which are then fed through the network to produce a single prediction:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/clf_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
Loading a Transformer model for this task is quite simple. We just need to specify the task in the `pipeline()` function as follows:
```
from transformers import pipeline
sentiment_pipeline = pipeline('text-classification')
```
When you run this code, you'll see a message about which Hub model is being used by default. In this case, the `pipeline()` function loads the `distilbert-base-uncased-finetuned-sst-2-english` model, which is a small BERT variant trained on [SST-2](https://paperswithcode.com/sota/sentiment-analysis-on-sst-2-binary), a sentiment analysis dataset.
💡 The first time you execute the code, the model will be automatically downloaded from the Hub and cached for later use!
Now we are ready to run our example through the pipeline and look at some predictions:
```
sentiment_pipeline(text)
```
The model predicts negative sentiment with a high confidence, which makes sense given that we have a disgruntled customer. You can also see that the pipeline returns a list of Python dictionaries with the predictions. We can also pass several texts at the same time, in which case we would get several dicts in the list, one for each text.
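For example, here is a quick sketch with two made-up snippets (the exact scores will depend on the model version):
```
toy_texts = ["I love my new Optimus Prime figure!", "The package arrived damaged and late."]
sentiment_pipeline(toy_texts)
```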
⚡ **Your turn!** Feed a list of texts with different types of sentiment to the `sentiment_pipeline` object. Do the predictions always make sense?
## 3. Named entity recognition
Let's now do something a little more sophisticated. Instead of just finding the overall sentiment, let's see if we can extract **entities** such as organizations, locations, or individuals from the text. This task is called named entity recognition, or NER for short. Instead of predicting just a class for the whole text, **a class is predicted for each token**, as shown in the example below:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/ner_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
Again, we just load a pipeline for NER without specifying a model. This will load a default BERT model that has been trained on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset:
```
ner_pipeline = pipeline('ner')
```
When we pass our text through the model, we now get a long list of Python dictionaries, where each dictionary corresponds to one detected entity. Since multiple tokens can correspond to a single entity, we can apply an aggregation strategy that merges entities if the same class appears in consecutive tokens:
```
entities = ner_pipeline(text, aggregation_strategy="simple")
print(entities)
```
This isn't very easy to read, so let's clean up the outputs a bit:
```
for entity in entities:
print(f"{entity['word']}: {entity['entity_group']} ({entity['score']:.2f})")
```
That's much better! It seems that the model found most of the named entities but was confused about "Megatron" and "Decepticons", which are characters in the Transformers franchise. This is no surprise since the original dataset probably did not contain many Transformers characters. For this reason it makes sense to further fine-tune a model on your own dataset!
Now that we've seen an example of text and token classification using Transformers, let's look at an interesting application called **question answering**.
## 4. Question answering
In this task, the model is given a **question** and a **context** and needs to find the answer to the question within the context. This problem can be rephrased as a classification problem: For each token the model needs to predict whether it is the start or the end of the answer. In the end we can extract the answer by looking at the span between the token with the highest start probability and highest end probability:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/qa_arch.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
You can imagine that this requires quite a bit of pre- and post-processing logic. Good thing that the pipeline takes care of all that! As usual, we load the model by specifying the task in the `pipeline()` function:
```
qa_pipeline = pipeline("question-answering")
```
This default model is trained on the famous [SQuAD dataset](https://huggingface.co/datasets/squad). Let's see if we can ask it what the customer wants:
```
question = "What does the customer want?"
outputs = qa_pipeline(question=question, context=text)
outputs
```
Awesome, that sounds about right!
## 5. Text summarization
Let's see if we can go beyond these natural language understanding (NLU) tasks, where BERT excels, and delve into the generative domain. Note that generation is much more computationally demanding, since we usually generate one token at a time and need to run this several times. An example of how this process works is shown below:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/gen_steps.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=600>
A popular task involving generation is summarization. Let's see if we can use a transformer to generate a summary for us:
```
summarization_pipeline = pipeline("summarization")
```
This model was trained on the [CNN/Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail) to summarize news articles.
```
outputs = summarization_pipeline(text, max_length=45, clean_up_tokenization_spaces=True)
print(wrapper.fill(outputs[0]['summary_text']))
```
That's not too bad! We can see the model was able to get the main gist of the customer feedback and even identified the author as "Bumblebee".
## 6. Translation
But what if there is no model in the language of my data? You can still try to translate the text. The [Helsinki NLP team](https://huggingface.co/models?pipeline_tag=translation&sort=downloads&search=Helsinkie-NLP) has provided over 1,000 language pair models for translation 🤯. Here we load one that translates English to German:
```
translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
```
Let's translate our text to German:
```
outputs = translator(text, clean_up_tokenization_spaces=True, min_length=100)
print(wrapper.fill(outputs[0]['translation_text']))
```
We can see that the text is clearly not perfectly translated, but the core meaning stays the same. Another cool application of translation models is data augmentation via backtranslation!
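Here is a rough sketch of that idea: translate the German output back to English and treat the result as a paraphrase of the original. The reverse-direction checkpoint name below is an assumption on our part, so check the Hub for the exact model you want.
```
# Sketch of backtranslation: German output -> English paraphrase
back_translator = pipeline("translation_de_to_en", model="Helsinki-NLP/opus-mt-de-en")
back = back_translator(outputs[0]['translation_text'], clean_up_tokenization_spaces=True)
print(wrapper.fill(back[0]['translation_text']))
```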
## 7. Zero-shot classification
As a last example, let's have a look at a cool application showing the versatility of Transformers: zero-shot classification. In zero-shot classification, the model receives a text and a list of candidate labels and determines which labels are compatible with the text. Instead of relying on a fixed set of classes, this allows for flexible classification without any labelled data, and it usually makes a good first baseline!
```
zero_shot_classifier = pipeline("zero-shot-classification",
model="vicgalle/xlm-roberta-large-xnli-anli")
```
Let's have a look at an example:
```
text = 'Dieser Tutorial ist großartig! Ich hoffe, dass jemand von Hugging Face meine Universität besuchen wird :)'
classes = ['Treffen', 'Arbeit', 'Digital', 'Reisen']
zero_shot_classifier(text, classes, multi_label=True)
```
This seems to have worked really well on this short example. Naturally, for longer and more domain-specific examples this approach might suffer.
## 8. Going beyond text
As mentioned at the start of this tutorial, Transformers can also be used for domains other than NLP! For these domains, there are many more pipelines that you can experiment with. Look at the following list for an overview:
```
from transformers import pipelines
for task in pipelines.SUPPORTED_TASKS:
print(task)
```
Let's have a look at an application involving images!
### Computer vision
Recently, transformer models have also entered computer vision. Check out the DETR model on the [Hub](https://huggingface.co/facebook/detr-resnet-101-dc5):
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/object_detection.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
### Audio
Another promising area is audio processing. Especially in speech-to-text, there have been some promising advances recently. See for example the [wav2vec2 model](https://huggingface.co/facebook/wav2vec2-base-960h):
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/speech2text.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
### Table QA
Finally, a lot of real-world data is still in the form of tables. Being able to query tables is very useful, and with [TAPAS](https://huggingface.co/google/tapas-large-finetuned-wtq) you can do tabular question answering:
<img src="https://github.com/huggingface/workshops/blob/main/nlp-zurich/images/tapas.png?raw=1" alt="Alt text that describes the graphic" title="Title text" width=400>
## 9. Where to next?
Hopefully this tutorial has given you a taste of what Transformers can do and you're now excited to learn more! Here's a few resources you can use to dive deeper into the topic and the Hugging Face ecosystem:
🤗 **A Tour through the Hugging Face Hub**
In this tutorial, you get to:
- Explore the over 30,000 models shared in the Hub.
- Learn efficient ways to find the right model and datasets for your own task.
- Learn how to contribute and work collaboratively in your ML workflows
***Duration: 20-40 minutes***
👉 [click here to access the tutorial](https://www.notion.so/Workshop-A-Tour-through-the-Hugging-Face-Hub-2098e4bae9ba4288857e85c87ff1c851)
✨ **Build and Host Machine Learning Demos with Gradio & Hugging Face**
In this tutorial, you get to:
- Explore ML demos created by the community.
- Build a quick demo for your machine learning model in Python using the `gradio` library
- Host the demos for free with Hugging Face Spaces
- Add your demo to the Hugging Face org for your class or conference
***Duration: 20-40 minutes***
👉 [click here to access the tutorial](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/02_ml-demos-with-gradio.ipynb)
🎓 **The Hugging Face Course**
This course teaches you about applying Transformers to various tasks in natural language processing and beyond. Along the way, you'll learn how to use the Hugging Face ecosystem — 🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers, and 🤗 Accelerate — as well as the Hugging Face Hub. It's completely free too!
```
YouTubeVideo('00GKzGyWFEs')
```
| true |
code
| 0.602296 | null | null | null | null |
|
<table style="border: none" align="center">
<tr style="border: none">
<th style="border: none"><font face="verdana" size="4" color="black"><b> Demonstrate adversarial training using ART </b></font></font></th>
</tr>
</table>
In this notebook we demonstrate adversarial training using ART on the MNIST dataset.
## Contents
1. [Load prereqs and data](#prereqs)
2. [Train and evaluate a baseline classifier](#classifier)
3. [Adversarially train a robust classifier](#adv_training)
4. [Evaluate the robust classifier](#evaluation)
<a id="prereqs"></a>
## 1. Load prereqs and data
```
import warnings
warnings.filterwarnings('ignore')
from keras.models import load_model
from art.config import ART_DATA_PATH
from art.utils import load_dataset, get_file
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod
from art.attacks.evasion import BasicIterativeMethod
from art.defences.trainer import AdversarialTrainer
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
(x_train, y_train), (x_test, y_test), min_, max_ = load_dataset('mnist')
```
<a id="classifier"></a>
## 2. Train and evaluate a baseline classifier
Load the classifier model:
```
path = get_file('mnist_cnn_original.h5', extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/p2nyzne9chcerid/mnist_cnn_original.h5?dl=1')
classifier_model = load_model(path)
classifier = KerasClassifier(clip_values=(min_, max_), model=classifier_model, use_logits=False)
classifier_model.summary()
```
Evaluate the classifier performance on the first 100 original test samples:
```
x_test_pred = np.argmax(classifier.predict(x_test[:100]), axis=1)
nb_correct_pred = np.sum(x_test_pred == np.argmax(y_test[:100], axis=1))
print("Original test data (first 100 images):")
print("Correctly classified: {}".format(nb_correct_pred))
print("Incorrectly classified: {}".format(100-nb_correct_pred))
```
Generate some adversarial samples:
```
attacker = FastGradientMethod(classifier, eps=0.5)
x_test_adv = attacker.generate(x_test[:100])
```
And evaluate performance on those:
```
x_test_adv_pred = np.argmax(classifier.predict(x_test_adv), axis=1)
nb_correct_adv_pred = np.sum(x_test_adv_pred == np.argmax(y_test[:100], axis=1))
print("Adversarial test data (first 100 images):")
print("Correctly classified: {}".format(nb_correct_adv_pred))
print("Incorrectly classified: {}".format(100-nb_correct_adv_pred))
```
<a id="adv_training"></a>
## 3. Adversarially train a robust classifier
```
path = get_file('mnist_cnn_robust.h5', extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/yutsncaniiy5uy8/mnist_cnn_robust.h5?dl=1')
robust_classifier_model = load_model(path)
robust_classifier = KerasClassifier(clip_values=(min_, max_), model=robust_classifier_model, use_logits=False)
```
Note: the robust classifier has the same architecture as above, except the first dense layer has **1024** instead of **128** units. (This was recommended by Madry et al. (2017), *Towards Deep Learning Models Resistant to Adversarial Attacks*)
```
robust_classifier_model.summary()
```
Also as recommended by Madry et al., we use BIM/PGD attacks during adversarial training:
```
attacks = BasicIterativeMethod(robust_classifier, eps=0.3, eps_step=0.01, max_iter=40)
```
Perform adversarial training:
```
# We had performed this before, starting with a randomly initialized model.
# Adversarial training takes about 80 minutes on an NVIDIA V100.
# The resulting model is the one loaded from mnist_cnn_robust.h5 above.
# Here is the command we had used for the Adversarial Training
# trainer = AdversarialTrainer(robust_classifier, attacks, ratio=1.0)
# trainer.fit(x_train, y_train, nb_epochs=83, batch_size=50)
```
<a id="evaluation"></a>
## 4. Evaluate the robust classifier
Evaluate the robust classifier's performance on the original test data:
```
x_test_robust_pred = np.argmax(robust_classifier.predict(x_test[:100]), axis=1)
nb_correct_robust_pred = np.sum(x_test_robust_pred == np.argmax(y_test[:100], axis=1))
print("Original test data (first 100 images):")
print("Correctly classified: {}".format(nb_correct_robust_pred))
print("Incorrectly classified: {}".format(100-nb_correct_robust_pred))
```
Evaluate the robust classifier's performance on the adversarial test data (**white-box** setting):
```
attacker_robust = FastGradientMethod(robust_classifier, eps=0.5)
x_test_adv_robust = attacker_robust.generate(x_test[:100])
x_test_adv_robust_pred = np.argmax(robust_classifier.predict(x_test_adv_robust), axis=1)
nb_correct_adv_robust_pred = np.sum(x_test_adv_robust_pred == np.argmax(y_test[:100], axis=1))
print("Adversarial test data (first 100 images):")
print("Correctly classified: {}".format(nb_correct_adv_robust_pred))
print("Incorrectly classified: {}".format(100-nb_correct_adv_robust_pred))
```
Compare the performance of the original and the robust classifier over a range of `eps` values:
```
eps_range = [0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
nb_correct_original = []
nb_correct_robust = []
for eps in eps_range:
attacker.set_params(**{'eps': eps})
attacker_robust.set_params(**{'eps': eps})
x_test_adv = attacker.generate(x_test[:100])
x_test_adv_robust = attacker_robust.generate(x_test[:100])
x_test_adv_pred = np.argmax(classifier.predict(x_test_adv), axis=1)
nb_correct_original += [np.sum(x_test_adv_pred == np.argmax(y_test[:100], axis=1))]
x_test_adv_robust_pred = np.argmax(robust_classifier.predict(x_test_adv_robust), axis=1)
nb_correct_robust += [np.sum(x_test_adv_robust_pred == np.argmax(y_test[:100], axis=1))]
eps_range = [0] + eps_range
nb_correct_original = [nb_correct_pred] + nb_correct_original
nb_correct_robust = [nb_correct_robust_pred] + nb_correct_robust
fig, ax = plt.subplots()
ax.plot(np.array(eps_range), np.array(nb_correct_original), 'b--', label='Original classifier')
ax.plot(np.array(eps_range), np.array(nb_correct_robust), 'r--', label='Robust classifier')
legend = ax.legend(loc='upper center', shadow=True, fontsize='large')
legend.get_frame().set_facecolor('#00FFCC')
plt.xlabel('Attack strength (eps)')
plt.ylabel('Correct predictions')
plt.show()
```
| true |
code
| 0.712795 | null | null | null | null |
|
# Title of the work
```
import pickle
import logging
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from matplotlib import rcParams
rcParams['font.size'] = 14
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# logging.getLogger('tensorflow').setLevel(logging.INFO)
print('Tensorflow version:', tf.__version__)
```
## Definitions
```
number_components = [x for x in range(1, 9)]
encoder_layers = [
[40],
[100, 40],
[400, 100, 40],
]
lr = 0.01
# lr = 0.001
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
# optimizer = tf.keras.optimizers.RMSprop(learning_rate=lr)
# dataset_filter = 'all' # done
dataset_filter = 'normal' # doing now
seed = 42
np.random.seed(seed)
number_epochs = 600
test_size = 0.5 # proportion of the number of samples used for testing, i.e., (1-test_size) used for training
figure_format = 'svg'
folder = '/nobackup/carda/datasets/ml-simulation-optical/2019-ecoc-demo'
```
## Importing dataset
```
with open(folder + '/compiled-dataset.h5', 'rb') as file:
final_dataframe, scaled_dataframe, class_columns, class_names = pickle.load(file)
input_dim = final_dataframe.shape[1] - 3 # the last three columns are classes
```
## Auxiliary functions
```
def build_model(data_dim, layers, optimizer='sgd', loss='mse', metrics=['mse', 'msle']):
model = tf.keras.Sequential(name='encoder_' + '-'.join(str(x) for x in layers))
model.add(tf.keras.layers.Dense(layers[0], input_shape=(data_dim,), name='input_and_0'))
for i in range(1, len(layers)-1):
model.add(tf.keras.layers.Dense(layers[i], name=f'encoder_{i}'))
print('enc:', layers[i], i)
# model.add(tf.keras.layers.Dense(layers[len(layers)-1], name=f'encoder_{len(layers)-1}', activation='tanh'))
for i in range(len(layers)-1, -1, -1):
model.add(tf.keras.layers.Dense(layers[i], name=f'decoder_{i}'))
print('dec:', layers[i], i)
# model.add(DenseTied(model.layers[i], name=f'decoder_{i}'))
model.add(tf.keras.layers.Dense(data_dim, name=f'output'))
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
return model
```
## Building training and testing datasets
```
if dataset_filter == 'normal':
normal_conditions = scaled_dataframe[(scaled_dataframe['attack'] == 0)].values
else:
normal_conditions = scaled_dataframe.values
x_train, x_test, y_train, y_test = train_test_split(normal_conditions[:, :input_dim], normal_conditions[:, -1], test_size=test_size, random_state=seed)
```
## Training the autoencoders
```
histories = []
for layer in encoder_layers:
for n_components in number_components:
final_layer = layer + [n_components]
print(final_layer)
model = build_model(input_dim, final_layer, optimizer=optimizer)
model.summary()
# saving a graphical representation
tf.keras.utils.plot_model(model, to_file=f'./models/{dataset_filter}_{optimizer._name}_{lr}_{model.name}-model.png', show_shapes=True, show_layer_names=False)
history = model.fit(x_train, x_train, epochs=number_epochs, batch_size=64, verbose=0, validation_data=(x_test, x_test))
model.save(f'./models/{dataset_filter}_{optimizer._name}_{lr}_{model.name}-model.h5')
histories.append(history.history)
metrics = [x for x in histories[0].keys() if 'val' not in x]
for i, metric in enumerate(metrics):
plt.figure(figsize=(12, 4.5))
plt.subplot(1, 2, 1)
plt.title(f'Optm: {optimizer._name} / lr: {lr}')
for j, layer in enumerate(encoder_layers):
        for k, n_components in enumerate(number_components):
layers = layer + [n_components]
ls = '-'
if len(layers) == 2:
ls = '-'
elif len(layers) == 3:
ls = ':'
elif len(layers) == 4:
ls = '--'
            # index into `histories` following the training-loop order (layer outer, n_components inner)
            plt.semilogy(histories[j * len(number_components) + k][metric], label='-'.join(str(x) for x in layers), linestyle=ls)
plt.xlabel('Epoch')
plt.ylabel(metric)
plt.subplot(1, 2, 2)
for j, layer in enumerate(encoder_layers):
        for k, n_components in enumerate(number_components):
layers = layer + [n_components]
ls = '-'
if len(layers) == 2:
ls = '-'
elif len(layers) == 3:
ls = ':'
elif len(layers) == 4:
ls = '--'
            idx = j * len(number_components) + k  # index into `histories` following the training-loop order
            diff = np.array(histories[idx]['val_' + metric]) - np.array(histories[idx][metric])
            print(idx, np.sum(diff), np.mean(diff))
            plt.semilogy(histories[idx]['val_' + metric], label='-'.join(str(x) for x in layers), linestyle=ls)
plt.xlabel('Epoch')
plt.ylabel('val ' + metric)
# plt.xlim([-5, 50])
plt.legend(ncol=2)
plt.tight_layout()
plt.savefig(f'./figures/{dataset_filter}_{optimizer._name}_{lr}_{"-".join(str(x) for x in layers)}-accuracy-{metric}.{figure_format}')
plt.show()
with open(f'./models/{dataset_filter}_histories.h5', 'wb') as file:
pickle.dump({'histories': histories}, file)
print('done')
```
| true |
code
| 0.693304 | null | null | null | null |
|
Ordinal Regression
--
Ordinal regression aims at fitting a model to some data $(X, Y)$, where $Y$ is an ordinal variable. To do so, we use a `VGP` model with a specific likelihood (`gpflow.likelihoods.Ordinal`).
```
import gpflow
import numpy as np
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12, 6)
plt = matplotlib.pyplot
#make a one dimensional ordinal regression problem
# This function generates a set of inputs X,
# quantitative output f (latent) and ordinal values Y
def generate_data(num_data):
# First generate random inputs
X = np.random.rand(num_data, 1)
# Now generate values of a latent GP
kern = gpflow.kernels.RBF(1, lengthscales=0.1)
K = kern.compute_K_symm(X)
f = np.random.multivariate_normal(mean=np.zeros(num_data), cov=K).reshape(-1, 1)
# Finally convert f values into ordinal values Y
Y = np.round((f + f.min())*3)
Y = Y - Y.min()
Y = np.asarray(Y, np.float64)
return X, f, Y
np.random.seed(1)
num_data = 20
X, f, Y = generate_data(num_data)
plt.figure(figsize=(11, 6))
plt.plot(X, f, '.')
plt.ylabel('latent function value')
plt.twinx()
plt.plot(X, Y, 'kx', mew=1.5)
plt.ylabel('observed data value')
# construct ordinal likelihood - bin_edges is the same as unique(Y) but centered
bin_edges = np.array(np.arange(np.unique(Y).size + 1), dtype=float)
bin_edges = bin_edges - bin_edges.mean()
likelihood=gpflow.likelihoods.Ordinal(bin_edges)
# build a model with this likelihood
m = gpflow.models.VGP(X, Y,
kern=gpflow.kernels.Matern32(1),
likelihood=likelihood)
# fit the model
gpflow.train.ScipyOptimizer().minimize(m)
# here we'll plot the expected value of Y +- 2 std deviations, as if the distribution were Gaussian
plt.figure(figsize=(11, 6))
Xtest = np.linspace(m.X.read_value().min(), m.X.read_value().max(), 100).reshape(-1, 1)
mu, var = m.predict_y(Xtest)
line, = plt.plot(Xtest, mu, lw=2)
col=line.get_color()
plt.plot(Xtest, mu+2*np.sqrt(var), '--', lw=2, color=col)
plt.plot(Xtest, mu-2*np.sqrt(var), '--', lw=2, color=col)
plt.plot(m.X.read_value(), m.Y.read_value(), 'kx', mew=2)
# to see the predictive density, try predicting every possible discrete value for Y.
def pred_density(m):
Xtest = np.linspace(m.X.read_value().min(), m.X.read_value().max(), 100).reshape(-1, 1)
ys = np.arange(m.Y.read_value().max()+1)
densities = []
for y in ys:
Ytest = np.ones_like(Xtest) * y
# Predict the log density
densities.append(m.predict_density(Xtest, Ytest))
return np.hstack(densities).T
fig = plt.figure(figsize=(14, 6))
plt.imshow(np.exp(pred_density(m)), interpolation='nearest',
extent=[m.X.read_value().min(), m.X.read_value().max(), -0.5, m.Y.read_value().max()+0.5],
origin='lower', aspect='auto', cmap=plt.cm.viridis)
plt.colorbar()
plt.plot(X, Y, 'kx', mew=2, scalex=False, scaley=False)
# Predictive density for a single input x=0.5
x_new = 0.5
ys = np.arange(np.max(m.Y.value+1)).reshape([-1, 1])
x_new_vec = x_new*np.ones_like(ys)
# for predict_density x and y need to have the same number of rows
dens_new = np.exp(m.predict_density(x_new_vec, ys))
fig = plt.figure(figsize=(8, 4))
plt.bar(x=ys.flatten(), height=dens_new.flatten())
```
| true |
code
| 0.695273 | null | null | null | null |
|
# `ricecooker` exercises
This mini-tutorial will walk you through the steps of running a simple chef script `ExercisesChef` that creates two exercise nodes and four exercise questions.
### Running the notebooks
To follow along and run the code in this notebook, you'll need to clone the `ricecooker` repository, create a virtual environment, install `ricecooker` using `pip install ricecooker`, install Jupyter notebook using `pip install jupyter`, then start the Jupyter notebook server by running `jupyter notebook`. You will then be able to run all the code sections in this notebook and poke around.
### Creating a Sushi Chef class
```
from ricecooker.chefs import SushiChef
from ricecooker.classes.nodes import TopicNode, ExerciseNode
from ricecooker.classes.questions import SingleSelectQuestion, MultipleSelectQuestion, InputQuestion, PerseusQuestion
from ricecooker.classes.licenses import get_license
from le_utils.constants import licenses
from le_utils.constants import exercises
from le_utils.constants.languages import getlang
class ExercisesChef(SushiChef):
channel_info = {
'CHANNEL_TITLE': 'Sample Exercises',
'CHANNEL_SOURCE_DOMAIN': '<yourdomain.org>', # where you got the content
'CHANNEL_SOURCE_ID': '<unique id for channel>', # channel's unique id CHANGE ME
'CHANNEL_LANGUAGE': 'en', # le_utils language code
'CHANNEL_DESCRIPTION': 'A test channel with different types of exercise questions', # (optional)
'CHANNEL_THUMBNAIL': None, # (optional)
}
def construct_channel(self, **kwargs):
channel = self.get_channel(**kwargs)
topic = TopicNode(title="Math Exercises", source_id="folder-id")
channel.add_child(topic)
exercise_node = ExerciseNode(
source_id='<some unique id>',
title='Basic questions',
author='LE content team',
description='Showcase of the simple question type supported by Ricecooker and Studio',
language=getlang('en').code,
license=get_license(licenses.PUBLIC_DOMAIN),
thumbnail=None,
exercise_data={
'mastery_model': exercises.M_OF_N, # \
'm': 2, # learners must get 2/3 questions correct to complete exercise
'n': 3, # /
'randomize': True, # show questions in random order
},
questions=[
MultipleSelectQuestion(
id='sampleEX_Q1',
question = "Which numbers the following numbers are even?",
correct_answers = ["2", "4",],
all_answers = ["1", "2", "3", "4", "5"],
hints=['Even numbers are divisible by 2.'],
),
SingleSelectQuestion(
id='sampleEX_Q2',
question = "What is 2 times 3?",
correct_answer = "6",
all_answers = ["2", "3", "5", "6"],
hints=['Multiplication of $a$ by $b$ is like computing the area of a rectangle with length $a$ and width $b$.'],
),
InputQuestion(
id='sampleEX_Q3',
question = "Name one of the *factors* of 10.",
answers = ["1", "2", "5", "10"],
hints=['The factors of a number are the divisors of the number that leave a whole remainder.'],
)
]
)
topic.add_child(exercise_node)
# LOAD JSON DATA (as string) FOR PERSEUS QUESTIONS
RAW_PERSEUS_JSON_STR = open('../../examples/exercises/chefdata/perseus_graph_question.json', 'r').read()
# or
# import requests
# RAW_PERSEUS_JSON_STR = requests.get('https://raw.githubusercontent.com/learningequality/sample-channels/master/contentnodes/exercise/perseus_graph_question.json').text
exercise_node2 = ExerciseNode(
source_id='<another unique id>',
title='An exercise containing a perseus question',
author='LE content team',
description='An example exercise with a Persus question',
language=getlang('en').code,
license=get_license(licenses.CC_BY, copyright_holder='Copyright holder name'),
thumbnail=None,
exercise_data={
'mastery_model': exercises.M_OF_N,
'm': 1,
'n': 1,
},
questions=[
PerseusQuestion(
id='ex2bQ4',
raw_data=RAW_PERSEUS_JSON_STR,
source_url='https://github.com/learningequality/sample-channels/blob/master/contentnodes/exercise/perseus_graph_question.json'
),
]
)
topic.add_child(exercise_node2)
return channel
```
### Running the chef
Run your chef by creating an instance of the chef class and calling its `run` method:
```
chef = ExercisesChef()
args = {
'command': 'dryrun', # use 'uploadchannel' for real run
'verbose': True,
'token': 'YOURTOKENHERE9139139f3a23232'
}
options = {}
chef.run(args, options)
```
Congratulations, you put some math exercises on the internet!
**Note**: you will need to change the value of `CHANNEL_SOURCE_ID`
before you try running this script with `{'command': 'uploadchannel', ...}`.
The combination of source domain and source id are used to compute the `channel_id`
for the Kolibri channel you're creating. If you keep the lines above unchanged,
you'll get an error because you don't have edit rights on that channel.
| true |
code
| 0.7586 | null | null | null | null |
|
# Pandas cheat sheet
This notebook has some common data manipulations you might do while working in the popular Python data analysis library [`pandas`](https://pandas.pydata.org/). It assumes you're already set up to analyze data in pandas using Python 3.
(If you're _not_ set up, [here's IRE's guide](https://docs.google.com/document/d/1cYmpfZEZ8r-09Q6Go917cKVcQk_d0P61gm0q8DAdIdg/edit#) to setting up Python. [Hit me up](mailto:[email protected]) if you get stuck.)
### Topics
- [Importing pandas](#Importing-pandas)
- [Creating a dataframe from a CSV](#Creating-a-dataframe-from-a-CSV)
- [Checking out the data](#Checking-out-the-data)
- [Selecting columns of data](#Selecting-columns-of-data)
- [Getting unique values in a column](#Getting-unique-values-in-a-column)
- [Running basic summary stats](#Running-basic-summary-stats)
- [Sorting your data](#Sorting-your-data)
- [Filtering rows of data](#Filtering-rows-of-data)
- [Filtering text columns with string methods](#Filtering-text-columns-with-string-methods)
- [Filtering against multiple values](#Filtering-against-multiple-values)
- [Exclusion filtering](#Exclusion-filtering)
- [Adding a calculated column](#Adding-a-calculated-column)
- [Filtering for nulls](#Filtering-for-nulls)
- [Grouping and aggregating data](#Grouping-and-aggregating-data)
- [Pivot tables](#Pivot-tables)
- [Applying a function across rows](#Applying-a-function-across-rows)
- [Joining data](#Joining-data)
### Importing pandas
Before we can use pandas, we need to import it. The most common way to do this is:
```
import pandas as pd
```
### Creating a dataframe from a CSV
To begin with, let's import a CSV of Major League Baseball player salaries on opening day. The file, which is in the same directory as this notebook, is called `mlb.csv`.
Pandas has a `read_csv()` method that we can use to get this data into a [dataframe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) (it has methods to read other file types, too). At minimum, you need to tell this method where the file lives:
```
mlb = pd.read_csv('mlb.csv')
```
### Checking out the data
When you first load up your data, you'll want to get a sense of what's in there. A pandas dataframe has several useful things to help you get a quick read of your data:
- `.head()`: Shows you the first 5 records in the data frame (optionally, if you want to see a different number of records, you can pass in a number)
- `.tail()`: Same as `head()`, but it pull records from the end of the dataframe
- `.sample(n)` will give you a sample of *n* rows of the data -- just pass in a number
- `.info()` will give you a count of non-null values in each column -- useful for seeing if any columns have null values
- `.describe()` will compute summary stats for numeric columns
- `.columns` will list the column names
- `.dtypes` will list the data types of each column
- `.shape` will give you a pair of numbers: _(number of rows, number of columns)_
```
mlb.head()
mlb.tail()
mlb.sample(5)
mlb.info()
mlb.describe()
mlb.columns
mlb.dtypes
mlb.shape
```
To get the number of records in a dataframe, you can access the first item in the `shape` pair, or you can just use the Python function `len()`:
```
len(mlb)
```
### Selecting columns of data
If you need to select just one column of data, you can use "dot notation" (`mlb.SALARY`) as long as your column name doesn't have spaces and it isn't the name of a dataframe method (e.g., `product`). Otherwise, you can use "bracket notation" (`mlb['SALARY']`).
Selecting one column will return a [`Series`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
If you want to select multiple columns of data, use bracket notation and pass in a _list_ of columns that you want to select. In Python, a list is a collection of items enclosed in square brackets, separated by commas: `['SALARY', 'NAME']`.
Selecting multiple columns will return a [`DataFrame`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
```
# select one column of data
teams = mlb.TEAM
# bracket notation would do the same thing -- note the quotes around the column name
# teams = mlb['TEAM']
teams.head()
type(teams)
# select multiple columns of data
salaries_and_names = mlb[['SALARY', 'NAME']]
salaries_and_names.head()
type(salaries_and_names)
```
### Getting unique values in a column
As you evaluate your data, you'll often want to get a list of unique values in a column (for cleaning, filtering, grouping, etc.).
To do this, you can use the Series method `unique()`. If you wanted to get a list of baseball positions, you could do:
```
mlb.POS.unique()
```
If useful, you could also sort the results alphabetically with the Python [`sorted()`](https://docs.python.org/3/library/functions.html#sorted) function:
```
sorted(mlb.POS.unique())
```
Sometimes you just need the _number_ of unique values in a column. To do this, you can use the pandas method `nunique()`:
```
mlb.POS.nunique()
```
(You can also run `nunique()` on an entire dataframe:)
```
mlb.nunique()
```
If you want to count up the number of times a value appears in a column of data -- the equivalent of doing a pivot table in Excel and aggregating by count -- you can use the Series method [`value_counts()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.value_counts.html).
To get a list of MLB teams and the number of times each one appears in our salary data -- in other words, the roster count for each team -- we could do:
```
mlb.TEAM.value_counts()
```
### Running basic summary stats
Some of this already surfaced with `describe()`, but in some cases you'll want to compute these stats manually:
- `sum()`
- `mean()`
- `median()`
- `max()`
- `min()`
You can run these on a Series (e.g., a column of data), or on an entire DataFrame.
```
mlb.SALARY.sum()
mlb.SALARY.mean()
mlb.SALARY.median()
mlb.SALARY.max()
mlb.SALARY.min()
# entire dataframe
mlb.mean()
```
### Sorting your data
You can use the [`sort_values()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) method to sort a dataframe by one or more columns. The default is to sort the values ascending; if you want your results sorted descending, specify `ascending=False`.
Let's sort our dataframe by `SALARY` descending:
```
mlb.sort_values('SALARY', ascending=False).head()
```
To sort by multiple columns, pass a list of columns to the `sort_values()` method -- the sorting will happen in the order you specify in the list. You'll also need to pass a list to the `ascending` keyword argument, otherwise both will sort ascending.
Let's sort our dataframe first by `TEAM` ascending, then by `SALARY` descending:
```
mlb.sort_values(['TEAM', 'SALARY'], ascending=[True, False]).head()
```
### Filtering rows of data
To filter your data by some criteria, you'd pass your filtering condition(s) to a dataframe using bracket notation.
You can use Python's [comparison operators](https://docs.python.org/3/reference/expressions.html#comparisons) in your filters, which include:
- `>` greater than
- `<` less than
- `>=` greater than or equal to
- `<=` less than or equal to
- `==` equal to
- `!=` not equal to
Example: You want to filter your data to keep records where the `TEAM` value is 'ARI':
```
diamondbacks = mlb[mlb.TEAM == 'ARI']
diamondbacks.head()
```
We could filter to get all records where the `TEAM` value is _not_ 'ARI':
```
non_diamondbacks = mlb[mlb.TEAM != 'ARI']
non_diamondbacks.head()
```
We could filter our data to just grab the players that make at least $1 million:
```
million_a_year = mlb[mlb.SALARY >= 1000000]
million_a_year.head()
```
### Filtering against multiple values
You can use the `isin()` method to test a value against multiple matches -- just hand it a _list_ of values to check against.
Example: Let's say we wanted to filter to get just players in Texas (in other words, just the Texas Rangers and the Houston Astros):
```
tx = mlb[mlb.TEAM.isin(['TEX', 'HOU'])]
tx.head()
```
### Exclusion filtering
Sometimes it's easier to specify what records you _don't_ want returned. To flip the meaning of a filter condition, prepend a tilde `~`.
For instance, if we wanted to get all players who are _not_ from Texas, we'd use the same filter condition we just used to get the TX players but add a tilde at the beginning:
```
not_tx = mlb[~mlb.TEAM.isin(['TEX', 'HOU'])]
not_tx.head()
```
### Filtering text columns with string methods
You can access the text values in a column with `.str`, and you can use any of Python's native string functions to manipulate them.
For our purposes, though, the pandas [`str.contains()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.str.contains.html) method is useful for filtering data by matching text patterns.
If we wanted to get every player with 'John' in their name, we could do something like this:
```
johns = mlb[mlb.NAME.str.contains('John', case=False)]
johns.head()
```
Note the `case=False` keyword argument -- we're telling pandas to match case-insensitive. And if the pattern you're trying to match is more complex, the method is set up to support [regular expressions](https://docs.python.org/3/howto/regex.html) by default.
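Here's a minimal sketch (the pattern is just illustrative) of passing a regular expression to `str.contains()` -- this grabs any name containing 'Jon' or 'Jan':
```
# regex example: the character class [oa] matches either letter
j_names = mlb[mlb.NAME.str.contains(r'J[oa]n', case=False, regex=True)]
j_names.head()
```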
### Multiple filters
Sometimes you have multiple filters to apply to your data. Lots of the time, it makes sense to break the filters out into separate statements.
For instance, if I wanted to get all Texas players who make at least $1 million, I might do this:
```
tx = mlb[mlb.TEAM.isin(['TEX', 'HOU'])]
# note that I'm filtering the dataframe I just created, not the original `mlb` dataframe
tx_million_a_year = tx[tx.SALARY >= 1000000]
tx_million_a_year.head()
```
But sometimes you want to chain your filters together into one statement. Use `|` for "or" and `&` for "and" rather than Python's built-in `or` and `and` statements, and use grouping parentheses around each statement.
The same filter in one statement:
```
tx_million_a_year = mlb[(mlb.TEAM.isin(['TEX', 'HOU'])) & (mlb.SALARY >= 1000000)]
tx_million_a_year.head()
```
Do what works for you and makes sense in context, but I find the first version a little easier to read.
### Adding a calculated column
To add a new column to a dataframe, use bracket notation to supply the name of the new column (in quotes, or apostrophes, as long as they match), then set it equal to a value -- maybe a calculation derived from other data in your dataframe.
For example, let's create a new column, `contract_total`, that multiplies the annual salary by the number of contract years:
```
mlb['contract_total'] = mlb['SALARY'] * mlb['YEARS']
mlb.head()
```
### Filtering for nulls
You can use the `isnull()` method to get records that are null, or `notnull()` to get records that aren't. The most common use I've seen for these methods is during filtering to see how many records you're missing (and, therefore, how that affects your analysis).
The MLB data is complete, so to demonstrate this, let's load up a new data set: A cut of the [National Inventory of Dams](https://ire.org/nicar/database-library/databases/national-inventory-of-dams/) database, courtesy of the NICAR data library. (We'll need to specify the `encoding` on this CSV because it's not UTF-8.)
```
dams = pd.read_csv('dams.csv',
encoding='latin-1')
dams.head()
```
Maybe we're interested in looking at the year the dam was completed (the `Year_Comp` column). Running `.info()` on the dataframe shows that we're missing some values:
```
dams.info()
```
We can filter for `isnull()` to take a closer look:
```
no_year_comp = dams[dams.Year_Comp.isnull()]
no_year_comp.head()
```
How many are we missing? That will help us determine whether the analysis would be valid:
```
# calculate the percentage of records with no Year_Comp value
# (part / whole) * 100
(len(no_year_comp) / len(dams)) * 100
```
So this piece of our analysis would exclude one-third of our records -- something you'd need to explain to your audience, if indeed your reporting showed that the results of your analysis would still be meaningful.
To get records where the `Year_Comp` is not null, we'd use `notnull()`:
```
has_year_comp = dams[dams.Year_Comp.notnull()]
has_year_comp.head()
```
What years remain? Let's use `value_counts()` to find out:
```
has_year_comp.Year_Comp.value_counts()
```
To sort by year, not count, we could tack on a `sort_index()`:
```
has_year_comp.Year_Comp.value_counts().sort_index()
```
### Grouping and aggregating data
You can use the `groupby()` method to group and aggregate data in pandas, similar to what you'd get by running a pivot table in Excel or a `GROUP BY` query in SQL. We'll also provide the aggregate function to use.
Let's group our baseball salary data by team to see which teams have the biggest payrolls -- in other words, we want to use `sum()` as our aggregate function:
```
grouped_mlb = mlb.groupby('TEAM').sum()
grouped_mlb.head()
```
If you don't specify what columns you want, it will run `sum()` on every numeric column. Typically I select just the grouping column and the column I'm running the aggregation on:
```
grouped_mlb = mlb[['TEAM', 'SALARY']].groupby('TEAM').sum()
grouped_mlb.head()
```
... and we can sort descending, with `head()` to get the top payrolls:
```
grouped_mlb.sort_values('SALARY', ascending=False).head(10)
```
You can use different aggregate functions, too. Let's say we wanted to get the top median salaries by team:
```
mlb[['TEAM', 'SALARY']].groupby('TEAM').median().sort_values('SALARY', ascending=False).head(10)
```
You can group by multiple columns by passing a list. Here, we'll select our columns of interest and group by `TEAM`, then by `POS`, using `sum()` as our aggregate function:
```
mlb[['TEAM', 'POS', 'SALARY']].groupby(['TEAM', 'POS']).sum()
```
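If you want several aggregate functions at once, you can hand a list of them to `agg()`. A minimal sketch, assuming the same `mlb` dataframe:
```
# total, average and median salary per team in one pass
mlb[['TEAM', 'SALARY']].groupby('TEAM').agg(['sum', 'mean', 'median']).head()
```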
### Pivot tables
Sometimes you need a full-blown pivot table, and [pandas has a function to make one](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html).
For this example, we'll look at some foreign trade data -- specifically, eel product imports from 2010 to mid-2017:
```
eels = pd.read_csv('eels.csv')
eels.head()
```
Let's run a pivot table where the grouping column is `country`, the values are the sum of `kilos`, and the columns are the year:
```
pivoted_sums = pd.pivot_table(eels,
index='country',
columns='year',
values='kilos',
aggfunc=sum)
pivoted_sums.head()
```
Let's sort by the `2017` value. While we're at it, let's fill in null values (`NaN`) with zeroes using the [`fillna()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html) method.
```
pivoted_sums.sort_values(2017, ascending=False).fillna(0)
```
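If you also want row and column totals, `pivot_table()` takes a `margins` keyword argument. A hedged sketch using the same eel data:
```
# margins=True adds an 'All' row and column with the totals
pivoted_with_totals = pd.pivot_table(eels,
                                     index='country',
                                     columns='year',
                                     values='kilos',
                                     aggfunc=sum,
                                     margins=True)
pivoted_with_totals.fillna(0).head()
```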
### Applying a function across rows
Often, you'll want to calculate a value for every row, but the calculation won't be simple enough for a one-liner -- in that case, you'll write a separate function that accepts one row of data, does some calculations and returns a value. We'll use the [`apply()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.apply.html) method to accomplish this.
For this example, we're going to load up a CSV of gators killed by hunters in Florida:
```
gators = pd.read_csv('gators.csv')
gators.head()
```
We want to find the longest gator in our data, of course, but there's a problem: right now, the carcass size value is being stored as text: `{} ft. {} in.`. The pattern is predictable, though, and we can use some Python to turn those values into a single number -- inches -- that we can then sort on. Here's our function:
```
def get_inches(row):
'''Accepts a row from our dataframe, calculates carcass length in inches and returns that value'''
# get the value in the 'Carcass Size' column
carcass_size = row['Carcass Size']
# split the text on 'ft.'
# the result is a list
size_split = carcass_size.split('ft.')
# strip whitespace from the first item ([0]) in the resulting list -- the feet --
# and coerce it to an integer with the Python `int()` function
feet = int(size_split[0].strip())
# in the second item ([1]) in the resulting list -- the inches -- replace 'in.' with nothing,
# strip whitespace and coerce to an integer
inches = int(size_split[1].replace('in.', '').strip())
# add the feet times 12 plus the inches and return that value
return inches + (feet * 12)
```
Now we're going to create a new column, `length_in` and use the [`apply()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.apply.html) method to apply our function to every row. The `axis=1` keyword argument means that we're applying our function row-wise, not column-wise.
```
gators['length_in'] = gators.apply(get_inches, axis=1)
gators.sort_values('length_in', ascending=False).head()
```
### Joining data
You can use [`merge()`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.merge.html) to join data in pandas.
In this simple example, we're going to take a CSV of country population data in which each country is represented by an [ISO 3166-1 numeric country code](https://en.wikipedia.org/wiki/ISO_3166-1_numeric) and join it to a CSV that's basically a lookup table with the ISO codes and the names of the countries to which they refer.
Some of the country codes have leading zeroes, so we're going to use the `dtype` keyword when we import each CSV to specify that the `'code'` column in each dataset should be treated as a string (text), not a number.
```
pop_csv = pd.read_csv('country-population.csv', dtype={'code': str})
pop_csv.head()
code_csv = pd.read_csv('country-codes.csv', dtype={'code': str})
code_csv.head()
```
Now we'll use `merge()` to join them.
The `on` keyword argument tells the method what column to join on. If the names of the columns were different, you'd use `left_on` and `right_on`, with the "left" dataframe being the first one you hand to the `merge()` function.
The `how` keyword argument tells the method what type of join to use -- the default is `'inner'`.
```
joined_data = pd.merge(pop_csv,
code_csv,
on='code',
how='left')
joined_data.head()
```
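If the key columns had different names -- say the lookup table used a hypothetical `iso_code` column instead of `code` -- the join would look something like this (a sketch, not runnable against our actual files):
```
# hypothetical: joining when the key columns have different names
joined_data = pd.merge(pop_csv,
                       code_csv,
                       left_on='code',       # column in the "left" dataframe
                       right_on='iso_code',  # hypothetical column in the "right" dataframe
                       how='left')
```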
# IPython Magic Commands
Here we'll begin discussing some of the enhancements that IPython adds on top of the normal Python syntax.
These are known in IPython as *magic commands*, and are prefixed by the ``%`` character.
These magic commands are designed to succinctly solve various common problems in standard data analysis.
Magic commands come in two flavors: *line magics*, which are denoted by a single ``%`` prefix and operate on a single line of input, and *cell magics*, which are denoted by a double ``%%`` prefix and operate on multiple lines of input.
We'll demonstrate and discuss a few brief examples here, and come back to more focused discussion of several useful magic commands later in the chapter.
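As a quick illustration of the two flavors (the statements themselves are arbitrary), ``%time`` works as a line magic on a single statement:
```
%time total = sum(range(100000))
```
while ``%%time`` works as a cell magic on the whole cell:
```
%%time
total = 0
for n in range(100000):
    total += n
```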
## Running External Code: ``%run``
As you begin developing more extensive code, you will likely find yourself working in both IPython for interactive exploration, as well as a text editor to store code that you want to reuse.
Rather than running this code in a new window, it can be convenient to run it within your IPython session.
This can be done with the ``%run`` magic.
For example, let's create a ``myscript.py`` file with the following contents (note that we are using the `%%bash` magic to write bash code in the notebook):
```
%%bash
echo """
'''square functions'''
def square(x):
'''square a number'''
return x ** 2
for N in range(1, 4):
print(N, 'squared is', square(N))""" > myscript.py
```
We can see the content of this file either from the Files tab on the left sidebar or using a terminal command such as `cat`:
```
%%bash
cat myscript.py
```
You can execute this from your IPython session as follows:
```
%run myscript.py
```
Note that after you've run this script, any functions defined within it are available for use in your IPython session:
```
square(5)
square??
```
There are several options to fine-tune how your code is run; you can see the documentation in the normal way, by typing **``%run?``** in the IPython interpreter.
## Timing Code Execution: ``%timeit``
Another example of a useful magic function is ``%timeit``, which will automatically determine the execution time of the single-line Python statement that follows it.
For example, we may want to check the performance of a list comprehension:
```
%timeit L = [n ** 2 for n in range(1000)]
```
The benefit of ``%timeit`` is that for short commands it will automatically perform multiple runs in order to attain more robust results.
For multi-line statements, adding a second ``%`` sign turns this into a cell magic that can handle multiple lines of input.
For example, here's the equivalent construction with a ``for``-loop:
```
%%timeit
L = []
for n in range(1000):
L.append(n ** 2)
```
We can immediately see that list comprehensions are about 20% faster than the equivalent ``for``-loop construction in this case.
## Help on Magic Functions: ``?``, ``%magic``, and ``%lsmagic``
Like normal Python functions, IPython magic functions have docstrings, and this useful
documentation can be accessed in the standard manner.
So, for example, to read the documentation of the ``%timeit`` magic simply type this:
```
%timeit?
```
Documentation for other functions can be accessed similarly.
To access a general description of available magic functions, including some examples, you can type this:
```
%magic
```
For a quick and simple list of all available magic functions, type this:
```
%lsmagic
```
<h1> 2c. Loading large datasets progressively with the tf.data.Dataset </h1>
In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:
<ol>
<li> Refactor the input to read data from disk progressively.
<li> Refactor the feature creation so that it is not one-to-one with inputs.
</ol>
<br/>
The Pandas function in the previous notebook first read the whole data into memory -- on a large dataset, this won't be an option.
```
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
```
<h2> 1. Refactor the input </h2>
Read data created in Lab1a, but this time make it more general, so that we can later handle large datasets. We use the Dataset API for this. It ensures that, as data gets delivered to the model in mini-batches, it is loaded from disk only when needed.
```
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(row):
columns = tf.decode_csv(row, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
features.pop('key') # discard, not a real feature
label = features.pop('fare_amount') # remove label from features and store
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # loop indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
def get_train_input_fn():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid_input_fn():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
```
<h2> 2. Refactor the way features are created. </h2>
For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.
```
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
```
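Later on, `add_more_features` can return engineered columns instead of just passing the inputs through. As a hedged sketch (the bucket boundaries below are made up purely for illustration), using the TF 1.x `tf.feature_column` API:
```
# hypothetical sketch: bucketize the latitudes so the model can learn
# different behaviour in different parts of the city
def add_more_features(feats):
  lat_buckets = np.arange(40.5, 41.0, 0.01).tolist()  # made-up boundaries
  b_plat = tf.feature_column.bucketized_column(
      tf.feature_column.numeric_column('pickuplat'), lat_buckets)
  b_dlat = tf.feature_column.bucketized_column(
      tf.feature_column.numeric_column('dropofflat'), lat_buckets)
  return list(feats) + [b_plat, b_dlat]
```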
<h2> Create and train the model </h2>
Note that we train for num_steps * batch_size examples.
```
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train_input_fn, steps = 200)
```
<h3> Evaluate model </h3>
As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.
```
metrics = model.evaluate(input_fn = get_valid_input_fn, steps = None)
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
```
## Challenge Exercise
Create a neural network that is capable of finding the volume of a cylinder given the radius of its base (r) and its height (h). Assume that the radius and height of the cylinder are both in the range 0.5 to 2.0. Unlike in the challenge exercise for b_estimator.ipynb, assume that your measurements of r, h and V are all rounded off to the nearest 0.1. Simulate the necessary training dataset. This time, you will need a lot more data to get a good predictor.
Hint (highlight to see):
<p style='color:white'>
Create random values for r and h and compute V. Then, round off r, h and V (i.e., the volume is computed from the true value of r and h; it's only your measurement that is rounded off). Your dataset will consist of the round values of r, h and V. Do this for both the training and evaluation datasets.
</p>
Now modify the "noise" so that instead of just rounding off the value, there is up to a 10% error (uniformly distributed) in the measurement followed by rounding off.
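A minimal sketch of how the training data for this exercise could be simulated (the sample size, column names and file name are assumptions, not part of the lab):
```
# hypothetical data simulation for the challenge exercise
import numpy as np
import pandas as pd

N = 100000
r = np.random.uniform(0.5, 2.0, N)
h = np.random.uniform(0.5, 2.0, N)
v = np.pi * r**2 * h                 # volume from the true r and h
df = pd.DataFrame({
    'r': np.round(r, 1),             # measurements rounded to the nearest 0.1
    'h': np.round(h, 1),
    'v': np.round(v, 1),
})
df.to_csv('cylinder-train.csv', index=False)
```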
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Performing measurements using QCoDeS parameters and DataSet
This notebook shows some ways of performing different measurements using
QCoDeS parameters and the [DataSet](DataSet-class-walkthrough.ipynb) via a powerful ``Measurement`` context manager. Here, it is assumed that the reader has some degree of familiarity with fundamental objects and methods of QCoDeS.
## Implementing a measurement
Now, let us start with necessary imports:
```
%matplotlib inline
import numpy.random as rd
import matplotlib.pyplot as plt
import numpy as np
from time import sleep, monotonic
import qcodes as qc
from qcodes import Station, load_or_create_experiment, \
initialise_database, Measurement, load_by_run_spec, load_by_guid
from qcodes.tests.instrument_mocks import DummyInstrument, DummyInstrumentWithMeasurement
from qcodes.dataset.plotting import plot_dataset
from qcodes.dataset.descriptions.detect_shapes import detect_shape_of_measurement
qc.logger.start_all_logging()
```
In what follows, we shall define some utility functions as well as declare our dummy instruments. We, then, add these instruments to a ``Station`` object.
The dummy dmm is set up to generate an output depending on the values set on the dummy dac, simulating a real experiment.
```
# preparatory mocking of physical setup
dac = DummyInstrument('dac', gates=['ch1', 'ch2'])
dmm = DummyInstrumentWithMeasurement(name='dmm', setter_instr=dac)
station = qc.Station(dmm, dac)
# now make some silly set-up and tear-down actions
def veryfirst():
print('Starting the measurement')
def numbertwo(inst1, inst2):
print('Doing stuff with the following two instruments: {}, {}'.format(inst1, inst2))
def thelast():
print('End of experiment')
```
**Note** that database and experiments may be missing.
If this is the first time you create a dataset, the underlying database file has
most likely not been created. The following cell creates the database file. Please
refer to documentation on [The Experiment Container](The-Experiment-Container.ipynb) for details.
Furthermore, datasets are associated to an experiment. By default, a dataset (or "run")
is appended to the latest existing experiments. If no experiment has been created,
we must create one. We do that by calling the `load_or_create_experiment` function.
Here we explicitly pass the loaded or created experiment to the `Measurement` object to ensure that we are always
using the `performing_meas_using_parameters_and_dataset` `Experiment` created within this tutorial. Note that a keyword argument `name` can also be set as any string value for `Measurement` which later becomes the `name` of the dataset that running that `Measurement` produces.
```
initialise_database()
exp = load_or_create_experiment(
experiment_name='performing_meas_using_parameters_and_dataset',
sample_name="no sample"
)
```
And then run an experiment:
```
meas = Measurement(exp=exp, name='exponential_decay')
meas.register_parameter(dac.ch1) # register the first independent parameter
meas.register_parameter(dmm.v1, setpoints=(dac.ch1,))  # now register the dependent one
meas.add_before_run(veryfirst, ()) # add a set-up action
meas.add_before_run(numbertwo, (dmm, dac)) # add another set-up action
meas.add_after_run(thelast, ()) # add a tear-down action
meas.write_period = 0.5
with meas.run() as datasaver:
for set_v in np.linspace(0, 25, 10):
dac.ch1.set(set_v)
get_v = dmm.v1.get()
datasaver.add_result((dac.ch1, set_v),
(dmm.v1, get_v))
dataset1D = datasaver.dataset # convenient to have for data access and plotting
ax, cbax = plot_dataset(dataset1D)
```
And let's add an example of a 2D measurement. For the 2D, we'll need a new batch of parameters, notably one with two
other parameters as setpoints. We therefore define a new Measurement with new parameters.
```
meas = Measurement(exp=exp, name='2D_measurement_example')
meas.register_parameter(dac.ch1) # register the first independent parameter
meas.register_parameter(dac.ch2) # register the second independent parameter
meas.register_parameter(dmm.v2, setpoints=(dac.ch1, dac.ch2))  # now register the dependent one
# run a 2D sweep
with meas.run() as datasaver:
for v1 in np.linspace(-1, 1, 200):
for v2 in np.linspace(-1, 1, 200):
dac.ch1(v1)
dac.ch2(v2)
val = dmm.v2.get()
datasaver.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val))
dataset2D = datasaver.dataset
ax, cbax = plot_dataset(dataset2D)
```
## Accessing and exporting the measured data
QCoDeS ``DataSet`` implements a number of methods for accessing the data of a given dataset. Here we will concentrate on the two most user friendly methods. For a more detailed walkthrough of the `DataSet` class, refer to [DataSet class walkthrough](DataSet-class-walkthrough.ipynb) notebook.
The method `get_parameter_data` returns the data as a dictionary of ``numpy`` arrays. The dictionary is indexed by the measured (dependent) parameter in the outermost level and the names of the dependent and independent parameters in the innermost level. The first parameter in the innermost level is always the dependent parameter.
```
dataset1D.get_parameter_data()
```
By default `get_parameter_data` returns all data stored in the dataset. The data that is specific to one or more measured parameters can be returned by passing the parameter name(s) or by using `ParamSpec` object:
```
dataset1D.get_parameter_data('dmm_v1')
```
You can also simply fetch the data for one or more dependent parameter
```
dataset1D.get_parameter_data('dac_ch1')
```
For more details about accessing data of a given `DataSet`, see [Accessing data in DataSet notebook](Accessing-data-in-DataSet.ipynb).
The data can also be exported as one or more [Pandas](https://pandas.pydata.org/) DataFrames.
The DataFrames can be returned either as a single dataframe or as a dictionary from measured parameters to DataFrames.
If you measure all parameters as a function of the same set of parameters you probably want to export to a single dataframe.
```
dataset1D.to_pandas_dataframe()
```
However, there may be cases where the data within a dataset cannot be put into a single dataframe.
In those cases you can use the other method to export the dataset to a dictionary from the name of each measured parameter to its Pandas DataFrame.
```
dataset1D.to_pandas_dataframe_dict()
```
When exporting a two- or higher-dimensional dataset as a Pandas DataFrame, a [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html) is used to index the measured parameter based on all its dependencies.
```
dataset2D.to_pandas_dataframe()[0:10]
```
If your data is on a regular grid it may make sense to view the data as an [XArray](https://xarray.pydata.org/) Dataset. The dataset can be directly exported to an XArray Dataset.
```
dataset2D.to_xarray_dataset()
```
Note, however, that XArray is only suited for data that is on a rectangular grid with few or no missing values. If the data does not lie on a grid, all the measured data points will have a unique combination of the two dependent parameters. When exporting to XArray, NaNs will therefore replace all the missing combinations of `dac_ch1` and `dac_ch2`, and the data is unlikely to be useful in this format.
For more details about using Pandas and XArray see [Working With Pandas and XArray](./Working-With-Pandas-and-XArray.ipynb)
It is also possible to export the datasets directly to various file formats see [Exporting QCoDes Datasets](./Exporting-data-to-other-file-formats.ipynb)
## Reloading datasets
To load existing datasets QCoDeS provides several functions. The most useful and generic function is called `load_by_run_spec`.
This function takes one or more pieces of information about a dataset and will either, if the dataset is uniquely identifiable by the information, load the dataset or print information about all the datasets that match the supplied information allowing you to provide more information to uniquely identify the dataset.
Here, we will load a dataset based on the `captured_run_id` printed on the plot above.
```
dataset1D.captured_run_id
loaded_ds = load_by_run_spec(captured_run_id=dataset1D.captured_run_id)
loaded_ds.the_same_dataset_as(dataset1D)
```
As long as you are working within one database file the dataset should be uniquely identified by `captured_run_id`. However, once you mix several datasets from different database files this is likely not unique. See the following section and [Extracting runs from one DB file to another](Extracting-runs-from-one-DB-file-to-another.ipynb) for more information on how to handle this.
### DataSet GUID
Internally, each dataset is referred to by a Globally Unique Identifier (GUID) that ensures the dataset is uniquely identified even when mixing datasets from several databases with potentially identical captured_run_id, experiment and sample names.
A dataset can always be reloaded from the GUID if known.
```
print(f"Dataset GUID is: {dataset1D.guid}")
loaded_ds = load_by_guid(dataset1D.guid)
loaded_ds.the_same_dataset_as(dataset1D)
```
## Specifying shape of measurement
As the context manager allows you to store data of any shape (with the only restriction being that you supply values for both dependent and independent parameters together), it cannot know if the data is being measured on a grid. As a consequence, the Numpy array of data loaded from the dataset may not be of the shape that you expect. `plot_dataset`, `DataSet.to_pandas...` and `DataSet.to_xarray...` contain logic that can detect the shape of the data measured at load time. However, if you know the shape of the measurement that you are going to perform up front, you can choose to specify it before initializing the measurement using ``Measurement.set_shapes`` method.
`dataset.get_parameter_data` and `dataset.cache.data` automatically make use of this information to return shaped data when loaded from the database. Note that these two methods behave slightly differently when loading data from a partially completed dataset. `dataset.get_parameter_data` will only reshape the data if the number of points measured matches the number of points expected according to the metadata. `dataset.cache.data` will however return a dataset with empty placeholders (either NaN, zeros or empty strings depending on the datatypes) for missing values in a partially filled dataset.
Note that if you use the doNd functions demonstrated in [Using doNd functions in comparison to Measurement context manager for performing measurements](Using_doNd_functions_in_comparison_to_Measurement_context_manager_for_performing_measurements.ipynb) the shape information will be detected and stored automatically.
In the example below we show how the shape can be specified manually.
```
n_points_1 = 100
n_points_2 = 200
meas_with_shape = Measurement(exp=exp, name='shape_specification_example_measurement')
meas_with_shape.register_parameter(dac.ch1) # register the first independent parameter
meas_with_shape.register_parameter(dac.ch2) # register the second independent parameter
meas_with_shape.register_parameter(dmm.v2, setpoints=(dac.ch1, dac.ch2))  # now register the dependent one
meas_with_shape.set_shapes(detect_shape_of_measurement((dmm.v2,), (n_points_1, n_points_2)))
with meas_with_shape.run() as datasaver:
for v1 in np.linspace(-1, 1, n_points_1):
for v2 in np.linspace(-1, 1, n_points_2):
dac.ch1(v1)
dac.ch2(v2)
val = dmm.v2.get()
datasaver.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val))
dataset = datasaver.dataset # convenient to have for plotting
for name, data in dataset.get_parameter_data()['dmm_v2'].items():
print(f"{name}: data.shape={data.shape}, expected_shape=({n_points_1},{n_points_2})")
assert data.shape == (n_points_1, n_points_2)
```
## Performing several measurements concurrently
It is possible to perform two or more measurements at the same time. This may be convenient if you need to measure several parameters as a function of the same independent parameters.
```
# setup two measurements
meas1 = Measurement(exp=exp, name='multi_measurement_1')
meas1.register_parameter(dac.ch1)
meas1.register_parameter(dac.ch2)
meas1.register_parameter(dmm.v1, setpoints=(dac.ch1, dac.ch2))
meas2 = Measurement(exp=exp, name='multi_measurement_2')
meas2.register_parameter(dac.ch1)
meas2.register_parameter(dac.ch2)
meas2.register_parameter(dmm.v2, setpoints=(dac.ch1, dac.ch2))
with meas1.run() as datasaver1, meas2.run() as datasaver2:
v1points = np.concatenate((np.linspace(-2, -0.5, 10),
np.linspace(-0.51, 0.5, 200),
np.linspace(0.51, 2, 10)))
v2points = np.concatenate((np.linspace(-2, -0.25, 10),
np.linspace(-0.26, 0.5, 200),
np.linspace(0.51, 2, 10)))
for v1 in v1points:
for v2 in v2points:
dac.ch1(v1)
dac.ch2(v2)
val1 = dmm.v1.get()
datasaver1.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v1, val1))
val2 = dmm.v2.get()
datasaver2.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val2))
ax, cbax = plot_dataset(datasaver1.dataset)
ax, cbax = plot_dataset(datasaver2.dataset)
```
## Interrupting measurements early
There may be cases where you do not want to complete a measurement. Currently QCoDeS is designed to allow the user
to interrupt measurements with a standard KeyboardInterrupt. KeyboardInterrupts can be raised either with the Ctrl-C keyboard shortcut or using the interrupt button in Jupyter / Spyder, which typically takes the form of a square stop button. QCoDeS is designed such that KeyboardInterrupts are delayed around critical parts of the code and the measurement is stopped when it is safe to do so.
## QCoDeS Array and MultiParameter
The ``Measurement`` object supports automatic handling of ``Array`` and ``MultiParameters``. When registering these parameters
the individual components are unpacked and added to the dataset as if they were separate parameters. Let's consider a ``MultiParameter`` with array components as the most general case.
First, let's use a dummy instrument that produces data as ``Array`` and ``MultiParameters``.
```
from qcodes.tests.instrument_mocks import DummyChannelInstrument
mydummy = DummyChannelInstrument('MyDummy')
```
This instrument produces two ``Array``s with the names, shapes and setpoints given below.
```
mydummy.A.dummy_2d_multi_parameter.names
mydummy.A.dummy_2d_multi_parameter.shapes
mydummy.A.dummy_2d_multi_parameter.setpoint_names
meas = Measurement(exp=exp)
meas.register_parameter(mydummy.A.dummy_2d_multi_parameter)
meas.parameters
```
When adding the MultiParameter to the measurement we can see that we add each of the individual components as a
separate parameter.
```
with meas.run() as datasaver:
datasaver.add_result((mydummy.A.dummy_2d_multi_parameter, mydummy.A.dummy_2d_multi_parameter()))
```
And when adding the result of a ``MultiParameter`` it is automatically unpacked into its components.
```
plot_dataset(datasaver.dataset)
datasaver.dataset.get_parameter_data('MyDummy_ChanA_that')
datasaver.dataset.to_pandas_dataframe()
datasaver.dataset.to_xarray_dataset()
```
## Avoiding verbosity of the Measurement context manager for simple measurements
For simple 1D/2D grid-type measurements, it may feel like overkill to use the verbose and flexible Measurement context manager construct. For this case, the so-called ``doNd`` functions come to the rescue - convenient one- or two-line calls; read more about them in [Using doNd functions](./Using_doNd_functions_in_comparison_to_Measurement_context_manager_for_performing_measurements.ipynb).
## Optimizing measurement time
There are measurements that are data-heavy or time consuming, or both. QCoDeS provides some features and tools that should help in optimizing the measurement time. Some of those are:
* [Saving data in the background](./Saving_data_in_the_background.ipynb)
* Setting more appropriate ``paramtype`` when registering parameters, see [Paramtypes explained](./Paramtypes%20explained.ipynb)
* Adding result to datasaver by creating threads per instrument, see [Threaded data acquisition](./Threaded%20data%20acquisition.ipynb)
## The power of the Measurement context manager construct
This new form is so flexible that we may easily do things that were impossible with the old Loop construct.
Say, that from the plot of the above 1D measurement,
we decide that a voltage below 1 V is uninteresting,
so we stop the sweep at that point, thus,
we do not know in advance how many points we'll measure.
```
meas = Measurement(exp=exp)
meas.register_parameter(dac.ch1) # register the first independent parameter
meas.register_parameter(dmm.v1, setpoints=(dac.ch1,))  # now register the dependent one
with meas.run() as datasaver:
for set_v in np.linspace(0, 25, 100):
dac.ch1.set(set_v)
get_v = dmm.v1.get()
datasaver.add_result((dac.ch1, set_v),
(dmm.v1, get_v))
if get_v < 1:
break
dataset = datasaver.dataset
ax, cbax = plot_dataset(dataset)
```
Or we might want to simply get as many points as possible in a fixed amount of time (3 s in the code below),
randomly sampling the region between 5 V and 10 V (for the setpoint axis).
```
from time import monotonic, sleep
with meas.run() as datasaver:
t_start = monotonic()
while monotonic() - t_start < 3:
set_v = 10/2*(np.random.rand() + 1)
dac.ch1.set(set_v)
# some sleep to not get too many points (or to let the system settle)
sleep(0.04)
get_v = dmm.v1.get()
datasaver.add_result((dac.ch1, set_v),
(dmm.v1, get_v))
dataset = datasaver.dataset # convenient to have for plotting
axes, cbax = plot_dataset(dataset)
# we slightly tweak the plot to better visualise the highly non-standard axis spacing
axes[0].lines[0].set_marker('o')
axes[0].lines[0].set_markerfacecolor((0.6, 0.6, 0.9))
axes[0].lines[0].set_markeredgecolor((0.4, 0.6, 0.9))
axes[0].lines[0].set_color((0.8, 0.8, 0.8))
```
### Finer sampling in 2D
Looking at the plot of the 2D measurement above, we may decide to sample more finely in the central region:
```
meas = Measurement(exp=exp)
meas.register_parameter(dac.ch1) # register the first independent parameter
meas.register_parameter(dac.ch2) # register the second independent parameter
meas.register_parameter(dmm.v2, setpoints=(dac.ch1, dac.ch2))  # now register the dependent one
with meas.run() as datasaver:
v1points = np.concatenate((np.linspace(-1, -0.5, 5),
np.linspace(-0.51, 0.5, 200),
np.linspace(0.51, 1, 5)))
v2points = np.concatenate((np.linspace(-1, -0.25, 5),
np.linspace(-0.26, 0.5, 200),
np.linspace(0.51, 1, 5)))
for v1 in v1points:
for v2 in v2points:
dac.ch1(v1)
dac.ch2(v2)
val = dmm.v2.get()
datasaver.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val))
dataset = datasaver.dataset # convenient to have for plotting
ax, cbax = plot_dataset(dataset)
```
### Simple adaptive 2D sweep
.. or even perform an adaptive sweep... ooohh...
(the example below is a not-very-clever toy model example,
but it nicely shows a semi-realistic measurement that the old Loop
could not handle)
```
v1_points = np.linspace(-1, 1, 250)
v2_points = np.linspace(1, -1, 250)
threshold = 0.25
with meas.run() as datasaver:
# Do normal sweeping until the peak is detected
for v2ind, v2 in enumerate(v2_points):
for v1ind, v1 in enumerate(v1_points):
dac.ch1(v1)
dac.ch2(v2)
val = dmm.v2.get()
datasaver.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val))
if val > threshold:
break
else:
continue
break
print(v1ind, v2ind, val)
print('-'*10)
# now be more clever, meandering back and forth over the peak
doneyet = False
rowdone = False
v1_step = 1
while not doneyet:
v2 = v2_points[v2ind]
v1 = v1_points[v1ind+v1_step-1]
dac.ch1(v1)
dac.ch2(v2)
val = dmm.v2.get()
datasaver.add_result((dac.ch1, v1),
(dac.ch2, v2),
(dmm.v2, val))
if val < threshold:
if rowdone:
doneyet = True
v2ind += 1
v1_step *= -1
rowdone = True
else:
v1ind += v1_step
rowdone = False
dataset = datasaver.dataset # convenient to have for plotting
ax, cbax = plot_dataset(dataset)
```
### Random sampling
We may also choose to sample completely randomly across the phase space
```
meas2 = Measurement(exp=exp, name='random_sampling_measurement')
meas2.register_parameter(dac.ch1)
meas2.register_parameter(dac.ch2)
meas2.register_parameter(dmm.v2, setpoints=(dac.ch1, dac.ch2))
threshold = 0.25
npoints = 5000
with meas2.run() as datasaver:
for i in range(npoints):
x = 2*(np.random.rand()-.5)
y = 2*(np.random.rand()-.5)
dac.ch1(x)
dac.ch2(y)
z = dmm.v2()
datasaver.add_result((dac.ch1, x),
(dac.ch2, y),
(dmm.v2, z))
dataset = datasaver.dataset # convenient to have for plotting
ax, cbax = plot_dataset(dataset)
datasaver.dataset.to_pandas_dataframe()[0:10]
```
Unlike the data measured above, which lies on a grid, here all the measured data points have a unique combination of the two dependent parameters. When exporting to XArray, NaNs will therefore replace all the missing combinations of `dac_ch1` and `dac_ch2`, and the data is unlikely to be useful in this format.
```
datasaver.dataset.to_xarray_dataset()
```
### Optimiser
An example to show that the algorithm is flexible enough to be used with completely unstructured data, such as the output of a downhill simplex optimization. The downhill simplex is somewhat more sensitive to noise, so it is important that 'fatol' is set to match the expected noise.
```
from scipy.optimize import minimize
def set_and_measure(*xk):
dac.ch1(xk[0])
dac.ch2(xk[1])
return dmm.v2.get()
noise = 0.0005
x0 = [np.random.rand(), np.random.rand()]
with meas.run() as datasaver:
def mycallback(xk):
dac.ch1(xk[0])
dac.ch2(xk[1])
datasaver.add_result((dac.ch1, xk[0]),
(dac.ch2, xk[1]),
(dmm.v2, dmm.v2.cache.get()))
res = minimize(lambda x: -set_and_measure(*x),
x0,
method='Nelder-Mead',
tol=1e-10,
callback=mycallback,
options={'fatol': noise})
dataset = datasaver.dataset # convenient to have for plotting
res
ax, cbax = plot_dataset(dataset)
```
## Subscriptions
The ``Measurement`` object can also handle subscriptions to the dataset. Subscriptions are, under the hood, triggers in the underlying SQLite database. Therefore, the subscribers are only called when data is written to the database (which happens every `write_period`).
When making a subscription, two things must be supplied: a function and a mutable state object. The function **MUST** have a call signature of `f(result_list, length, state, **kwargs)`, where ``result_list`` is a list of tuples of parameter values inserted in the dataset, ``length`` is an integer (the step number of the run), and ``state`` is the mutable state object. The function does not need to actually use these arguments, but the call signature must match this.
Let us consider two generic examples:
### Subscription example 1: simple printing
```
def print_which_step(results_list, length, state):
"""
This subscriber does not use results_list nor state; it simply
prints how many results we have added to the database
"""
print(f'The run now holds {length} rows')
meas = Measurement(exp=exp)
meas.register_parameter(dac.ch1)
meas.register_parameter(dmm.v1, setpoints=(dac.ch1,))
meas.write_period = 0.2 # We write to the database every 0.2s
meas.add_subscriber(print_which_step, state=[])
with meas.run() as datasaver:
for n in range(7):
datasaver.add_result((dac.ch1, n), (dmm.v1, n**2))
print(f'Added points to measurement, step {n}.')
sleep(0.2)
```
### Subscription example 2: using the state
We add two subscribers now.
```
def get_list_of_first_param(results_list, length, state):
"""
Modify the state (a list) to hold all the values for
the first parameter
"""
param_vals = [parvals[0] for parvals in results_list]
state += param_vals
meas = Measurement(exp=exp)
meas.register_parameter(dac.ch1)
meas.register_parameter(dmm.v1, setpoints=(dac.ch1,))
meas.write_period = 0.2 # We write to the database every 0.2s
first_param_list = []
meas.add_subscriber(print_which_step, state=[])
meas.add_subscriber(get_list_of_first_param, state=first_param_list)
with meas.run() as datasaver:
for n in range(10):
datasaver.add_result((dac.ch1, n), (dmm.v1, n**2))
print(f'Added points to measurement, step {n}.')
print(f'First parameter value list: {first_param_list}')
sleep(0.1)
```
# Activation functions.
> Activation functions. Set of act_fn.
Activation functions, forked from https://github.com/rwightman/pytorch-image-models/timm/models/layers/activations.py
Mish: A Self Regularized Non-Monotonic Activation Function
https://github.com/digantamisra98/Mish
fastai forum discussion https://forums.fast.ai/t/meet-mish-new-activation-function-possible-successor-to-relu
Mish is included in PyTorch from version 1.9 onward. Use that version!
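A minimal usage sketch of the built-in version (assuming PyTorch >= 1.9 is installed):
```
import torch
from torch import nn

act = nn.Mish()          # built-in module version, available from PyTorch 1.9
x = torch.randn(4)
print(act(x))            # equivalent to torch.nn.functional.mish(x)
```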
```
# hide
# forked from https://github.com/rwightman/pytorch-image-models/timm/models/layers/activations.py
import torch
from torch import nn as nn
from torch.nn import functional as F
```
## Mish
```
def mish(x, inplace: bool = False):
"""Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
NOTE: I don't have a working inplace variant
"""
return x.mul(F.softplus(x).tanh())
class Mish(nn.Module):
"""Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681"""
def __init__(self, inplace: bool = False):
"""NOTE: inplace variant not working """
super(Mish, self).__init__()
def forward(self, x):
return mish(x)
```
## MishJit
```
@torch.jit.script
def mish_jit(x, _inplace: bool = False):
"""Jit version of Mish.
Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
"""
return x.mul(F.softplus(x).tanh())
class MishJit(nn.Module):
def __init__(self, inplace: bool = False):
"""Jit version of Mish.
Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681"""
super(MishJit, self).__init__()
def forward(self, x):
return mish_jit(x)
```
## MishJitMe - memory-efficient.
```
@torch.jit.script
def mish_jit_fwd(x):
# return x.mul(torch.tanh(F.softplus(x)))
return x.mul(F.softplus(x).tanh())
@torch.jit.script
def mish_jit_bwd(x, grad_output):
x_sigmoid = torch.sigmoid(x)
x_tanh_sp = F.softplus(x).tanh()
return grad_output.mul(x_tanh_sp + x * x_sigmoid * (1 - x_tanh_sp * x_tanh_sp))
class MishJitAutoFn(torch.autograd.Function):
""" Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
A memory efficient, jit scripted variant of Mish"""
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return mish_jit_fwd(x)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_tensors[0]
return mish_jit_bwd(x, grad_output)
def mish_me(x, inplace=False):
return MishJitAutoFn.apply(x)
class MishMe(nn.Module):
""" Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
A memory efficient, jit scripted variant of Mish"""
def __init__(self, inplace: bool = False):
super(MishMe, self).__init__()
def forward(self, x):
return MishJitAutoFn.apply(x)
```
## HardMishJit
```
@torch.jit.script
def hard_mish_jit(x, inplace: bool = False):
""" Hard Mish
Experimental, based on notes by Mish author Diganta Misra at
https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md
"""
return 0.5 * x * (x + 2).clamp(min=0, max=2)
class HardMishJit(nn.Module):
""" Hard Mish
Experimental, based on notes by Mish author Diganta Misra at
https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md
"""
def __init__(self, inplace: bool = False):
super(HardMishJit, self).__init__()
def forward(self, x):
return hard_mish_jit(x)
```
## HardMishJitMe - memory efficient.
```
@torch.jit.script
def hard_mish_jit_fwd(x):
return 0.5 * x * (x + 2).clamp(min=0, max=2)
@torch.jit.script
def hard_mish_jit_bwd(x, grad_output):
m = torch.ones_like(x) * (x >= -2.)
m = torch.where((x >= -2.) & (x <= 0.), x + 1., m)
return grad_output * m
class HardMishJitAutoFn(torch.autograd.Function):
""" A memory efficient, jit scripted variant of Hard Mish
Experimental, based on notes by Mish author Diganta Misra at
https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md
"""
@staticmethod
def forward(ctx, x):
ctx.save_for_backward(x)
return hard_mish_jit_fwd(x)
@staticmethod
def backward(ctx, grad_output):
x = ctx.saved_tensors[0]
return hard_mish_jit_bwd(x, grad_output)
def hard_mish_me(x, inplace: bool = False):
return HardMishJitAutoFn.apply(x)
class HardMishMe(nn.Module):
""" A memory efficient, jit scripted variant of Hard Mish
Experimental, based on notes by Mish author Diganta Misra at
https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md
"""
def __init__(self, inplace: bool = False):
super(HardMishMe, self).__init__()
def forward(self, x):
return HardMishJitAutoFn.apply(x)
#hide
act_fn = Mish(inplace=True)
```
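A quick sanity check (a sketch, assuming the cells above have been run) that the plain, jit and memory-efficient variants agree numerically:
```
# compare the three Mish variants on the same random input
x = torch.randn(8)
y_plain = mish(x)
y_jit = mish_jit(x)
y_me = mish_me(x)
assert torch.allclose(y_plain, y_jit) and torch.allclose(y_plain, y_me)
print('all Mish variants match:', y_plain)
```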
# end
model_constructor
by ayasyrev
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
```
## Introduction
Who Speaks When? Speaker Diarization is the task of segmenting audio recordings by speaker labels.
A diarization system consists of a Voice Activity Detection (VAD) model, which produces time stamps for the portions of the audio where speech is present (ignoring the background), and a Speaker Embeddings model, which extracts speaker embeddings from the segments identified by those time stamps. The speaker embeddings are then clustered according to the number of speakers present in the audio recording.
In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization.
In this tutorial, we shall first demonstrate how to perform diarization with oracle VAD time stamps (we assume we already have speech time stamps) and a pretrained speaker verification model, which can be found in the tutorial for [Speaker Identification and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_tasks/Speaker_Identification_Verification.ipynb).
We then show how to perform VAD followed by diarization when ground-truth speech time stamps are not available (non-oracle VAD). We also have tutorials for [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Voice_Activity_Detection.ipynb) and [online offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize your model and perform training/finetuning on your own data.
For demonstration purposes we will be using simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/)
```
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')
an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')
if not os.path.exists(an4_audio):
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_audio = wget.download(an4_audio_url, data_dir)
if not os.path.exists(an4_rttm):
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_rttm = wget.download(an4_rttm_url, data_dir)
```
Let's plot and listen to the audio and visualize the RTTM speaker labels
```
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
```
We will use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. All labels in RTTM format are therefore eventually converted to pyannote objects; for this we provide two helper functions, rttm_to_labels (for NeMo intermediate processing) and labels_to_pyannote_object (for scoring and visualization).
```
from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels, labels_to_pyannote_object
```
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
```
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
```
Speaker Diarization scripts commonly expect the following arguments:
1. manifest_filepath : Path to manifest file containing json lines of format: {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-', 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
2. out_dir : directory where outputs and intermediate files are stored.
3. oracle_vad : if True, speech activity labels are extracted from the rttm files; if False, either vad.model_path or an external_manifestpath containing speech activity labels has to be passed.
Mandatory fields are audio_filepath, offset, duration, label and text. For the remaining fields, pass the value if you would like to evaluate with a known number of speakers, else None. If you would like to score the system against known rttms, pass those as well, else None. The uem file is used to score only part of your audio for evaluation purposes; pass it if you would like to evaluate on it, else None.
**Note**: we expect the audio and the corresponding RTTM to have the **same base name**, and the name should be **unique**.
For example, if the audio file name is **test_an4**.wav, we expect the corresponding rttm file name to be **test_an4**.rttm (note the matching **test_an4** base name).
Let's create a manifest with the an4 audio and rttm available. If you have more than one file, you may also use the script `pathsfiles_to_manifest.py` to generate a manifest file from a list of audio files and, optionally, rttm files.
```
# Create a manifest for input with below format.
# {'audio_filepath': /path/to/audio_file, 'offset': 0, 'duration':None, 'label': 'infer', 'text': '-',
# 'num_speakers': None, 'rttm_filepath': /path/to/rttm/file, 'uem_filepath'='/path/to/uem/filepath'}
import json
meta = {
'audio_filepath': an4_audio,
'offset': 0,
'duration':None,
'label': 'infer',
'text': '-',
'num_speakers': 2,
'rttm_filepath': an4_rttm,
'uem_filepath' : None
}
with open('data/input_manifest.json','w') as fp:
json.dump(meta,fp)
fp.write('\n')
!cat data/input_manifest.json
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
```
# ORACLE-VAD DIARIZATION
Oracle-VAD diarization computes speaker embeddings from known speech label time stamps rather than depending on VAD output. This step can also be used to run speaker diarization with rttms generated by any external VAD, not just the VAD model from NeMo.
The first step is to convert the reference audio rttm (VAD) time stamps to an oracle manifest file. This manifest file is sent to our speaker diarizer to extract embeddings.
This is just an argument in our config; the system automatically computes the oracle manifest based on the rttms provided through the input manifest file.
Our config file is based on [hydra](https://hydra.cc/docs/intro/).
With a hydra config, users must provide values for the variables filled with **???**; these are mandatory fields that the scripts expect for successful runs. Variables filled with **null** are optional; they can be provided if needed but are not mandatory.
```
from omegaconf import OmegaConf
MODEL_CONFIG = os.path.join(data_dir,'offline_diarization.yaml')
if not os.path.exists(MODEL_CONFIG):
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_tasks/diarization/conf/offline_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
```
Now we can perform speaker diarization based on timestamps generated from ground truth rttms rather than generating through VAD
```
pretrained_speaker_model='ecapa_tdnn'
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = True # ----> ORACLE VAD
config.diarizer.clustering.parameters.oracle_num_speakers = True
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
```
A DER of 0 means the speaker embeddings were clustered correctly. Let's view the prediction:
```
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
```
# VAD DIARIZATION
In this method we compute VAD time stamps by running the NeMo VAD model on the input manifest file, then use these speech time stamps to extract speaker embeddings, which are finally clustered into the number of speakers.
Before we proceed, let's look at the speaker diarization config, which we will depend on for VAD computation and speaker embedding extraction:
```
print(OmegaConf.to_yaml(config))
```
As can be seen, most of the variables in the config are self-explanatory, with VAD variables under the vad section and speaker-related variables under the speaker embeddings section.
To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now and fill in the rest. We also need to provide the pretrained `model_path` of the VAD and speaker embeddings .nemo models.
```
pretrained_vad = 'vad_marblenet'
pretrained_speaker_model = 'ecapa_tdnn'
```
Note that in this tutorial we use the VAD model MarbleNet-3x2 introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.
The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the voxceleb1, voxceleb2, Fisher and SwitchBoard datasets. For improved performance specific to your dataset, finetune the speaker verification model with a dev set similar to your test set.
```
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.manifest_filepath = 'data/input_manifest.json'
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
config.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5
config.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75
config.diarizer.oracle_vad = False # compute VAD provided with model_path to vad config
config.diarizer.clustering.parameters.oracle_num_speakers=True
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.parameters.onset = 0.8
config.diarizer.vad.parameters.offset = 0.6
config.diarizer.vad.parameters.min_duration_on = 0.1
config.diarizer.vad.parameters.min_duration_off = 0.4
```
Now that we have passed all the required variables, let's initialize the clustering model with the above config:
```
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
```
And diarize with a single line of code:
```
sd_model.diarize()
```
As can be seen, we first performed VAD; then, using the time stamps written by VAD to `{output_dir}/vad_outputs`, we computed speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then grouped with spectral clustering.
To generate the VAD-predicted time stamps, we run VAD inference to get frame-level predictions → (optionally) apply decision smoothing → and, given the `threshold`, write the speech segments to an RTTM-like time-stamp manifest.
We use VAD decision smoothing (87.5% overlap median) as described [here](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/asr/parts/utils/vad_utils.py).
You can also tune the threshold on your dev set with this provided [script](https://github.com/NVIDIA/NeMo/blob/stable/scripts/voice_activity_detection/vad_tune_threshold.py).
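To make the onset/offset idea concrete, here is a tiny pure-Python sketch of threshold-based binarization; it only illustrates the concept, not NeMo's actual implementation, and the probabilities and frame length are made up:
```
import numpy as np

# Toy frame-level speech probabilities (one value per 10 ms frame) -- illustrative only.
probs = np.array([0.1, 0.2, 0.85, 0.9, 0.7, 0.65, 0.3, 0.1])
onset, offset, frame_sec = 0.8, 0.6, 0.01

segments, start = [], None
for i, p in enumerate(probs):
    if start is None and p >= onset:        # enter speech when prob rises above onset
        start = i
    elif start is not None and p < offset:  # leave speech when prob falls below offset
        segments.append((start * frame_sec, i * frame_sec))
        start = None
if start is not None:
    segments.append((start * frame_sec, len(probs) * frame_sec))
print(segments)  # [(0.02, 0.06)]
```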
```
# VAD predicted time stamps
# you can also use single threshold(=onset=offset) for binarization and plot here
from nemo.collections.asr.parts.utils.vad_utils import plot
plot(
an4_audio,
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
an4_rttm,
per_args = config.diarizer.vad.parameters, #threshold
)
print(f"postprocessing_params: {config.diarizer.vad.parameters}")
```
Predicted outputs are written to `output_dir/pred_rttms`; let's see the predictions along with the VAD output:
```
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
```
# Storing and Restoring models
Now we can save the whole config and model parameters to a single .nemo file and restore from it at any time.
```
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
```
Restore from saved model
```
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
```
# ADD ON - ASR
```
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip([an4_audio], quartznet.transcribe(paths2audio_files=[an4_audio])):
print(f"Audio in {fname} was recognized as:\n{transcription}")
```
#### demo: training a DND LSTM on a contextual choice task
This is an implementation of the following paper:
```
Ritter, S., Wang, J. X., Kurth-Nelson, Z., Jayakumar, S. M., Blundell, C., Pascanu, R., & Botvinick, M. (2018).
Been There, Done That: Meta-Learning with Episodic Recall. arXiv [stat.ML].
Retrieved from http://arxiv.org/abs/1805.09692
```
```
'''
If you are using google colab, uncomment and run the following lines!
which grabs the dependencies from github
'''
# !git clone https://github.com/qihongl/dnd-lstm.git
# !cd dnd-lstm/src/
# import os
# os.chdir('dnd-lstm/src/')
import time
import torch
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from task import ContextualChoice
from model import DNDLSTM as Agent
from utils import compute_stats, to_sqnp
from model.DND import compute_similarities
from model.utils import get_reward, compute_returns, compute_a2c_loss
sns.set(style='white', context='talk', palette='colorblind')
seed_val = 0
torch.manual_seed(seed_val)
np.random.seed(seed_val)
'''init task'''
n_unique_example = 50
n_trials = 2 * n_unique_example
# n time steps of a trial
trial_length = 10
# after `t_noise_off` steps, turn off the noise
t_noise_off = 5
# input/output/hidden/memory dim
obs_dim = 32
task = ContextualChoice(
obs_dim=obs_dim, trial_length=trial_length,
t_noise_off=t_noise_off
)
'''init model'''
# set params
dim_hidden = 32
dim_output = 2
dict_len = 100
learning_rate = 5e-4
n_epochs = 20
# init agent / optimizer
agent = Agent(task.x_dim, dim_hidden, dim_output, dict_len)
optimizer = torch.optim.Adam(agent.parameters(), lr=learning_rate)
'''train'''
log_return = np.zeros(n_epochs,)
log_loss_value = np.zeros(n_epochs,)
log_loss_policy = np.zeros(n_epochs,)
log_Y = np.zeros((n_epochs, n_trials, trial_length))
log_Y_hat = np.zeros((n_epochs, n_trials, trial_length))
# loop over epoch
for i in range(n_epochs):
time_start = time.time()
# get data for this epoch
X, Y = task.sample(n_unique_example)
# flush hippocampus
agent.reset_memory()
agent.turn_on_retrieval()
# loop over the training set
for m in range(n_trials):
# prealloc
cumulative_reward = 0
probs, rewards, values = [], [], []
h_t, c_t = agent.get_init_states()
# loop over time, for one training example
for t in range(trial_length):
# only save memory at the last time point
agent.turn_off_encoding()
if t == trial_length-1 and m < n_unique_example:
agent.turn_on_encoding()
# recurrent computation at time t
output_t, _ = agent(X[m][t].view(1, 1, -1), h_t, c_t)
a_t, prob_a_t, v_t, h_t, c_t = output_t
# compute immediate reward
r_t = get_reward(a_t, Y[m][t])
# log
probs.append(prob_a_t)
rewards.append(r_t)
values.append(v_t)
# log
cumulative_reward += r_t
log_Y_hat[i, m, t] = a_t.item()
returns = compute_returns(rewards)
loss_policy, loss_value = compute_a2c_loss(probs, values, returns)
loss = loss_policy + loss_value
optimizer.zero_grad()
loss.backward()
optimizer.step()
# log
log_Y[i] = np.squeeze(Y.numpy())
log_return[i] += cumulative_reward / n_trials
log_loss_value[i] += loss_value.item() / n_trials
log_loss_policy[i] += loss_policy.item() / n_trials
# print out some stuff
time_end = time.time()
run_time = time_end - time_start
print(
'Epoch %3d | return = %.2f | loss: val = %.2f, pol = %.2f | time = %.2f' %
(i, log_return[i], log_loss_value[i], log_loss_policy[i], run_time)
)
'''learning curve'''
f, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].plot(log_return)
axes[0].set_ylabel('Return')
axes[0].set_xlabel('Epoch')
axes[1].plot(log_loss_value)
axes[1].set_ylabel('Value loss')
axes[1].set_xlabel('Epoch')
sns.despine()
f.tight_layout()
'''show behavior'''
corrects = log_Y_hat[-1] == log_Y[-1]
acc_mu_no_memory, acc_se_no_memory = compute_stats(
corrects[:n_unique_example])
acc_mu_has_memory, acc_se_has_memory = compute_stats(
corrects[n_unique_example:])
n_se = 2
f, ax = plt.subplots(1, 1, figsize=(7, 4))
ax.errorbar(range(trial_length), y=acc_mu_no_memory,
yerr=acc_se_no_memory * n_se, label='w/o memory')
ax.errorbar(range(trial_length), y=acc_mu_has_memory,
yerr=acc_se_has_memory * n_se, label='w/ memory')
ax.axvline(t_noise_off, label='turn off noise', color='grey', linestyle='--')
ax.set_xlabel('Time')
ax.set_ylabel('Correct rate')
ax.set_title('Choice accuracy by condition')
f.legend(frameon=False, bbox_to_anchor=(1, .6))
sns.despine()
f.tight_layout()
'''visualize keys and values'''
keys, vals = agent.get_all_mems()
n_mems = len(agent.dnd.keys)
dmat_kk, dmat_vv = np.zeros((n_mems, n_mems)), np.zeros((n_mems, n_mems))
for i in range(n_mems):
dmat_kk[i, :] = to_sqnp(compute_similarities(
keys[i], keys, agent.dnd.kernel))
dmat_vv[i, :] = to_sqnp(compute_similarities(
vals[i], vals, agent.dnd.kernel))
# plot
dmats = {'key': dmat_kk, 'value': dmat_vv}
f, axes = plt.subplots(1, 2, figsize=(12, 5))
for i, (label_i, dmat_i) in enumerate(dmats.items()):
sns.heatmap(dmat_i, cmap='viridis', square=True, ax=axes[i])
axes[i].set_xlabel(f'id, {label_i} i')
axes[i].set_ylabel(f'id, {label_i} j')
axes[i].set_title(
f'{label_i}-{label_i} similarity, metric = {agent.dnd.kernel}'
)
f.tight_layout()
'''project memory content to low dim space'''
# convert the values to a np array, #memories x mem_dim
vals_np = np.vstack([to_sqnp(vals[i]) for i in range(n_mems)])
# project to PC space
vals_centered = (vals_np - np.mean(vals_np, axis=0, keepdims=True))
U, S, _ = np.linalg.svd(vals_centered, full_matrices=False)
vals_pc = np.dot(U, np.diag(S))
# pick pcs
pc_x = 0
pc_y = 1
# plot
f, ax = plt.subplots(1, 1, figsize=(7, 5))
Y_phase2 = to_sqnp(Y[:n_unique_example, 0])
for y_val in np.unique(Y_phase2):
ax.scatter(
vals_pc[Y_phase2 == y_val, pc_x],
vals_pc[Y_phase2 == y_val, pc_y],
marker='o', alpha=.7,
)
ax.set_title(f'Each point is a memory (i.e. value)')
ax.set_xlabel(f'PC {pc_x}')
ax.set_ylabel(f'PC {pc_y}')
ax.legend(['left trial', 'right trial'], bbox_to_anchor=(.6, .3))
sns.despine(offset=20)
f.tight_layout()
```
# Support Vector Machines (SVM) with Sklearn
This notebook creates and evaluates a [LinearSVC with Sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC). It has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples compared to SVC.
* Method: LinearSVC
* Dataset: Iris
## Imports
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
from mlxtend.evaluate import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
%matplotlib inline
```
## Load and Prepare the Data
```
# Load the data
data = load_iris()
# Show the information about the dataset
print(data.DESCR)
# Split the data into labels (targets) and features
label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']
# View the data
print(label_names)
print(labels[0])
print("")
print(feature_names)
print(features[0])
# Create test and training sets
X_train, X_test, Y_train, Y_test = train_test_split(features,
labels,
test_size=0.33,
random_state=42)
```
## Fit a LinearSVC Model
Parameters
* C: controls how much the SVM optimization tries to avoid misclassifying each training example (the effect is illustrated in the sketch after this list)
* If C is large: the hyperplane does a better job of getting all the training points classified correctly
* If C is small: the optimizer will look for a larger-margin separating hyperplane even if that hyperplane misclassifies more points
* random_state: seed of the pseudo random number generator to use when shuffling the data
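As a rough illustration of the C parameter (the values below are arbitrary and the exact accuracies depend on your scikit-learn version), we can fit the model with several settings and compare test accuracy:
```
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

data = load_iris()
X_train, X_test, Y_train, Y_test = train_test_split(
    data.data, data.target, test_size=0.33, random_state=42)

for C in (0.01, 1.0, 100.0):
    clf = LinearSVC(C=C, random_state=42, max_iter=10000)
    clf.fit(X_train, Y_train)
    print("C=%-6s test accuracy: %.2f" % (C, clf.score(X_test, Y_test)))
```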
```
# Create an instance of the LinearSVC classifier
model = LinearSVC(C=1.0, random_state=42)
# Train the model
model.fit(X_train, Y_train)
model
# Show the intercepts
print("Intercepts: {}".format(model.intercept_))
```
## Create Predictions
```
# Create predictions
predictions = model.predict(X_test)
print(predictions)
# Create a plot to compare actual labels (Y_test) and the predicted labels (predictions)
fig = plt.figure(figsize=(20,10))
plt.scatter(Y_test, predictions)
plt.xlabel("Actual Label: $Y_i$")
plt.ylabel("Predicted Label: $\hat{Y}_i$")
plt.title("Actual vs. Predicted Label: $Y_i$ vs. $\hat{Y}_i$")
plt.show()
```
## Model Evaluation
### Accuracy
The accuracy score is either the fraction (default) or the count (normalize=False) of correct predictions.
```
print("Accuracy Score: %.2f" % accuracy_score(Y_test, predictions))
```
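For the count variant mentioned above, passing `normalize=False` returns the number of correct predictions instead of the fraction (this reuses `Y_test` and `predictions` from the cells above):
```
# normalize=False returns the number of correctly classified samples instead of the fraction
print("Correct predictions: %d / %d" % (
    accuracy_score(Y_test, predictions, normalize=False), len(Y_test)))
```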
### K-Fold Cross Validation
This estimates the accuracy of an SVM model by splitting the data, fitting a model and computing the score 5 consecutive times. The result is a list of the scores from each consecutive run.
```
# Get scores for 5 folds over the data
clf = LinearSVC(C=1.0, random_state=42)
scores = cross_val_score(clf, data.data, data.target, cv=5)
print(scores)
```
### Confusion Matrix
**Confusion Matrix for Binary Label**

```
# Plot the multi-label confusion matrix
print("Labels:")
for label in label_names:
i, = np.where(label_names == label)
print("{}: {}".format(i, label))
cm = confusion_matrix(y_target=Y_test,
y_predicted=predictions,
binary=False)
fig, ax = plot_confusion_matrix(conf_mat=cm)
plt.title("Confusion Matrix")
plt.show()
```
# Gaussian Mixture Model
This tutorial demonstrates how to marginalize out discrete latent variables in Pyro through the motivating example of a mixture model. We'll focus on the mechanics of parallel enumeration, keeping the model simple by training a trivial 1-D Gaussian model on a tiny 5-point dataset. See also the [enumeration tutorial](http://pyro.ai/examples/enumeration.html) for a broader introduction to parallel enumeration.
#### Table of contents
- [Overview](#Overview)
- [Training a MAP estimator](#Training-a-MAP-estimator)
- [Serving the model: predicting membership](#Serving-the-model:-predicting-membership)
- [Predicting membership using discrete inference](#Predicting-membership-using-discrete-inference)
- [Predicting membership by enumerating in the guide](#Predicting-membership-by-enumerating-in-the-guide)
- [MCMC](#MCMC)
```
import os
from collections import defaultdict
import torch
import numpy as np
import scipy.stats
from torch.distributions import constraints
from matplotlib import pyplot
%matplotlib inline
import pyro
import pyro.distributions as dist
from pyro import poutine
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam
from pyro.infer import SVI, TraceEnum_ELBO, config_enumerate, infer_discrete
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.3.1')
pyro.enable_validation(True)
```
## Overview
Pyro's [TraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.traceenum_elbo.TraceEnum_ELBO) can automatically marginalize out variables in both the guide and the model. When enumerating guide variables, Pyro can either enumerate sequentially (which is useful if the variables determine downstream control flow), or enumerate in parallel by allocating a new tensor dimension and using nonstandard evaluation to create a tensor of possible values at the variable's sample site. These nonstandard values are then replayed in the model. When enumerating variables in the model, the variables must be enumerated in parallel and must not appear in the guide. Mathematically, guide-side enumeration simply reduces variance in a stochastic ELBO by enumerating all values, whereas model-side enumeration avoids an application of Jensen's inequality by exactly marginalizing out a variable.
Here is our tiny dataset. It has five points.
```
data = torch.tensor([0., 1., 10., 11., 12.])
```
## Training a MAP estimator
Let's start by learning model parameters `weights`, `locs`, and `scale` given priors and data. We will learn point estimates of these using an [AutoDelta](http://docs.pyro.ai/en/dev/infer.autoguide.html#autodelta) guide (named after its delta distributions). Our model will learn global mixture weights, the location of each mixture component, and a shared scale that is common to both components. During inference, [TraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.traceenum_elbo.TraceEnum_ELBO) will marginalize out the assignments of datapoints to clusters.
```
K = 2 # Fixed number of components.
@config_enumerate
def model(data):
# Global variables.
weights = pyro.sample('weights', dist.Dirichlet(0.5 * torch.ones(K)))
scale = pyro.sample('scale', dist.LogNormal(0., 2.))
with pyro.plate('components', K):
locs = pyro.sample('locs', dist.Normal(0., 10.))
with pyro.plate('data', len(data)):
# Local variables.
assignment = pyro.sample('assignment', dist.Categorical(weights))
pyro.sample('obs', dist.Normal(locs[assignment], scale), obs=data)
```
To run inference with this `(model,guide)` pair, we use Pyro's [config_enumerate()](http://docs.pyro.ai/en/dev/poutine.html#pyro.infer.enum.config_enumerate) handler to enumerate over all assignments in each iteration. Since we've wrapped the batched Categorical assignments in a [pyro.plate](http://docs.pyro.ai/en/dev/primitives.html#pyro.plate) independence context, this enumeration can happen in parallel: we enumerate only 2 possibilities, rather than `2**len(data) = 32`. Finally, to use the parallel version of enumeration, we inform Pyro that we're only using a single [plate](http://docs.pyro.ai/en/dev/primitives.html#pyro.plate) via `max_plate_nesting=1`; this lets Pyro know that we're using the rightmost dimension [plate](http://docs.pyro.ai/en/dev/primitives.html#pyro.plate) and that Pyro can use any other dimension for parallelization.
```
optim = pyro.optim.Adam({'lr': 0.1, 'betas': [0.8, 0.99]})
elbo = TraceEnum_ELBO(max_plate_nesting=1)
```
Before inference we'll initialize to plausible values. Mixture models are very susceptible to local modes. A common approach is to choose the best among many random initializations, where the cluster means are initialized from random subsamples of the data. Since we're using an [AutoDelta](http://docs.pyro.ai/en/dev/infer.autoguide.html#autodelta) guide, we can initialize by defining a custom ``init_loc_fn()``.
```
def init_loc_fn(site):
if site["name"] == "weights":
# Initialize weights to uniform.
return torch.ones(K) / K
if site["name"] == "scale":
return (data.var() / 2).sqrt()
if site["name"] == "locs":
return data[torch.multinomial(torch.ones(len(data)) / len(data), K)]
raise ValueError(site["name"])
def initialize(seed):
global global_guide, svi
pyro.set_rng_seed(seed)
pyro.clear_param_store()
global_guide = AutoDelta(poutine.block(model, expose=['weights', 'locs', 'scale']),
init_loc_fn=init_loc_fn)
svi = SVI(model, global_guide, optim, loss=elbo)
return svi.loss(model, global_guide, data)
# Choose the best among 100 random initializations.
loss, seed = min((initialize(seed), seed) for seed in range(100))
initialize(seed)
print('seed = {}, initial_loss = {}'.format(seed, loss))
```
During training, we'll collect both losses and gradient norms to monitor convergence. We can do this using PyTorch's `.register_hook()` method.
```
# Register hooks to monitor gradient norms.
gradient_norms = defaultdict(list)
for name, value in pyro.get_param_store().named_parameters():
value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))
losses = []
for i in range(200 if not smoke_test else 2):
loss = svi.step(data)
losses.append(loss)
print('.' if i % 100 else '\n', end='')
pyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')
pyplot.plot(losses)
pyplot.xlabel('iters')
pyplot.ylabel('loss')
pyplot.yscale('log')
pyplot.title('Convergence of SVI');
pyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')
for name, grad_norms in gradient_norms.items():
pyplot.plot(grad_norms, label=name)
pyplot.xlabel('iters')
pyplot.ylabel('gradient norm')
pyplot.yscale('log')
pyplot.legend(loc='best')
pyplot.title('Gradient norms during SVI');
```
Here are the learned parameters:
```
map_estimates = global_guide(data)
weights = map_estimates['weights']
locs = map_estimates['locs']
scale = map_estimates['scale']
print('weights = {}'.format(weights.data.numpy()))
print('locs = {}'.format(locs.data.numpy()))
print('scale = {}'.format(scale.data.numpy()))
```
The model's `weights` are as expected, with about 2/5 of the data in the first component and 3/5 in the second component. Next let's visualize the mixture model.
```
X = np.arange(-3,15,0.1)
Y1 = weights[0].item() * scipy.stats.norm.pdf((X - locs[0].item()) / scale.item())
Y2 = weights[1].item() * scipy.stats.norm.pdf((X - locs[1].item()) / scale.item())
pyplot.figure(figsize=(10, 4), dpi=100).set_facecolor('white')
pyplot.plot(X, Y1, 'r-')
pyplot.plot(X, Y2, 'b-')
pyplot.plot(X, Y1 + Y2, 'k--')
pyplot.plot(data.data.numpy(), np.zeros(len(data)), 'k*')
pyplot.title('Density of two-component mixture model')
pyplot.ylabel('probability density');
```
Finally note that optimization with mixture models is non-convex and can often get stuck in local optima. For example, in this tutorial we observed that the mixture model gets stuck in an everything-in-one-cluster hypothesis if `scale` is initialized to be too large.
## Serving the model: predicting membership
Now that we've trained a mixture model, we might want to use the model as a classifier.
During training we marginalized out the assignment variables in the model. While this provides fast convergence, it prevents us from reading the cluster assignments from the guide. We'll discuss two options for treating the model as a classifier: first using [infer_discrete](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.discrete.infer_discrete) (much faster) and second by training a secondary guide using enumeration inside SVI (slower but more general).
### Predicting membership using discrete inference
The fastest way to predict membership is to use the [infer_discrete](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.discrete.infer_discrete) handler, together with `trace` and `replay`. Let's start out with a MAP classifier, setting `infer_discrete`'s temperature parameter to zero. For a deeper look at effect handlers like `trace`, `replay`, and `infer_discrete`, see the [effect handler tutorial](http://pyro.ai/examples/effect_handlers.html).
```
guide_trace = poutine.trace(global_guide).get_trace(data) # record the globals
trained_model = poutine.replay(model, trace=guide_trace) # replay the globals
def classifier(data, temperature=0):
inferred_model = infer_discrete(trained_model, temperature=temperature,
first_available_dim=-2) # avoid conflict with data plate
trace = poutine.trace(inferred_model).get_trace(data)
return trace.nodes["assignment"]["value"]
print(classifier(data))
```
Indeed, we can run this classifier on new data:
```
new_data = torch.arange(-3, 15, 0.1)
assignment = classifier(new_data)
pyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')
pyplot.plot(new_data.numpy(), assignment.numpy())
pyplot.title('MAP assignment')
pyplot.xlabel('data value')
pyplot.ylabel('class assignment');
```
To generate random posterior assignments rather than MAP assignments, we could set `temperature=1`.
```
print(classifier(data, temperature=1))
```
Since the classes are very well separated, we zoom in to the boundary between classes, around 5.75.
```
new_data = torch.arange(5.5, 6.0, 0.005)
assignment = classifier(new_data, temperature=1)
pyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')
pyplot.plot(new_data.numpy(), assignment.numpy(), 'bx', color='C0')
pyplot.title('Random posterior assignment')
pyplot.xlabel('data value')
pyplot.ylabel('class assignment');
```
### Predicting membership by enumerating in the guide
A second way to predict class membership is to enumerate in the guide. This doesn't work well for serving classifier models, since we need to run stochastic optimization for each new input data batch, but it is more general in that it can be embedded in larger variational models.
To read cluster assignments from the guide, we'll define a new `full_guide` that fits both global parameters (as above) and local parameters (which were previously marginalized out). Since we've already learned good values for the global variables, we will block SVI from updating those by using [poutine.block](http://docs.pyro.ai/en/dev/poutine.html#pyro.poutine.block).
```
@config_enumerate
def full_guide(data):
# Global variables.
with poutine.block(hide_types=["param"]): # Keep our learned values of global parameters.
global_guide(data)
# Local variables.
with pyro.plate('data', len(data)):
assignment_probs = pyro.param('assignment_probs', torch.ones(len(data), K) / K,
constraint=constraints.unit_interval)
pyro.sample('assignment', dist.Categorical(assignment_probs))
optim = pyro.optim.Adam({'lr': 0.2, 'betas': [0.8, 0.99]})
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, full_guide, optim, loss=elbo)
# Register hooks to monitor gradient norms.
gradient_norms = defaultdict(list)
svi.loss(model, full_guide, data) # Initializes param store.
for name, value in pyro.get_param_store().named_parameters():
value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))
losses = []
for i in range(200 if not smoke_test else 2):
loss = svi.step(data)
losses.append(loss)
print('.' if i % 100 else '\n', end='')
pyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')
pyplot.plot(losses)
pyplot.xlabel('iters')
pyplot.ylabel('loss')
pyplot.yscale('log')
pyplot.title('Convergence of SVI');
pyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')
for name, grad_norms in gradient_norms.items():
pyplot.plot(grad_norms, label=name)
pyplot.xlabel('iters')
pyplot.ylabel('gradient norm')
pyplot.yscale('log')
pyplot.legend(loc='best')
pyplot.title('Gradient norms during SVI');
```
We can now examine the guide's local `assignment_probs` variable.
```
assignment_probs = pyro.param('assignment_probs')
pyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')
pyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 0], 'ro',
label='component with mean {:0.2g}'.format(locs[0]))
pyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 1], 'bo',
label='component with mean {:0.2g}'.format(locs[1]))
pyplot.title('Mixture assignment probabilities')
pyplot.xlabel('data value')
pyplot.ylabel('assignment probability')
pyplot.legend(loc='center');
```
## MCMC
Next we'll explore the full posterior over component parameters using collapsed NUTS, i.e. we'll use NUTS and marginalize out all discrete latent variables.
```
from pyro.infer.mcmc.api import MCMC
from pyro.infer.mcmc import NUTS
pyro.set_rng_seed(2)
kernel = NUTS(model)
mcmc = MCMC(kernel, num_samples=250, warmup_steps=50)
mcmc.run(data)
posterior_samples = mcmc.get_samples()
X, Y = posterior_samples["locs"].t()
pyplot.figure(figsize=(8, 8), dpi=100).set_facecolor('white')
h, xs, ys, image = pyplot.hist2d(X.numpy(), Y.numpy(), bins=[20, 20])
pyplot.contour(np.log(h + 3).T, extent=[xs.min(), xs.max(), ys.min(), ys.max()],
colors='white', alpha=0.8)
pyplot.title('Posterior density as estimated by collapsed NUTS')
pyplot.xlabel('loc of component 0')
pyplot.ylabel('loc of component 1')
pyplot.tight_layout()
```
Note that due to nonidentifiability of the mixture components the likelihood landscape has two equally likely modes, near `(11,0.5)` and `(0.5,11)`. NUTS has difficulty switching between the two modes.
```
pyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')
pyplot.plot(X.numpy(), color='red')
pyplot.plot(Y.numpy(), color='blue')
pyplot.xlabel('NUTS step')
pyplot.ylabel('loc')
pyplot.title('Trace plot of loc parameter during NUTS inference')
pyplot.tight_layout()
```
<a href="https://colab.research.google.com/github/shakasom/MapsDataScience/blob/master/Chapter4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Making sense of humongous location datasets
## Installations
The geospatial libraries are not preinstalled in Google Colab as standard Python libraries, so we need to install them before use. Luckily this is an easy process: you can use either apt install or pip install. You could also create an Anaconda environment, but that is more complex; pip and apt are enough in our case. These are the libraries we need to install in this tutorial:
* GDAL
* Geopandas
* Folium
The installation might take about a minute.
```
%%time
!apt update --quiet
!apt upgrade --quiet
# GDAL Important library for many geopython libraries
!apt install gdal-bin python-gdal python3-gdal --quiet
# Install rtree - Geopandas requirment
!apt install python3-rtree --quiet
# Install Geopandas
!pip install git+git://github.com/geopandas/geopandas.git --quiet
# Install descartes - Geopandas requirment
!pip install descartes --quiet
# Install Folium for Geographic data visualization
!pip install folium --quiet
# Install Pysal
!pip install pysal --quiet
# Install splot --> pysal
!pip install splot --quiet
# Install mapclassify
!pip install mapclassify --quiet
import pandas as pd
import numpy as np
import geopandas as gpd
from shapely.geometry import Point
from pysal.explore import esda
from pysal.lib import weights
#import libysal as lps
from pysal.viz.splot.esda import plot_moran, plot_local_autocorrelation, lisa_cluster
import matplotlib
import matplotlib.pyplot as plt
import folium
import os
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.cluster import dbscan
from libpysal.weights.contiguity import Queen
from esda.moran import Moran
from splot.esda import moran_scatterplot
from esda.moran import Moran_Local
from splot.esda import lisa_cluster
import pysal as ps
ps.__version__
```
## Data
The dataset for this chapter is stored at a Dropbox link. Accessing data on the web is a valuable skill, so we will use wget, a great utility for downloading files from the web that supports different protocols.
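If wget is not available, roughly the same download can be done with the Python standard library; the sketch below assumes Dropbox serves a direct download when the `dl=1` query parameter is appended, which may need adjusting:
```
import urllib.request

# Same archive as the wget call below; the ?dl=1 suffix asks Dropbox for a direct download.
url = "https://www.dropbox.com/s/xvs0ybc402mkrn8/2019-02-avon-and-somerset-street.zip?dl=1"
urllib.request.urlretrieve(url, "2019-02-avon-and-somerset-street.zip")
```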
```
# Get the data from dropbox link
!wget https://www.dropbox.com/s/xvs0ybc402mkrn8/2019-02-avon-and-somerset-street.zip --quiet
# see the folders available
import os
os.listdir(os.getcwd())
# We have zipped data so let us unzip it
!unzip 2019-02-avon-and-somerset-street.zip
crime_somerset = pd.read_csv("2019-02-avon-and-somerset-street.csv")
crime_somerset.head()
crime_somerset.shape
crime_somerset.isnull().sum()
# Drop columns with high missing values
crime_somerset.drop(['Last outcome category','Context', 'Crime ID' ], axis=1, inplace=True)
crime_somerset.head()
crime_somerset.isnull().sum()
# Drop rows with missing values
crime_somerset.dropna(axis=0,inplace=True)
crime_somerset.isnull().sum()
crime_somerset.shape
crime_somerset.head()
```
### Convert to GeoDataFrame
```
# Function to create a Geodataframe
def create_gdf(df, lat, lon):
""" Convert pandas dataframe into a Geopandas GeoDataFrame"""
crs = {'init': 'epsg:4326'}
geometry = [Point(xy) for xy in zip(df[lon], df[lat])]
gdf = gpd.GeoDataFrame(df, crs=crs, geometry=geometry)
return gdf
crime_somerset_gdf = create_gdf(crime_somerset, 'Latitude', 'Longitude')
crime_somerset_gdf.head()
fig, ax = plt.subplots(figsize=(12,10))
crime_somerset_gdf.plot(markersize=20, ax=ax);
plt.savefig('crime_somerset_map.png', bbox_inches='tight')
```
## KMeans Clustering Location Data
```
crime_somerset_gdf.head()
```
* Split training and test dataset
```
train = crime_somerset_gdf.sample(frac=0.7, random_state=14)
test = crime_somerset_gdf.drop(train.index)
train.shape, test.shape
# Get coordinates for the train and test dataset
train_coords = train[['Latitude', 'Longitude']].values
test_coords = test[['Latitude', 'Longitude']].values
# Fit Kmeans clustering on training dataset
kmeans = KMeans(n_clusters=5)
kmeans.fit(train_coords)
# Predict on the test dataset by clustering
preds = kmeans.predict(test_coords)
# Get centers of the clusters
centers = kmeans.cluster_centers_
fig, ax = plt.subplots(figsize=(12,10))
plt.scatter(test_coords[:, 0], test_coords[:, 1], c=preds, s=30, cmap='viridis')
plt.scatter(centers[:,0], centers[:,1], c='Red', marker="s", s=50);
```
## DBSCAN
### Detecting Outliers/Noise
```
coords = crime_somerset_gdf[['Latitude', 'Longitude']]
coords[:5]
# Get labels of each cluster
_, labels = dbscan(crime_somerset_gdf[['Latitude', 'Longitude']], eps=0.1, min_samples=10)
# Create a labels dataframe with the index of the dataset
labels_df = pd.DataFrame(labels, index=crime_somerset_gdf.index, columns=['cluster'])
labels_df.head()
# Groupby Labels
labels_df.groupby('cluster').size()
# Plot the groupedby labels
sns.countplot(labels_df.cluster);
plt.show()
# Get Noise (Outliers) with label -1
noise = crime_somerset_gdf.loc[labels_df['cluster']==-1, ['Latitude', 'Longitude']]
# Get core with labels 0
core = crime_somerset_gdf.loc[labels_df['cluster']== 0, ['Latitude', 'Longitude']]
# Display scatter plot with noises as stars and core as circle points
fig, ax = plt.subplots(figsize=(12,10))
ax.scatter(noise['Latitude'], noise['Longitude'],marker= '*', s=40, c='blue' )
ax.scatter(core['Latitude'], core['Longitude'], marker= 'o', s=20, c='red')
plt.savefig('outliers.png');
plt.show();
noise
```
### Detecting Clusters
```
_, labels = dbscan(crime_somerset_gdf[['Latitude', 'Longitude']], eps=0.01, min_samples=300)
labels_df = pd.DataFrame(labels, index=crime_somerset_gdf.index, columns=['cluster'])
labels_df.groupby('cluster').size()
noise = crime_somerset_gdf.loc[labels_df['cluster']==-1, ['Latitude', 'Longitude']]
core = crime_somerset_gdf.loc[labels_df['cluster']== 0, ['Latitude', 'Longitude']]
bp1 = crime_somerset_gdf.loc[labels_df['cluster']== 1, ['Latitude', 'Longitude']]
bp2 = crime_somerset_gdf.loc[labels_df['cluster']== 2, ['Latitude', 'Longitude']]
bp3 = crime_somerset_gdf.loc[labels_df['cluster']== 3, ['Latitude', 'Longitude']]
fig, ax = plt.subplots(figsize=(12,10))
ax.scatter(noise['Latitude'], noise['Longitude'], s=10, c='gray')  # 's' sets marker size; 'markers' is not a valid argument
ax.scatter(core['Latitude'], core['Longitude'], s=100, c='red')
ax.scatter(bp1['Latitude'], bp1['Longitude'], s=50, c='yellow')
ax.scatter(bp2['Latitude'], bp2['Longitude'], s=50, c='green')
ax.scatter(bp3['Latitude'], bp3['Longitude'], s=50, c='blue')
plt.savefig('cluster_ex1.png');
plt.show()
fig, ax = plt.subplots(figsize=(15,12))
ax.scatter(noise['Latitude'], noise['Longitude'],s=1, c='gray' )
ax.scatter(core['Latitude'], core['Longitude'],marker= "*", s=10, c='red')
ax.scatter(bp1['Latitude'], bp1['Longitude'], marker = "v", s=10, c='yellow')
ax.scatter(bp2['Latitude'], bp2['Longitude'], marker= "P", s=10, c='green')
ax.scatter(bp3['Latitude'], bp3['Longitude'], marker= "d", s=10, c='blue')
ax.set_xlim(left=50.8, right=51.7)
ax.set_ylim(bottom=-3.5, top=-2.0)
plt.savefig('cluster_zoomed.png');
plt.show()
# Create a 2x2 grid of axes and access them through the returned array
fig, axes = plt.subplots(2, 2, figsize=(12,10))
axes[0, 0].scatter(noise['Latitude'], noise['Longitude'],s=0.01, c='gray' )
axes[0, 0].title.set_text('Noise')
axes[0, 1].scatter(core['Latitude'], core['Longitude'],marker= "*", s=10, c='red')
axes[0, 1].title.set_text('Core')
axes[1, 0].scatter(bp1['Latitude'], bp1['Longitude'], marker = "v", s=50, c='yellow')
axes[1, 0].title.set_text('Border Points 1')
axes[1,1].scatter(bp2['Latitude'], bp2['Longitude'], marker= "P", s=50, c='green')
axes[1, 1].title.set_text('Border Points 2')
plt.tight_layout()
plt.show()
```
## Spatial Autocorrelation
We will use polygon data for this section. Let us first get the data from the Dropbox URL.
```
!wget https://www.dropbox.com/s/k2ynddy79k2r46i/ASC_Beats_2016.zip
!unzip ASC_Beats_2016.zip
boundaries = gpd.read_file('ASC_Beats_2016.shp')
boundaries.head()
boundaries.crs, crime_somerset_gdf.crs
boundaries_4326 = boundaries.to_crs({'init': 'epsg:4326'})
fig, ax = plt.subplots(figsize=(12,10))
boundaries_4326.plot(ax=ax)
crime_somerset_gdf.plot(ax=ax, markersize=10, color='red')
plt.savefig('overlayed_map.png')
# Points in Polygon
crimes_with_boundaries = gpd.sjoin(boundaries_4326,crime_somerset_gdf, op='contains' )
crimes_with_boundaries.head()
grouped_crimes = crimes_with_boundaries.groupby('BEAT_CODE').size()
grouped_crimes.head()
df = grouped_crimes.to_frame().reset_index()
df.columns = ['BEAT_CODE', 'CrimeCount']
df.head()
final_result = boundaries.merge(df, on='BEAT_CODE')
final_result.head()
```
* Choropleth Map of the Crime Count
```
fig, ax = plt.subplots(figsize=(12,10))
final_result.plot(column='CrimeCount', scheme='Quantiles', k=5, cmap='YlGnBu', legend=True, ax=ax);
plt.tight_layout()
ax.set_axis_off()
plt.savefig('choroplethmap.png')
plt.title('Crimes Choropleth Map ')
plt.show()
```
### Global Spatial Autocorrelation
```
# Create y variable values
y = final_result['CrimeCount'].values
# Get weights (Queen contiguity); these must be built before computing the spatial lag
wq = Queen.from_dataframe(final_result)
wq.transform = 'r'
# Spatial lag
ylag = weights.lag_spatial(wq, y)
final_result['ylag'] = ylag
moran = Moran(y, wq)
moran.I
from splot.esda import plot_moran
plot_moran(moran, zstandard=True, figsize=(10,4))
plt.tight_layout()
plt.savefig('moronPlot.png')
plt.show()
moran.p_sim
```
## Visualizing Local Autocorrelation with splot - Hot Spots, Cold Spots and Spatial Outliers
```
# calculate Moran_Local and plot
moran_loc = Moran_Local(y, wq)  # use the Queen weights built above
fig, ax = moran_scatterplot(moran_loc)
plt.savefig('moron_local.png')
plt.show()
fig, ax = moran_scatterplot(moran_loc, p=0.05)
plt.show()
lisa_cluster(moran_loc, final_result, p=0.05, figsize = (10,8))
plt.tight_layout()
plt.savefig('lisa_clusters.png')
plt.show()
```
# END
# Chapter 3. A Tour of Machine Learning Classifiers Using Scikit-Learn
**You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it on Google Colab (colab.research.google.com) through the links below.**
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch03/ch03.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />주피터 노트북 뷰어로 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch03/ch03.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
</table>
`watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the next cell and run it.
```
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,pandas,matplotlib,sklearn
```
# First steps with scikit-learn
Load the iris dataset from scikit-learn. The third column is the petal length and the fourth column is the petal width. The classes are already converted to integer labels: 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
```
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
print('클래스 레이블:', np.unique(y))
```
Split the data into 70% training data and 30% test data:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=1, stratify=y)
print('y의 레이블 카운트:', np.bincount(y))
print('y_train의 레이블 카운트:', np.bincount(y_train))
print('y_test의 레이블 카운트:', np.bincount(y_test))
```
Standardize the features:
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
```
## Training a perceptron with scikit-learn
We will reuse the `plot_decision_region` function from Chapter 2:
```
from sklearn.linear_model import Perceptron
ppn = Perceptron(max_iter=40, eta0=0.1, tol=1e-3, random_state=1)
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('잘못 분류된 샘플 개수: %d' % (y_test != y_pred).sum())
from sklearn.metrics import accuracy_score
print('정확도: %.2f' % accuracy_score(y_test, y_pred))
print('정확도: %.2f' % ppn.score(X_test_std, y_test))
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# 마커와 컬러맵을 설정합니다.
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# 결정 경계를 그립니다.
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
# 테스트 샘플을 부각하여 그립니다.
if test_idx:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
edgecolor='black',
alpha=1.0,
linewidth=1,
marker='o',
s=100,
label='test set')
```
Train the perceptron model with the standardized training data:
```
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X=X_combined_std, y=y_combined,
classifier=ppn, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
# Modeling class probabilities with logistic regression
### Understanding logistic regression and conditional probabilities
```
import matplotlib.pyplot as plt
import numpy as np
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y 축의 눈금과 격자선
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.tight_layout()
plt.show()
```
### Learning the weights of the logistic cost function
```
def cost_1(z):
return - np.log(sigmoid(z))
def cost_0(z):
return - np.log(1 - sigmoid(z))
z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)
c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if y=1')
c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')
plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel('$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
class LogisticRegressionGD(object):
"""경사 하강법을 사용한 로지스틱 회귀 분류기
매개변수
------------
eta : float
학습률 (0.0과 1.0 사이)
n_iter : int
훈련 데이터셋 반복 횟수
random_state : int
가중치 무작위 초기화를 위한 난수 생성기 시드
속성
-----------
w_ : 1d-array
학습된 가중치
cost_ : list
에포크마다 누적된 로지스틱 비용 함수 값
"""
def __init__(self, eta=0.05, n_iter=100, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""훈련 데이터 학습
매개변수
----------
X : {array-like}, shape = [n_samples, n_features]
n_samples 개의 샘플과 n_features 개의 특성으로 이루어진 훈련 데이터
y : array-like, shape = [n_samples]
타깃값
반환값
-------
self : object
"""
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
net_input = self.net_input(X)
output = self.activation(net_input)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
# 오차 제곱합 대신 로지스틱 비용을 계산합니다.
cost = -y.dot(np.log(output)) - ((1 - y).dot(np.log(1 - output)))
self.cost_.append(cost)
return self
def net_input(self, X):
"""최종 입력 계산"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, z):
"""로지스틱 시그모이드 활성화 계산"""
return 1. / (1. + np.exp(-np.clip(z, -250, 250)))
def predict(self, X):
"""단위 계단 함수를 사용하여 클래스 레이블을 반환합니다"""
return np.where(self.net_input(X) >= 0.0, 1, 0)
# 다음과 동일합니다.
# return np.where(self.activation(self.net_input(X)) >= 0.5, 1, 0)
X_train_01_subset = X_train[(y_train == 0) | (y_train == 1)]
y_train_01_subset = y_train[(y_train == 0) | (y_train == 1)]
lrgd = LogisticRegressionGD(eta=0.05, n_iter=1000, random_state=1)
lrgd.fit(X_train_01_subset,
y_train_01_subset)
plot_decision_regions(X=X_train_01_subset,
y=y_train_01_subset,
classifier=lrgd)
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
### Training a logistic regression model with scikit-learn
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto', C=100.0, random_state=1)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=lr, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
lr.predict_proba(X_test_std[:3, :])
lr.predict_proba(X_test_std[:3, :]).sum(axis=1)
lr.predict_proba(X_test_std[:3, :]).argmax(axis=1)
lr.predict(X_test_std[:3, :])
lr.predict(X_test_std[0, :].reshape(1, -1))
```
### Avoiding overfitting with regularization
```
weights, params = [], []
for c in np.arange(-5, 5):
lr = LogisticRegression(solver='liblinear', multi_class='auto', C=10.**c, random_state=1)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10.**c)
weights = np.array(weights)
plt.plot(params, weights[:, 0],
label='petal length')
plt.plot(params, weights[:, 1], linestyle='--',
label='petal width')
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
plt.show()
```
# Maximum margin classification with support vector machines
```
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=1)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std,
y_combined,
classifier=svm,
test_idx=range(105, 150))
plt.scatter(svm.dual_coef_[0, :], svm.dual_coef_[1, :])
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
svm.coef_
svm.dual_coef_, svm.dual_coef_.shape
```
## Alternative implementations in scikit-learn
```
from sklearn.linear_model import SGDClassifier
ppn = SGDClassifier(loss='perceptron')
lr = SGDClassifier(loss='log')
svm = SGDClassifier(loss='hinge')
```
# Solving nonlinear problems with a kernel SVM
```
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(1)
X_xor = np.random.randn(200, 2)
y_xor = np.logical_xor(X_xor[:, 0] > 0,
X_xor[:, 1] > 0)
y_xor = np.where(y_xor, 1, -1)
plt.scatter(X_xor[y_xor == 1, 0],
X_xor[y_xor == 1, 1],
c='b', marker='x',
label='1')
plt.scatter(X_xor[y_xor == -1, 0],
X_xor[y_xor == -1, 1],
c='r',
marker='s',
label='-1')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
```
## Using the kernel trick to find separating hyperplanes in high-dimensional space
```
svm = SVC(kernel='rbf', random_state=1, gamma=0.10, C=10.0)
svm.fit(X_xor, y_xor)
plot_decision_regions(X_xor, y_xor,
classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
svm = SVC(kernel='rbf', random_state=1, gamma=0.2, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=range(105, 150))
plt.scatter(svm.dual_coef_[0,:], svm.dual_coef_[1,:])
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
svm = SVC(kernel='rbf', random_state=1, gamma=100.0, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
# Decision tree learning
## Maximizing information gain - making the most of your resources
```
import matplotlib.pyplot as plt
import numpy as np
def gini(p):
return p * (1 - p) + (1 - p) * (1 - (1 - p))
def entropy(p):
return - p * np.log2(p) - (1 - p) * np.log2((1 - p))
def error(p):
return 1 - np.max([p, 1 - p])
x = np.arange(0.0, 1.0, 0.01)
ent = [entropy(p) if p != 0 else None for p in x]
sc_ent = [e * 0.5 if e else None for e in ent]
err = [error(i) for i in x]
fig = plt.figure()
ax = plt.subplot(111)
for i, lab, ls, c, in zip([ent, sc_ent, gini(x), err],
['Entropy', 'Entropy (scaled)',
'Gini Impurity', 'Misclassification Error'],
['-', '-', '--', '-.'],
['black', 'lightgray', 'red', 'green', 'cyan']):
line = ax.plot(x, i, label=lab, linestyle=ls, lw=2, color=c)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15),
ncol=5, fancybox=True, shadow=False)
ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--')
ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--')
plt.ylim([0, 1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.show()
```
## Building a decision tree
```
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='gini',
max_depth=4,
random_state=1)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
from pydotplus import graph_from_dot_data
from sklearn.tree import export_graphviz
dot_data = export_graphviz(tree,
filled=True,
rounded=True,
class_names=['Setosa',
'Versicolor',
'Virginica'],
feature_names=['petal length',
'petal width'],
out_file=None)
graph = graph_from_dot_data(dot_data)
graph.write_png('tree.png')
```

## Combining multiple decision trees via random forests
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='gini',
n_estimators=25,
random_state=1,
n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined,
classifier=forest, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
# K-nearest neighbors: a lazy learning algorithm
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5,
p=2,
metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
<center>
<img src="img/scikit-learn-logo.png" width="40%" />
<br />
<h1>Robust and calibrated estimators with Scikit-Learn</h1>
<br /><br />
Gilles Louppe (<a href="https://twitter.com/glouppe">@glouppe</a>)
<br /><br />
New York University
</center>
```
# Global imports and settings
# Matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["figure.max_open_warning"] = -1
# Print options
import numpy as np
np.set_printoptions(precision=3)
# Slideshow
from notebook.services.config import ConfigManager
cm = ConfigManager()
cm.update('livereveal', {'width': 1440, 'height': 768, 'scroll': True, 'theme': 'simple'})
# Silence warnings
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
# Utils
from robustness import plot_surface
from robustness import plot_outlier_detector
%%javascript
Reveal.addEventListener("slidechanged", function(event){ window.location.hash = "header"; });
```
# Motivation
_In theory,_
- Samples $x$ are drawn from a distribution $P$;
- As data increases, convergence towards the optimal model is guaranteed.
_In practice,_
- A few samples may be distant from other samples:
- either because they correspond to rare observations,
- or because they are due to experimental errors;
- Because data is finite, outliers might strongly affect the resulting model.
_Today's goal:_ build models that are robust to outliers!
# Outline
* Motivation
* Novelty and anomaly detection
* Ensembling for robustness
* From least squares to least absolute deviances
* Calibration
# Novelty and anomaly detection
_Novelty detection:_
- Training data is not polluted by outliers, and we are interested in detecting anomalies in new observations.
_Outlier detection:_
- Training data contains outliers, and we need to fit the central mode of the training data, ignoring the deviant observations.
## API
```
# Unsupervised learning
estimator.fit(X_train) # no "y_train"
# Detecting novelty or outliers
y_pred = estimator.predict(X_test) # inliers == 1, outliers == -1
y_score = estimator.decision_function(X_test) # outliers == highest scores
# Generate data
from sklearn.datasets import make_blobs
inliers, _ = make_blobs(n_samples=200, centers=2, random_state=1)
outliers = np.random.rand(50, 2)
outliers = np.min(inliers, axis=0) + (np.max(inliers, axis=0) - np.min(inliers, axis=0)) * outliers
X = np.vstack((inliers, outliers))
ground_truth = np.ones(len(X), dtype=int)  # np.int is deprecated in recent NumPy; use the builtin int
ground_truth[-len(outliers):] = 0
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
# Unsupervised learning
estimator = OneClassSVM(nu=0.4, kernel="rbf", gamma=0.1)
# clf = EllipticEnvelope(contamination=.1)
# clf = IsolationForest(max_samples=100)
estimator.fit(X)
plot_outlier_detector(estimator, X, ground_truth)
```
# Ensembling for robustness
## Bias-variance decomposition
__Theorem.__ For the _squared error loss_, the bias-variance decomposition of the expected
generalization error at $X=\mathbf{x}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\varphi_{\cal L}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
<center>
<img src="img/bv.png" width="50%" />
</center>
## Variance and robustness
- Low variance implies robustness to outliers
- High variance implies sensitivity to data pecularities
## Ensembling reduces variance
__Theorem.__ For the _squared error loss_, the bias-variance decomposition of the expected generalization error at $X=x$ of an ensemble of $M$ randomized models $\varphi_{{\cal L},\theta_m}$ is
$$
\mathbb{E}_{\cal L} \{ Err(\psi_{{\cal L},\theta_1,\dots,\theta_M}(\mathbf{x})) \} = \text{noise}(\mathbf{x}) + \text{bias}^2(\mathbf{x}) + \text{var}(\mathbf{x})
$$
where
\begin{align*}
\text{noise}(\mathbf{x}) &= Err(\varphi_B(\mathbf{x})), \\
\text{bias}^2(\mathbf{x}) &= (\varphi_B(\mathbf{x}) - \mathbb{E}_{{\cal L},\theta} \{ \varphi_{{\cal L},\theta}(\mathbf{x}) \} )^2, \\
\text{var}(\mathbf{x}) &= \rho(\mathbf{x}) \sigma^2_{{\cal L},\theta}(\mathbf{x}) + \frac{1 - \rho(\mathbf{x})}{M} \sigma^2_{{\cal L},\theta}(\mathbf{x}).
\end{align*}
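As a quick numerical sanity check of the independent-model case ($\rho(\mathbf{x}) = 0$), averaging $M$ independent noisy estimates shrinks the variance by roughly a factor of $M$ (illustrative simulation only):
```
import numpy as np

rng = np.random.RandomState(0)
M, n_repeats = 10, 10000

# Each "model" is a noisy estimate of the same quantity (true value 0).
single = rng.normal(size=n_repeats)                      # one model
ensemble = rng.normal(size=(n_repeats, M)).mean(axis=1)  # average of M independent models

print("var(single model)   = %.3f" % single.var())
print("var(ensemble of %d) = %.3f  (about 1/M of the above)" % (M, ensemble.var()))
```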
```
# Load data
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [0, 1]]
y = iris.target
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X, y)
plot_surface(clf, X, y)
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
plot_surface(clf, X, y)
```
# From least squares to least absolute deviances
## Robust learning
- Most methods minimize the mean squared error $\frac{1}{N} \sum_i (y_i - \varphi(x_i))^2$
- By definition, squaring residuals gives emphasis to large residuals.
- Outliers are thus very likely to have a significant effect.
- A robust alternative is to minimize instead the mean absolute deviation $\frac{1}{N} \sum_i |y_i - \varphi(x_i)|$
    - Large residuals are therefore given much less emphasis (a small numeric illustration follows).
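The numeric illustration mentioned above: one gross outlier dominates the squared-error average far more than the absolute-deviation average (the residual values are made up):
```
import numpy as np

# Residuals of a hypothetical model, with one gross outlier.
residuals = np.array([1.0, -2.0, 0.5, 1.5, -100.0])

print("mean squared error      :", np.mean(residuals ** 2))     # dominated by the outlier
print("mean absolute deviation :", np.mean(np.abs(residuals)))  # far less affected
```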
```
# Generate data
from sklearn.datasets import make_regression
n_outliers = 3
X, y, coef = make_regression(n_samples=100, n_features=1, n_informative=1, noise=10,
coef=True, random_state=0)
np.random.seed(1)
X[-n_outliers:] = 1 + 0.25 * np.random.normal(size=(n_outliers, 1))
y[-n_outliers:] = -100 + 10 * np.random.normal(size=n_outliers)
plt.scatter(X[:-n_outliers], y[:-n_outliers], color="b")
plt.scatter(X[-n_outliers:], y[-n_outliers:], color="r")
plt.xlim(-3, 3)
plt.ylim(-150, 120)
plt.show()
# Fit with least squares vs. least absolute deviances
from sklearn.ensemble import GradientBoostingRegressor
clf_ls = GradientBoostingRegressor(loss="ls")
clf_lad = GradientBoostingRegressor(loss="lad")
clf_ls.fit(X, y)
clf_lad.fit(X, y)
# Plot
X_test = np.linspace(-5, 5).reshape(-1, 1)
plt.scatter(X[:-n_outliers], y[:-n_outliers], color="b")
plt.scatter(X[-n_outliers:], y[-n_outliers:], color="r")
plt.plot(X_test, clf_ls.predict(X_test), "g", label="Least squares")
plt.plot(X_test, clf_lad.predict(X_test), "y", label="Least absolute deviations")
plt.xlim(-3, 3)
plt.ylim(-150, 120)
plt.legend()
plt.show()
```
## Robust scaling
- Standardization of a dataset is a common requirement for many machine learning estimators.
- Typically this is done by removing the mean and scaling to unit variance.
- For similar reasons as before, outliers can influence the sample mean / variance in a negative way.
- In such cases, the median and the interquartile range often give better results.
```
# Generate data
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
X, y = make_blobs(n_samples=100, centers=[(0, 0), (-1, 0)], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
X_train[0, 0] = -1000 # a fairly large outlier
# Scale data
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
standard_scaler = StandardScaler()
Xtr_s = standard_scaler.fit_transform(X_train)
Xte_s = standard_scaler.transform(X_test)
robust_scaler = RobustScaler()
Xtr_r = robust_scaler.fit_transform(X_train)
Xte_r = robust_scaler.transform(X_test)
# Plot data
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
ax[0].scatter(X_train[:, 0], X_train[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[1].scatter(Xtr_s[:, 0], Xtr_s[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[2].scatter(Xtr_r[:, 0], Xtr_r[:, 1], color=np.where(y_train == 0, 'r', 'b'))
ax[0].set_title("Unscaled data")
ax[1].set_title("After standard scaling (zoomed in)")
ax[2].set_title("After robust scaling (zoomed in)")
# for the scaled data, we zoom in to the data center (outlier can't be seen!)
for a in ax[1:]:
a.set_xlim(-3, 3)
a.set_ylim(-3, 3)
plt.show()
# Classify using kNN
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(Xtr_s, y_train)
acc_s = knn.score(Xte_s, y_test)
print("Test set accuracy using standard scaler: %.3f" % acc_s)
knn.fit(Xtr_r, y_train)
acc_r = knn.score(Xte_r, y_test)
print("Test set accuracy using robust scaler: %.3f" % acc_r)
```
# Calibration
- In classification, you often want to predict not only the class label, but also the associated probability.
- However, not all classifiers provide well-calibrated probabilities.
- Thus, a separate calibration of predicted probabilities is often desirable as a postprocessing step.
```
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
X, y = make_blobs(n_samples=10000, n_features=2, cluster_std=1.0,
centers=[(-5, -5), (0, 0), (5, 5)], shuffle=False)
y[:len(X) // 2] = 0
y[len(X) // 2:] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
# Plot
for this_y, color in zip([0, 1], ["r", "b"]):
this_X = X_train[y_train == this_y]
plt.scatter(this_X[:, 0], this_X[:, 1], c=color, alpha=0.2, label="Class %s" % this_y)
plt.legend(loc="best")
plt.title("Data")
plt.show()
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV
# Without calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# With isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_isotonic.fit(X_train, y_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# Plot
order = np.lexsort((prob_pos_clf, ))
plt.plot(prob_pos_clf[order], 'r', label='No calibration')
plt.plot(prob_pos_isotonic[order], 'b', label='Isotonic calibration')
plt.plot(np.linspace(0, y_test.size, 51)[1::2], y_test[order].reshape(25, -1).mean(1), 'k--', label=r'Empirical')
plt.xlabel("Instances sorted according to predicted probability "
"(uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
plt.ylim([-0.05, 1.05])
plt.show()
```
# Summary
For robust and calibrated estimators:
- remove outliers before training;
- reduce variance by ensembling estimators;
- drive your analysis with loss functions that are robust to outliers;
- avoid the squared error loss!
- calibrate the output of your classifier if probabilities are important for your problem.
```
questions?
```
# Segmented deformable mirrors
We will use segmented deformable mirrors and simulate the PSFs that result from segment pistons and tilts. We will compare this functionality against Poppy, another optical propagation package.
First we'll import all packages.
```
import os
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
import hcipy
import poppy
# Parameters for the pupil function
pupil_diameter = 0.019725 # m
gap_size = 90e-6 # m
num_rings = 3
segment_flat_to_flat = (pupil_diameter - (2 * num_rings + 1) * gap_size) / (2 * num_rings + 1)
focal_length = 1 # m
# Parameters for the simulation
num_pix = 1024
wavelength = 638e-9
num_airy = 20
sampling = 4
norm = False
```
## Instantiate the segmented mirrors
### HCIPy SM: `hsm`
We need to generate a pupil grid for the aperture, and a focal grid and propagator for the focal plane images after the DM.
```
# HCIPy grids and propagator
pupil_grid = hcipy.make_pupil_grid(dims=num_pix, diameter=pupil_diameter)
focal_grid = hcipy.make_focal_grid(sampling, num_airy,
pupil_diameter=pupil_diameter,
reference_wavelength=wavelength,
focal_length=focal_length)
focal_grid = focal_grid.shifted(focal_grid.delta / 2)
prop = hcipy.FraunhoferPropagator(pupil_grid, focal_grid, focal_length)
```
We generate a segmented aperture for the segmented mirror. For convenience, we'll use the HiCAT pupil without spiders. We'll use supersampling to better resolve the segment gaps.
```
aper, segments = hcipy.make_hexagonal_segmented_aperture(num_rings,
segment_flat_to_flat,
gap_size,
starting_ring=1,
return_segments=True)
aper = hcipy.evaluate_supersampled(aper, pupil_grid, 1)
segments = hcipy.evaluate_supersampled(segments, pupil_grid, 1)
plt.title('HCIPy aperture')
hcipy.imshow_field(aper, cmap='gray')
```
Now we make the segmented mirror. In order to be able to apply the SM to a plane, that plane needs to be a `Wavefront`, which combines a `Field` - here the aperture - with a wavelength, here `wavelength`.
In this example, the SM is still completely flat and has no effect on the pupil yet, so we don't actually have to apply it, although of course we could.
```
# Instantiate the segmented mirror
hsm = hcipy.SegmentedDeformableMirror(segments)
# Make a pupil plane wavefront from aperture
wf = hcipy.Wavefront(aper, wavelength)
# Apply SM if you want to
wf = hsm(wf)
plt.figure(figsize=(8, 8))
plt.title('Wavefront intensity at HCIPy SM')
hcipy.imshow_field(wf.intensity, cmap='gray')
plt.colorbar()
plt.show()
```
### Poppy SM: `psm`
We'll do the same for Poppy.
```
psm = poppy.dms.HexSegmentedDeformableMirror(name='Poppy SM',
rings=3,
flattoflat=segment_flat_to_flat*u.m,
gap=gap_size*u.m,
center=False)
# Display the transmission and phase of the poppy sm
plt.figure(figsize=(8, 8))
psm.display(what='amplitude')
```
## Create reference images
### HCIPy reference image
We need to apply the SM to the wavefront in the pupil plane and then propagate it to the image plane.
```
# Apply SM to pupil plane wf
wf_sm = hsm(wf)
# Propagate from SM to image plane
im_ref_hc = prop(wf_sm)
# Display intensity and phase in image plane
plt.figure(figsize=(8, 8))
plt.suptitle('Image plane after HCIPy SM')
# Get normalization factor for HCIPy reference image
norm_hc = np.max(im_ref_hc.intensity)
hcipy.imshow_psf(im_ref_hc, normalization='peak')
```
### Poppy reference image
For the Poppy propagation, we need to make an optical system of which we then calculate the PSF. We match HCIPy's image scale with Poppy.
```
# Make an optical system with the Poppy SM and a detector
psm.flatten()
pxscle = np.degrees(wavelength / pupil_diameter) * 3600 / sampling
fovarc = pxscle * 160
osys = poppy.OpticalSystem()
osys.add_pupil(psm)
osys.add_detector(pixelscale=pxscle, fov_arcsec=fovarc, oversample=1)
# Calculate the PSF
psf = osys.calc_psf(wavelength)
plt.figure(figsize=(8, 8))
poppy.display_psf(psf, vmin=1e-9, vmax=0.1)
# Get the PSF as an array
im_ref_pop = psf[0].data
print('Poppy PSF shape: {}'.format(im_ref_pop.shape))
# Get normalization from Poppy reference image
norm_pop = np.max(im_ref_pop)
```
### Both reference images side-by-side
```
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
hcipy.imshow_field(np.log10(im_ref_hc.intensity / norm_hc), vmin=-10, cmap='inferno')
plt.title('HCIPy reference PSF')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(np.log10(im_ref_pop / norm_pop), origin='lower', vmin=-10, cmap='inferno')
plt.title('Poppy reference PSF')
plt.colorbar()
ref_dif = im_ref_pop / norm_pop - im_ref_hc.intensity.shaped / norm_hc
lims = np.max(np.abs(ref_dif))
plt.figure(figsize=(15, 6))
plt.suptitle(f'Maximum relative error: {lims:0.2g} relative to the peak intensity')
plt.subplot(1, 2, 1)
plt.imshow(ref_dif, origin='lower', vmin=-lims, vmax=lims, cmap='RdBu')
plt.title('Full image')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(ref_dif[60:100,60:100], origin='lower', vmin=-lims, vmax=lims, cmap='RdBu')
plt.title('Zoomed in')
plt.colorbar()
```
## Applying aberrations
```
# Define function from rad of phase to m OPD
def aber_to_opd(aber_rad, wavelength):
aber_m = aber_rad * wavelength / (2 * np.pi)
return aber_m
aber_rad = 4.0
print('Aberration: {} rad'.format(aber_rad))
print('Aberration: {} m'.format(aber_to_opd(aber_rad, wavelength)))
# Poppy and HCIPy have a different way of indexing segments
# Figure out which index to poke on which mirror
poppy_index_to_hcipy_index = []
for n in range(1, num_rings + 1):
base = list(range(3 * (n - 1) * n + 1, 3 * n * (n + 1) + 1))
poppy_index_to_hcipy_index.extend(base[2 * n::-1])
poppy_index_to_hcipy_index.extend(base[:2 * n:-1])
poppy_index_to_hcipy_index = {j: i for i, j in enumerate(poppy_index_to_hcipy_index) if j is not None}
hcipy_index_to_poppy_index = {j: i for i, j in poppy_index_to_hcipy_index.items()}
# Flatten both SMs just to be sure
hsm.flatten()
psm.flatten()
# Poking segment 35 and 25
for i in [35, 25]:
hsm.set_segment_actuators(i, aber_to_opd(aber_rad, wavelength) / 2, 0, 0)
psm.set_actuator(hcipy_index_to_poppy_index[i], aber_to_opd(aber_rad, wavelength) * u.m, 0, 0)
# Display both segmented mirrors in OPD
# HCIPy
plt.figure(figsize=(8,8))
plt.title('OPD for HCIPy SM')
hcipy.imshow_field(hsm.surface * 2, mask=aper, cmap='RdBu_r', vmin=-5e-7, vmax=5e-7)
plt.colorbar()
plt.show()
# Poppy
plt.figure(figsize=(8,8))
psm.display(what='opd')
plt.show()
```
### Show focal plane images
```
### HCIPy
# Apply SM to pupil plane wf
wf_fp_pistoned = hsm(wf)
# Propagate from SM to image plane
im_pistoned_hc = prop(wf_fp_pistoned)
### Poppy
# Calculate the PSF
psf = osys.calc_psf(wavelength)
# Get the PSF as an array
im_pistoned_pop = psf[0].data
### Display intensity of both cases image plane
plt.figure(figsize=(15, 6))
plt.suptitle(r'Image plane after SM for $\phi$ = ' + str(aber_rad) + ' rad')
plt.subplot(1, 2, 1)
hcipy.imshow_field(np.log10(im_pistoned_hc.intensity / norm_hc), cmap='inferno', vmin=-9)
plt.title('HCIPy pistoned pair')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(np.log10(im_pistoned_pop / norm_pop), origin='lower', cmap='inferno', vmin=-9)
plt.title('Poppy pistoned pair')
plt.colorbar()
```
## A mix of piston, tip and tilt (PTT)
```
aber_rad_tt = 200e-6
aber_rad_p = 1.8
opd_piston = aber_to_opd(aber_rad_p, wavelength)
### Put aberrations on both SMs
# Flatten both SMs
hsm.flatten()
psm.flatten()
## PISTON
for i in [19, 28, 23, 16]:
hsm.set_segment_actuators(i, opd_piston / 2, 0, 0)
psm.set_actuator(hcipy_index_to_poppy_index[i], opd_piston * u.m, 0, 0)
for i in [3, 35, 30, 8]:
hsm.set_segment_actuators(i, -0.5 * opd_piston / 2, 0, 0)
psm.set_actuator(hcipy_index_to_poppy_index[i], -0.5 * opd_piston * u.m, 0, 0)
for i in [14, 18, 1, 32, 12]:
hsm.set_segment_actuators(i, 0.3 * opd_piston / 2, 0, 0)
psm.set_actuator(hcipy_index_to_poppy_index[i], 0.3 * opd_piston * u.m, 0, 0)
## TIP and TILT
for i in [2, 5, 11, 15, 22]:
hsm.set_segment_actuators(i, 0, aber_rad_tt / 2, 0.3 * aber_rad_tt / 2)
psm.set_actuator(hcipy_index_to_poppy_index[i], 0, aber_rad_tt, 0.3 * aber_rad_tt)
for i in [4, 6, 26]:
hsm.set_segment_actuators(i, 0, -aber_rad_tt / 2, 0)
psm.set_actuator(hcipy_index_to_poppy_index[i], 0, -aber_rad_tt, 0)
for i in [34, 31, 7]:
hsm.set_segment_actuators(i, 0, 0, 1.3 * aber_rad_tt / 2)
psm.set_actuator(hcipy_index_to_poppy_index[i], 0, 0, 1.3 * aber_rad_tt)
# Display both segmented mirrors in OPD
# HCIPy
plt.figure(figsize=(8,8))
plt.title('OPD for HCIPy SM')
hcipy.imshow_field(hsm.surface * 2, mask=aper, cmap='RdBu_r', vmin=-5e-7, vmax=5e-7)
plt.colorbar()
plt.show()
# Poppy
plt.figure(figsize=(8,8))
psm.display(what='opd')
plt.show()
### Propagate to image plane
## HCIPy
# Propagate from pupil plane through SM to image plane
im_pistoned_hc = prop(hsm(wf)).intensity
## Poppy
# Calculate the PSF
psf = osys.calc_psf(wavelength)
# Get the PSF as an array
im_pistoned_pop = psf[0].data
### Display intensity of both cases image plane
plt.figure(figsize=(18, 9))
plt.suptitle('Image plane after SM for random arrangement')
plt.subplot(1, 2, 1)
hcipy.imshow_field(np.log10(im_pistoned_hc / norm_hc), cmap='inferno', vmin=-9)
plt.title('HCIPy random arrangement')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(np.log10(im_pistoned_pop / norm_pop), origin='lower', cmap='inferno', vmin=-9)
plt.title('Poppy random arrangement')
plt.colorbar()
plt.show()
```
The most common analytical task is to take a bunch of numbers in a dataset and summarise them with fewer numbers, preferably a single number. Enter the 'average': sum all the numbers and divide by the count. In mathematical terms this is known as the 'arithmetic mean', and it doesn't always summarise a dataset well. This post looks at the other ways we can summarise a dataset.
> The proper term for this method of summarising is determining the central tendency of the dataset.
## Generate The Data
First step is to generate a dataset to summarise, to do this we use the `random` package from the standard library. Using matplotlib we can plot our 'number line'.
```
import random
import typing
random.seed(42)
dataset: typing.List = []
for _ in range(50):
dataset.append(random.randint(1,100))
print(dataset)
import matplotlib.pyplot as plt
def plot_1d_data(arr:typing.List, val:float, **kwargs):
constant_list = [val for _ in range(len(arr))]
plt.plot(arr, constant_list, 'x', **kwargs)
plot_1d_data(dataset,5)
```
## Median
The median is the middle number of the sorted list, in the quite literal sense. For example the median of 1,2,3,4,5 is 3; as is the same for 3,2,4,1,5. The median can be more descriptive of the dataset over the arithmetic mean whenever there are significant outliers in the data that skew the arithmetic mean.
> If there is an even amount of numbers in the data, the median becomes the arithmetic mean of the two middle numbers. For example, the median for 1,2,3,4,5,6 is 3.5 ((3+4)/2).
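A quick check of the even-length case (illustrative snippet, not part of the original post):
```
import statistics
print(statistics.median([1, 2, 3, 4, 5]))     # -> 3
print(statistics.median([1, 2, 3, 4, 5, 6]))  # -> 3.5, the mean of the two middle values
```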
### When to use
Use the median whenever there is a large spread of numbers across the domain.
```
import statistics
print(f"Median: {statistics.median(dataset)}")
plot_1d_data(dataset,5)
plt.plot(statistics.median(dataset),5,'x',color='red',markersize=50)
plt.annotate('Median',(statistics.median(dataset),5),(statistics.median(dataset),5.1),arrowprops={'width':0.1})
```
## Mode
The mode of a dataset is the number that appears most in the dataset. Note that this is the least used measure of central tendency.
### When to use
Mode is best used with nominal data, meaning if the data you are trying to summarise has no quantitative metrics behind it, then mode would be useful. Eg, if you are looking through textual data, finding the most used word is a significant way of summarising the data.
```
import statistics
print(f"Mode: {statistics.mode(dataset)}")
plot_1d_data(dataset,5)
plt.plot(statistics.mode(dataset),5,'x',color='red',markersize=50)
plt.annotate('Mode',(statistics.mode(dataset),5),(statistics.mode(dataset),5.1),arrowprops={'width':0.1})
```
## Arithmetic Mean
This is the most used way of representing central tendency. It is done by summing all the points in the dataset, and then dividing by the number of points (to scale back into the original domain). This is the best way of representing central tendency if the data does not contain outliers that will skew the outcome (which can be overcome by normalisation).
### When to use
If the dataset is normally distributed, this is the ideal measure.
```
def arithmetic_mean(dataset: typing.List):
return sum(dataset) / len(dataset)
print(f"Arithmetic Mean: {arithmetic_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(arithmetic_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Arithmetic Mean',(arithmetic_mean(dataset),5),(arithmetic_mean(dataset),5.1),arrowprops={'width':0.1})
```
## Geometric Mean
The geometric mean is calculated by multiplying all numbers in a set, and then calculating the `nth` root of the product, where n is the count of numbers. Since this uses the `multiplicative` nature of the dataset to find a summary figure, rather than the `additive` nature used by the arithmetic mean, it is more suitable for datasets with a multiplicative relationship.
> We calculate the nth root by raising to the power of the reciprocal.
### When to use
If the dataset has a multiplicative nature (eg, growth in population, interest rates, etc), then the geometric mean will be a more suitable way of summarising the dataset. The geometric mean is also useful when trying to summarise data with differing scales or units, as the geometric mean is technically unitless.
```
def multiply_list(dataset:typing.List) :
# Multiply elements one by one
result = 1
for x in dataset:
result = result * x
return result
def geometric_mean(dataset:typing.List):
    # A zero anywhere in the data would make the whole product zero,
    # so shift every value by 1 in that case (a simple, approximate workaround).
    if 0 in dataset:
        dataset = [x + 1 for x in dataset]
    return multiply_list(dataset)**(1/len(dataset))
print(f"Geometric Mean: {geometric_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(geometric_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Geometric Mean',(geometric_mean(dataset),5),(geometric_mean(dataset),5.1),arrowprops={'width':0.1})
```
## Harmonic Mean
Harmonic mean is calculated by:
- taking the reciprocal of all the numbers in the set
- calculating the arithmetic mean of this reciprocal set
- taking the reciprocal of the calculated mean
### When to use
The harmonic mean is very useful when trying to summarise datasets that are made up of rates or ratios, for example when determining the average rate of travel over a trip with many legs.
```
def reciprocal_list(dataset:typing.List):
reciprocal_list = []
for x in dataset:
reciprocal_list.append(1/x)
return reciprocal_list
def harmonic_mean(dataset:typing.List):
return 1/arithmetic_mean(reciprocal_list(dataset))
print(f"Harmonic Mean: {harmonic_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(harmonic_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Harmonic Mean',(harmonic_mean(dataset),5),(harmonic_mean(dataset),5.1),arrowprops={'width':0.1})
print(f"Mode: {statistics.mode(dataset)}")
print(f"Median: {statistics.median(dataset)}")
print(f"Arithmetic Mean: {arithmetic_mean(dataset)}")
print(f"Geometric Mean: {geometric_mean(dataset)}")
print(f"Harmonic Mean: {harmonic_mean(dataset)}")
```
> Thank you to Andrew Goodwin over on Twitter: <https://twitter.com/ndrewg/status/1296773835585236997> for suggesting some extremely interesting further reading on [Anscombe's Quartet](https://en.m.wikipedia.org/wiki/Anscombe%27s_quartet) and [The Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats), which are examples of why the choice of summary statistic matters - exactly the point of this post!
```
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from shapely.geometry import Point
from sklearn.neighbors import KNeighborsRegressor
import rasterio as rst
from rasterstats import zonal_stats
%matplotlib inline
path = r"[CHANGE THIS PATH]\Wales\\"
data = pd.read_csv(path + "final_data.csv", index_col = 0)
```
# Convert to GeoDataFrame
```
geo_data = gpd.GeoDataFrame(data = data,
crs = {'init':'epsg:27700'},
geometry = data.apply(lambda geom: Point(geom['oseast1m'],geom['osnrth1m']),axis=1))
geo_data.head()
f, (ax1, ax2, ax3) = plt.subplots(1,3, figsize = (16,6), sharex = True, sharey = True)
geo_data[geo_data['Year'] == 2016].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax1);
geo_data[geo_data['Year'] == 2017].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax2);
geo_data[geo_data['Year'] == 2018].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax3);
```
## k-nearest neighbour interpolation
Non-parametric interpolation of loneliness based on local set of _k_ nearest neighbours for each cell in our evaluation grid.
Effectively becomes an inverse distance weighted (idw) interpolation when weights are set to be distance based.
```
def idw_model(k, p):
    def _inv_distance_index(weights, index=p):
        # If any neighbour is at zero distance, give it all the weight;
        # otherwise weight neighbours by inverse distance to the power `index`.
        return (weights == 0).astype(int) if np.any(weights == 0) else 1. / weights**index
    return KNeighborsRegressor(k, weights=_inv_distance_index)
def grid(xmin, xmax, ymin, ymax, cellsize):
# Set x and y ranges to accommodate cellsize
xmin = (xmin // cellsize) * cellsize
xmax = -(-xmax // cellsize) * cellsize # ceiling division
ymin = (ymin // cellsize) * cellsize
ymax = -(-ymax // cellsize) * cellsize
# Make meshgrid
    x = np.linspace(xmin, xmax, int((xmax - xmin) / cellsize))
    y = np.linspace(ymin, ymax, int((ymax - ymin) / cellsize))
return np.meshgrid(x,y)
def reshape_grid(xx,yy):
return np.append(xx.ravel()[:,np.newaxis],yy.ravel()[:,np.newaxis],1)
def reshape_image(z, xx):
return np.flip(z.reshape(np.shape(xx)),0)
def idw_surface(locations, values, xmin, xmax, ymin, ymax, cellsize, k=5, p=2):
# Make and fit the idw model
idw = idw_model(k,p).fit(locations, values)
# Make the grid to estimate over
xx, yy = grid(xmin, xmax, ymin, ymax, cellsize)
# reshape the grid for estimation
xy = reshape_grid(xx,yy)
# Predict the grid values
z = idw.predict(xy)
# reshape to image array
z = reshape_image(z, xx)
return z
```
## 2016 data
```
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2016][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2016]['loneills'].values
surface2016 = idw_surface(points, vals, 90000,656000,10000,654000,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2016, cmap='Reds')
ax.set_aspect('equal')
```
## 2017 Data
```
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2017][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2017]['loneills'].values
surface2017 = idw_surface(points, vals, 90000,656000,10000,654000,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2017, cmap='Reds')
ax.set_aspect('equal')
```
## 2018 Data
Get minimum and maximum bounds from the data. Round these down (for the minimums) and up (for the maximums) to get the values for `idw_surface()`.
```
print("xmin = ", geo_data['oseast1m'].min(), "\n\r",
"xmax = ", geo_data['oseast1m'].max(), "\n\r",
"ymin = ", geo_data['osnrth1m'].min(), "\n\r",
"ymax = ", geo_data['osnrth1m'].max())
xmin = 175000
xmax = 357000
ymin = 167000
ymax = 393000
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2018][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2018]['loneills'].values
surface2018 = idw_surface(points, vals, xmin,xmax,ymin,ymax,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2018, cmap='Reds')
ax.set_aspect('equal')
```
# Extract Values to MSOAs
Get 2011 MSOAs from the Open Geography Portal: http://geoportal.statistics.gov.uk/
```
# Get MSOAs which we use to aggregate the loneills variable.
#filestring = './Data/MSOAs/Middle_Layer_Super_Output_Areas_December_2011_Full_Clipped_Boundaries_in_England_and_Wales.shp'
filestring = r'[CHANGE THIS PATH]\Data\Boundaries\England and Wales\Middle_Layer_Super_Output_Areas_December_2011_Super_Generalised_Clipped_Boundaries_in_England_and_Wales.shp'
msoas = gpd.read_file(filestring)
msoas = msoas.to_crs({'init':'epsg:27700'})  # assign back, otherwise the reprojection is discarded
# keep the Wales MSOAs
msoas = msoas[msoas['msoa11cd'].str[:1] == 'W'].copy()
# Get GB countries data to use for representation
#gb = gpd.read_file('./Data/GB/Countries_December_2017_Generalised_Clipped_Boundaries_in_UK_WGS84.shp')
#gb = gb.to_crs({'init':'epsg:27700'})
# get England
#eng = gb[gb['ctry17nm'] == 'England'].copy()
# Make affine transform for raster
trans = rst.Affine.from_gdal(xmin-125,250,0,ymax+125,0,-250)
# NB This process is slooow - write bespoke method?
# 2016
#msoa_zones = zonal_stats(msoas['geometry'], surface2016, affine = trans, stats = 'mean', nodata = np.nan)
#msoas['loneills_2016'] = list(map(lambda x: x['mean'] , msoa_zones))
# 2017
#msoa_zones = zonal_stats(msoas['geometry'], surface2017, affine = trans, stats = 'mean', nodata = np.nan)
#msoas['loneills_2017'] = list(map(lambda x: x['mean'] , msoa_zones))
# 2018
msoa_zones = zonal_stats(msoas['geometry'], surface2018, affine = trans, stats = 'mean', nodata = np.nan)
msoas['loneills_2018'] = list(map(lambda x: x['mean'] , msoa_zones))
# Check out the distributions of loneills by MSOA
f, [ax1, ax2, ax3] = plt.subplots(1,3, figsize=(14,5), sharex = True, sharey=True)
#ax1.hist(msoas['loneills_2016'], bins = 30)
#ax2.hist(msoas['loneills_2017'], bins = 30)
ax3.hist(msoas['loneills_2018'], bins = 30)
ax1.set_title("2016")
ax2.set_title("2017")
ax3.set_title("2018");
bins = [-10, -5, -3, -2, -1, 1, 2, 3, 5, 10, 22]
labels = ['#01665e','#35978f', '#80cdc1','#c7eae5','#f5f5f5','#f6e8c3','#dfc27d','#bf812d','#8c510a','#543005']
#msoas['loneills_2016_class'] = pd.cut(msoas['loneills_2016'], bins, labels = labels)
#msoas['loneills_2017_class'] = pd.cut(msoas['loneills_2017'], bins, labels = labels)
msoas['loneills_2018_class'] = pd.cut(msoas['loneills_2018'], bins, labels = labels)
msoas['loneills_2018_class'] = msoas.loneills_2018_class.astype(str) # convert categorical to string
f, (ax1, ax2, ax3) = plt.subplots(1,3,figsize = (16,10))
#msoas.plot(color = msoas['loneills_2016_class'], ax=ax1)
#msoas.plot(color = msoas['loneills_2017_class'], ax=ax2)
msoas.plot(color = msoas['loneills_2018_class'], ax=ax3)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax1)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax2)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax3)
# restrict to England
#ax1.set_xlim([82672,656000])
#ax1.set_ylim([5342,658000])
#ax2.set_xlim([82672,656000])
#ax2.set_ylim([5342,658000])
#ax3.set_xlim([82672,656000])
#ax3.set_ylim([5342,658000])
# Make a legend
# make bespoke legend
from matplotlib.patches import Patch
handles = []
ranges = ["-10, -5","-5, -3","-3, -2","-2, -1","-1, 1","1, 2","2, 3","3, 5","5, 10","10, 22"]
for color, label in zip(labels,ranges):
handles.append(Patch(facecolor = color, label = label))
ax1.legend(handles = handles, loc = 2);
# Save out msoa data as shapefile and geojson
msoas.to_file(path + "msoa_loneliness.shp", driver = 'ESRI Shapefile')
# msoas.to_file(path + "msoa_loneliness.geojson", driver = 'GeoJSON')
# save out msoa data as csv
msoas.to_csv(path + "msoa_loneliness.csv")
```
```
%matplotlib inline
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc as pm
import scipy as sp
import seaborn as sns
sns.set(context='notebook', font_scale=1.2, rc={'figure.figsize': (12, 5)})
plt.style.use(['seaborn-colorblind', 'seaborn-darkgrid'])
RANDOM_SEED = 8927
np.random.seed(286)
# Helper function
def stdz(series: pd.Series):
"""Standardize the given pandas Series"""
return (series - series.mean())/series.std()
```
### 12E1.
*Which of the following priors will produce more shrinkage in the estimates?*
- $\alpha_{TANK} \sim Normal(0, 1)$
- $\alpha_{TANK} \sim Normal(0, 2)$
The first option will produce more shrinkage, because the prior is more concentrated: the standard deviation is smaller, so the density piles up more mass around zero and will pull extreme values closer to zero.
### 12E2.
*Make the following model into a multilevel model:*
$y_{i} \sim Binomial(1, p_{i})$
$logit(p_{i}) = \alpha_{GROUP[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(0, 10)$
$\beta \sim Normal(0, 1)$
All that is really required to convert the model to a multilevel model is to take the prior for the vector of intercepts, $\alpha_{GROUP}$, and make it adaptive. This means we define parameters for its mean and standard deviation. Then we assign these two new parameters their own priors, *hyperpriors*. This is what it looks like:
$y_{i} \sim Binomial(1, p_{i})$
$logit(p_{i}) = \alpha_{GROUP[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(\mu_{\alpha}, \sigma_{\alpha})$
$\beta \sim Normal(0, 1)$
$\mu_{\alpha} \sim Normal(0, 10)$
$\sigma_{\alpha} \sim HalfCauchy(1)$
The exact hyperpriors you assign don’t matter here. Since this problem has no data context, it isn’t really possible to say what sensible priors would be. Note also that an exponential prior on $\sigma_{\alpha}$ is just as sensible, absent context, as the half-Cauchy prior.
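As an illustrative sketch only (the exercise itself provides no data), the multilevel version could be written in PyMC roughly as below; the `x`, `y`, `group_idx`, and `n_groups` values are synthetic placeholders invented for the example.
```
# Illustrative placeholder data - the exercise provides none.
n_groups = 4
group_idx = np.random.randint(0, n_groups, size=100)
x = np.random.normal(size=100)
y = np.random.binomial(1, 0.5, size=100)

with pm.Model() as m_12e2:
    mu_a = pm.Normal('mu_a', 0., 10.)        # hyperprior: mean of the group intercepts
    sigma_a = pm.HalfCauchy('sigma_a', 1.)   # hyperprior: spread of the group intercepts
    a_group = pm.Normal('a_group', mu_a, sigma_a, shape=n_groups)
    beta = pm.Normal('beta', 0., 1.)
    p = pm.math.invlogit(a_group[group_idx] + beta * x)
    y_obs = pm.Binomial('y_obs', n=1, p=p, observed=y)
```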
### 12E3.
*Make the following model into a multilevel model:*
$y_{i} \sim Normal(\mu_{i}, \sigma)$
$\mu_{i} = \alpha_{GROUP[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(0, 10)$
$\beta \sim Normal(0, 1)$
$\sigma \sim HalfCauchy(2)$
This is very similar to the previous problem. The only trick here is to notice that there is already a standard deviation parameter, σ. But that standard deviation is for the residuals, at the top level. We’ll need yet another standard deviation for the varying intercepts:
$y_{i} \sim Normal(\mu_{i}, \sigma)$
$\mu_{i} = \alpha_{GROUP[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(\mu_{\alpha}, \sigma_{\alpha})$
$\beta \sim Normal(0, 1)$
$\sigma \sim HalfCauchy(2)$
$\mu_{\alpha} \sim Normal(0, 10)$
$\sigma_{\alpha} \sim HalfCauchy(1)$
### 12E4.
*Write an example mathematical model formula for a Poisson regression with varying intercepts*
You can just copy the answer from problem 12E2 and swap out the binomial likelihood for a Poisson, taking care to change the link function from logit to log:
$y_{i} \sim Poisson(\lambda_{i})$
$log(\lambda_{i}) = \alpha_{GROUP[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(\mu_{\alpha}, \sigma_{\alpha})$
$\beta \sim Normal(0, 1)$
$\mu_{\alpha} \sim Normal(0, 10)$
$\sigma_{\alpha} \sim HalfCauchy(1)$
Under the hood, all multilevel models are alike. It doesn’t matter which likelihood function rests at the top. Take care, however, to reconsider priors. The scale of the data and parameters is likely quite different for a Poisson model. Absent any particular context in this problem, you can’t recommend better priors. But in real work, it’s good to think about reasonable values and provide regularizing priors on the relevant scale.
### 12E5.
*Write an example mathematical model formula for a Poisson regression with two different kinds of varying intercepts - a cross-classified model*
The cross-classified model adds another varying intercept type. This is no harder than duplicating the original varying intercepts structure. But you have to take care now not to over-parameterize the model by having a hyperprior mean for both intercept types. You can do this by just assigning one of the adaptive priors a mean of zero. Suppose for example that the second cluster type is day:
$y_{i} \sim Poisson(\lambda_{i})$
$log(\lambda_{i}) = \alpha_{GROUP[i]} + \alpha_{DAY[i]} + \beta x_{i}$
$\alpha_{GROUP} \sim Normal(\mu_{\alpha}, \sigma_{GROUP})$
$\alpha_{DAY} \sim Normal(0, \sigma_{DAY})$
$\beta \sim Normal(0, 1)$
$\mu_{\alpha} \sim Normal(0, 10)$
$\sigma_{GROUP}, \sigma_{DAY} \sim HalfCauchy(1)$
Or you can just pull the mean intercept out of both priors and put it in the linear model:
$y_{i} \sim Poisson(\lambda_{i})$
$log(\lambda_{i}) = \alpha + \alpha_{GROUP[i]} + \alpha_{DAY[i]} + \beta x_{i}$
$\alpha \sim Normal(0, 10)$
$\alpha_{GROUP} \sim Normal(0, \sigma_{GROUP})$
$\alpha_{DAY} \sim Normal(0, \sigma_{DAY})$
$\beta \sim Normal(0, 1)$
$\sigma_{GROUP}, \sigma_{DAY} \sim HalfCauchy(1)$
These are exactly the same model. Although as you’ll see later in Chapter 13, these different forms might be more or less efficient in sampling.
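Again purely as an illustrative sketch (with invented placeholder data), the second form could be expressed in PyMC along these lines:
```
# Illustrative placeholder data for the cross-classified Poisson model.
n_groups, n_days, N = 5, 7, 200
group_idx = np.random.randint(0, n_groups, size=N)
day_idx = np.random.randint(0, n_days, size=N)
x = np.random.normal(size=N)
y = np.random.poisson(2., size=N)

with pm.Model() as m_12e5:
    a = pm.Normal('a', 0., 10.)              # global intercept
    sigma_group = pm.HalfCauchy('sigma_group', 1.)
    sigma_day = pm.HalfCauchy('sigma_day', 1.)
    a_group = pm.Normal('a_group', 0., sigma_group, shape=n_groups)
    a_day = pm.Normal('a_day', 0., sigma_day, shape=n_days)
    beta = pm.Normal('beta', 0., 1.)
    lam = pm.math.exp(a + a_group[group_idx] + a_day[day_idx] + beta * x)
    y_obs = pm.Poisson('y_obs', lam, observed=y)
```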
### 12M1.
*Revisit the Reed frog survival data, reedfrogs.csv, and add the $predation$ and $size$ treatment variables to the varying intercepts model. Consider models with either main effect alone, both main effects, as well as a model including both and their interaction. Instead of focusing on inferences about these two predictor variables, focus on the inferred variation across tanks. Explain why it changes as it does across models.*
```
frogs = pd.read_csv('../Data/reedfrogs.csv', sep=",")
# Switch predictors to dummies
frogs["size"] = pd.Categorical(frogs["size"]).reorder_categories(["small", "big"]).codes
frogs["pred"] = pd.Categorical(frogs["pred"]).codes
# make the tank cluster variable
tank = np.arange(frogs.shape[0])
print(frogs.shape)
frogs.head(8)
frogs.describe()
pred = frogs["pred"].values
size = frogs["size"].values
n_samples, tuning = 1000, 2000
with pm.Model() as m_itcpt:
a = pm.Normal('a', 0., 10.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Normal('a_tank', a, sigma_tank, shape=frogs.shape[0])
p = pm.math.invlogit(a_tank[tank])
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_itcpt = pm.sample(n_samples, tune=tuning, cores=2)
with pm.Model() as m_p:
a = pm.Normal('a', 0., 10.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Normal('a_tank', a, sigma_tank, shape=frogs.shape[0])
bp = pm.Normal('bp', 0., 1.)
p = pm.math.invlogit(a_tank[tank] + bp*pred)
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_p = pm.sample(n_samples, tune=tuning, cores=2)
with pm.Model() as m_s:
a = pm.Normal('a', 0., 10.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Normal('a_tank', a, sigma_tank, shape=frogs.shape[0])
bs = pm.Normal('bs', 0., 1.)
p = pm.math.invlogit(a_tank[tank] + bs*size)
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_s = pm.sample(n_samples, tune=tuning, cores=2)
with pm.Model() as m_p_s:
a = pm.Normal('a', 0., 10.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Normal('a_tank', a, sigma_tank, shape=frogs.shape[0])
bp = pm.Normal('bp', 0., 1.)
bs = pm.Normal('bs', 0., 1.)
p = pm.math.invlogit(a_tank[tank] + bp*pred + bs*size)
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_p_s = pm.sample(n_samples, tune=tuning, cores=2)
with pm.Model() as m_p_s_ps:
a = pm.Normal('a', 0., 10.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Normal('a_tank', a, sigma_tank, shape=frogs.shape[0])
bp = pm.Normal('bp', 0., 1.)
bs = pm.Normal('bs', 0., 1.)
bps = pm.Normal('bps', 0., 1.)
p = pm.math.invlogit(a_tank[tank] + bp*pred + bs*size + bps*pred*size)
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_p_s_ps = pm.sample(n_samples, tune=tuning, cores=2)
```
Now we’d like to inspect how the estimated variation across tanks changes from model to model. This means comparing posterior distributions for $\sigma_{tank}$ across the models:
```
az.plot_forest([trace_itcpt, trace_p, trace_s, trace_p_s, trace_p_s_ps],
model_names=["m_itcpt", "m_p", "m_s", "m_p_s", "m_p_s_ps"],
var_names=["sigma_tank"], credible_interval=.89, figsize=(9,4), combined=True);
```
Note that adding a predictor always decreased the posterior mean variation across tanks. Why? Because the predictors are, well, predicting variation. This leaves less variation for the varying intercepts to mop up. In theory, if the predictor variables carried all of the relevant information that determines the survival outcomes, there would be zero variation across tanks.
You might also notice that the $size$ treatment variable reduces the variation much less than does $predation$. The predictor $size$, in these models, doesn’t help prediction very much, so accounting for it has minimal impact on the estimated variation across tanks.
### 12M2.
*Compare the models you fit just above, using WAIC. Can you reconcile the differences in WAIC with the posterior distributions of the models?*
```
az.compare({"m_itcpt": trace_itcpt, "m_p": trace_p, "m_s": trace_s, "m_p_s": trace_p_s, "m_p_s_ps": trace_p_s_ps},
method="pseudo-BMA")
```
The models are extremely close, but m_s seems to be ranked last, suggesting that $size$ accounts for very little. Can we see this in the coefficients?
```
def get_coefs(est_summary: pd.DataFrame) -> dict:
mean_est = est_summary["mean"].to_dict()
coefs = {}
coefs['sigma_tank'] = mean_est.get('sigma_tank', np.nan)
coefs['bp'] = mean_est.get('bp', np.nan)
coefs['bs'] = mean_est.get('bs', np.nan)
coefs['bps'] = mean_est.get('bps', np.nan)
return coefs
pd.DataFrame.from_dict({"m_itcpt": get_coefs(az.summary(trace_itcpt, credible_interval=0.89)),
"m_p": get_coefs(az.summary(trace_p, credible_interval=0.89)),
"m_s": get_coefs(az.summary(trace_s, credible_interval=0.89)),
"m_p_s": get_coefs(az.summary(trace_p_s, credible_interval=0.89)),
"m_p_s_ps": get_coefs(az.summary(trace_p_s_ps, credible_interval=0.89))})
```
The posterior means for $b_{s}$ are smaller in absolute value than those for $b_{p}$. This is consistent with the WAIC comparison. In fact, the standard deviations on these coefficients are big enough that the $b_{s}$ posterior distributions overlap zero quite a bit. Consider for example the model m_s:
```
az.summary(trace_s, var_names=["a", "bs", "sigma_tank"], credible_interval=0.89)
```
But before you conclude that tadpole size doesn’t matter, remember that other models, perhaps including additional predictors, might find new life for $size$. Inference is always conditional on the model.
### 12M3.
*Re-estimate the basic Reed frog varying intercept model, but now using a Cauchy distribution in place of the Gaussian distribution for the varying intercepts. That is, fit this model:*
$s_{i} \sim Binomial(n_{i}, p_{i})$
$logit(p_{i}) = \alpha_{TANK[i]}$
$\alpha_{TANK} \sim Cauchy(\alpha, \sigma)$
$\alpha \sim Normal(0, 1)$
$\sigma \sim HalfCauchy(1)$
*Compare the posterior means of the intercepts, $\alpha_{TANK}$, to the posterior means produced in the chapter, using the customary Gaussian prior. Can you explain the pattern of differences?*
```
with pm.Model() as m_itcpt_cauch:
a = pm.Normal('a', 0., 1.)
sigma_tank = pm.HalfCauchy('sigma_tank', 1.)
a_tank = pm.Cauchy('a_tank', a, sigma_tank, shape=frogs.shape[0])
p = pm.math.invlogit(a_tank[tank])
surv = pm.Binomial('surv', n=frogs.density, p=p, observed=frogs.surv)
trace_itcpt_cauch = pm.sample(3000, tune=3000, cores=2, nuts_kwargs={"target_accept": .99})
```
You might have some trouble sampling efficiently from this posterior, on account of the long tails of the Cauchy. This results in the intercepts a_tank being poorly identified. You saw a simple example of this problem in Chapter 8, when you met MCMC and learned about diagnosing bad chains. To help the sampler explore the space more efficiently, we've increased the target_accept ratio to 0.99. This topic will come up in more detail in Chapter 13. In any event, be sure to check the chains carefully and sample more if you need to.
The problem asked you to compare the posterior means of the a_tank parameters. Plotting the posterior means will be a lot more meaningful than just looking at the values:
```
post_itcpt = pm.trace_to_dataframe(trace_itcpt)
a_tank_m = post_itcpt.drop(["a", "sigma_tank"], axis=1).mean()
post_itcpt_cauch = pm.trace_to_dataframe(trace_itcpt_cauch)
a_tank_mC = post_itcpt_cauch.drop(["a", "sigma_tank"], axis=1).mean()
plt.figure(figsize=(10,5))
plt.scatter(x=a_tank_m, y=a_tank_mC)
plt.plot([a_tank_m.min()-0.5, a_tank_m.max()+0.5], [a_tank_m.min()-0.5, a_tank_m.max()+0.5], "k--")
plt.xlabel("under Gaussian prior")
plt.ylabel("under Cauchy prior")
plt.title("Posterior mean of each tank's intercept");
```
The dashed line shows the values for which the intercepts are equal in the two models. You can see that for the majority of tank intercepts, the Cauchy model actually produces posterior means that are essentially the same as those from the Gaussian model. But the large intercepts, under the Gaussian prior, are very much more extreme under the Cauchy prior.
For those tanks on the righthand side of the plot, all of the tadpoles survived. So using only the data from each tank alone, the log-odds of survival are infinite. The adaptive prior applies pooling that shrinks those log-odds inwards from infinity, thankfully. But the Gaussian prior causes more shrinkage of the extreme values than the Cauchy prior does. That is what accounts for those 5 extreme points on the right of the plot above.
### 12M4.
*Fit the following cross-classified multilevel model to the chimpanzees data:*
$L_{i} \sim Binomial(1, p_{i})$
$logit(p_{i}) = \alpha_{ACTOR[i]} + \alpha_{BLOCK[i]} + (\beta_{P} + \beta_{PC} C_{i}) P_{i}$
$\alpha_{ACTOR} \sim Normal(\alpha, \sigma_{ACTOR})$
$\alpha_{BLOCK} \sim Normal(\gamma, \sigma_{BLOCK})$
$\alpha, \gamma, \beta_{P}, \beta_{PC} \sim Normal(0, 10)$
$\sigma_{ACTOR}, \sigma_{BLOCK} \sim HalfCauchy(1)$
*Compare the posterior distribution to that produced by the similar cross-classified model from the chapter. Also compare the number of effective samples. Can you explain the differences?*
```
chimp = pd.read_csv('../Data/chimpanzees.csv', sep=";")
# we change "actor" and "block" to zero-index
chimp.actor = (chimp.actor - 1).astype(int)
chimp.block = (chimp.block - 1).astype(int)
Nactor = len(chimp.actor.unique())
Nblock = len(chimp.block.unique())
chimp.head()
with pm.Model() as m_chapter:
sigma_actor = pm.HalfCauchy('sigma_actor', 1.)
sigma_block = pm.HalfCauchy('sigma_block', 1.)
a_actor = pm.Normal('a_actor', 0., sigma_actor, shape=Nactor)
a_block = pm.Normal('a_block', 0., sigma_block, shape=Nblock)
a = pm.Normal('a', 0., 10.)
bp = pm.Normal('bp', 0., 10.)
bpc = pm.Normal('bpc', 0., 10.)
p = pm.math.invlogit(a + a_actor[chimp.actor.values] + a_block[chimp.block.values]
+ (bp + bpc * chimp.condition) * chimp.prosoc_left)
pulled_left = pm.Binomial('pulled_left', 1, p, observed=chimp.pulled_left)
trace_chapter= pm.sample(1000, tune=3000, cores=2)
with pm.Model() as m_exerc:
alpha = pm.Normal("alpha", 0., 10.)
gamma = pm.Normal("gamma", 0., 10.)
sigma_actor = pm.HalfCauchy('sigma_actor', 1.)
sigma_block = pm.HalfCauchy('sigma_block', 1.)
a_actor = pm.Normal('a_actor', alpha, sigma_actor, shape=Nactor)
a_block = pm.Normal('a_block', gamma, sigma_block, shape=Nblock)
bp = pm.Normal('bp', 0., 10.)
bpc = pm.Normal('bpc', 0., 10.)
p = pm.math.invlogit(a_actor[chimp.actor.values] + a_block[chimp.block.values]
+ (bp + bpc * chimp.condition) * chimp.prosoc_left)
pulled_left = pm.Binomial('pulled_left', 1, p, observed=chimp.pulled_left)
trace_exerc= pm.sample(1000, tune=3000, cores=2)
```
This is much like the model in the chapter, just with the two varying intercept means inside the two priors, instead of one mean outside both priors (inside the linear model). Since there are two parameters for the means, one inside each adaptive prior, this model is over-parameterized: an infinite number of different values of $\alpha$ and $\gamma$ will produce the same sum $\alpha + \gamma$. In other words, the $\gamma$ parameter is redundant.
This will produce a poorly-identified posterior. It’s best to avoid specifying a model like this. As a matter of fact, you probably noticed the second model took a lot more time to sample than the first one (about 10x more time), which is usually a sign of a poorly parametrized model. Remember the folk theorem of statistical computing: "*When you have computational problems, often there’s a problem with your model*".
Now let's look at each model's parameters:
```
az.summary(trace_chapter, var_names=["a", "bp", "bpc", "sigma_actor", "sigma_block"], credible_interval=0.89)
az.summary(trace_exerc, var_names=["alpha", "gamma", "bp", "bpc", "sigma_actor", "sigma_block"], credible_interval=0.89)
```
Look at these awful effective sample sizes (ess) and R-hat values for trace_exerc! In a nutshell, the new model (m_exerc) samples quite poorly. This is what happens when you over-parameterize the intercept. Notice however that the inferences about the slopes are practically identical. So even though the over-parameterized model is inefficient, it has identified the slope parameters.
### 12H1.
*In 1980, a typical Bengali woman could have 5 or more children in her lifetime. By the year 2000, a typical Bengali woman had only 2 or 3 children. You're going to look at a historical set of data, when contraception was widely available but many families chose not to use it. These data reside in bangladesh.csv and come from the 1988 Bangladesh Fertility Survey. Each row is one of 1934 women. There are six variables, but you can focus on three of them for this practice problem:*
- $district$: ID number of administrative district each woman resided in
- $use.contraception$: An indicator (0/1) of whether the woman was using contraception
- $urban$: An indicator (0/1) of whether the woman lived in a city, as opposed to living in a rural area
*The first thing to do is ensure that the cluster variable, $district$, is a contiguous set of integers. Recall that these values will be index values inside the model. If there are gaps, you’ll have parameters for which there is no data to inform them. Worse, the model probably won’t run. Let's look at the unique values of the $district$ variable:*
```
d = pd.read_csv('../Data/bangladesh.csv', sep=";")
d.head()
d.describe()
d.district.unique()
```
District 54 is absent. So $district$ isn’t yet a good index variable, because it’s not contiguous. This is easy to fix. Just make a new variable that is contiguous:
```
d["district_id"], _ = pd.factorize(d.district, sort=True)
district_id = d.district_id.values
Ndistricts = len(d.district_id.unique())
d.district_id.unique()
```
Now there are 60 values, contiguous integers 0 to 59.
Now, focus on predicting $use.contraception$, clustered by district ID. Fit both (1) a traditional fixed-effects model that uses an index variable for district and (2) a multilevel model with varying intercepts for district. Plot the predicted proportions of women in each district using contraception, for both the fixed-effects model and the varying-effects model. That is, make a plot in which district_id is on the horizontal axis and expected proportion using contraception is on the vertical. Make one plot for each model, or layer them on the same plot, as you prefer.
How do the models disagree? Can you explain the pattern of disagreement? In particular, can you explain the most extreme cases of disagreement, both why they happen, where they do and why the models reach different inferences?
```
with pm.Model() as m_fixed:
a_district = pm.Normal('a_district', 0., 10., shape=Ndistricts)
p = pm.math.invlogit(a_district[district_id])
used = pm.Bernoulli('used', p=p, observed=d["use.contraception"])
trace_fixed = pm.sample(1000, tune=2000, cores=2)
with pm.Model() as m_varying:
a = pm.Normal('a', 0., 10.)
sigma_district = pm.Exponential('sigma_district', 1.)
a_district = pm.Normal('a_district', 0., sigma_district, shape=Ndistricts)
p = pm.math.invlogit(a + a_district[district_id])
used = pm.Bernoulli('used', p=p, observed=d["use.contraception"])
trace_varying = pm.sample(1000, tune=2000, cores=2)
```
Sampling was smooth and quick, so the traces should be ok. We can confirm by plotting them:
```
az.plot_trace(trace_fixed, compact=True);
az.plot_trace(trace_varying, compact=True);
```
The chains are indeed fine. These models have a lot of parameters, so the summary dataframe we are used to is not really convenient here. Let's use forest plots instead:
```
fig, axes = az.plot_forest([trace_fixed, trace_varying], model_names=["Fixed", "Varying"],
credible_interval=0.89, combined=True, figsize=(8,35))
axes[0].grid();
```
We can already see that some estimates are particularly uncertain in some districts, but only for the fixed-effects model. Chances are these districts are extreme compared to the others, and/or the sample sizes are very small. This would be a case where the varying-effects model's estimates would be better and less volatile in those districts, because it is pooling information - information flows across districts thanks to the higher level common distribution of districts.
```
post_fixed = pm.trace_to_dataframe(trace_fixed)
p_mean_fixed = sp.special.expit(post_fixed.mean())
post_varying = pm.trace_to_dataframe(trace_varying)
# add a_district to a (because they are offsets of the global intercept), then convert to probabilities with logistic
p_mean_varying = sp.special.expit(post_varying.drop(["a", "sigma_district"], axis=1).add(post_varying["a"], axis="index").mean())
global_a = sp.special.expit(post_varying["a"].mean())
plt.figure(figsize=(11,5))
plt.hlines(d["use.contraception"].mean(), -1, Ndistricts, linestyles="dotted", label="Empirical global mean", alpha=.6, lw=2)
plt.hlines(global_a, -1, Ndistricts, linestyles="dashed", label="Estimated global mean", alpha=.6, lw=2)
plt.plot(np.arange(Ndistricts), p_mean_fixed, "o", ms=6, alpha=.8, label="Fixed-effects estimates")
plt.plot(np.arange(Ndistricts), p_mean_varying, "o", fillstyle="none", ms=6, markeredgewidth=1.5, alpha=.8, label="Varying-effects estimates")
plt.xlabel("District")
plt.ylabel("Probability contraception")
plt.legend(ncol=2);
```
The blue points are the fixed-effects estimates, and the open green ones are the varying effects. The dotted line is the observed average proportion of women using contraception, in the entire sample. The dashed line is the average proportion of women using contraception, in the entire sample, *as estimated by the varying effects model*.
Notice first that the green points are always closer to the dashed line, as was the case with the tadpole example in lecture. This results from shrinkage, which results from pooling information. There are cases with rather extreme disagreements, though. The most obvious is district 2, which has a fixed (blue) estimate of 1 but a varying (green) estimate of only 0.44. There are also two districts (10 and 48) for which the fixed estimates are zero, but the varying estimates are 0.18 and 0.30. If you go back to the forest plot above, these are exactly the three districts whose fixed-effects parameters were both far from zero and very uncertain.
So what’s going on here? As we suspected, these districts presented extreme results: either all sampled women used contraception or none did. As a result, the fixed-effects estimates were silly. The varying-effects model was able to produce more rational estimates, because it pooled information from other districts.
But note that the intensity of pooling was different for these three extreme districts. As we intuited too, depending upon how many women were sampled in each district, there was more or less shrinkage (pooling) towards the grand mean. So for example in the case of district 2, there were only 2 women in the sample, and so there is a lot of distance between the blue and green points. In contrast, district 10 had 21 women in the sample, and so while pooling pulls the estimate off of zero to 0.18, it doesn’t pull it nearly as far as district 2.
Another way to think of this phenomenon is to view the same estimates arranged by number of women in the sampled district, on the horizontal axis. Then on the vertical we can plot the distance (absolute value of the difference) between the fixed and varying estimates. Here’s what that looks like:
```
nbr_women = d.groupby("district_id").count()["woman"]
abs_dist = (p_mean_fixed - p_mean_varying).abs()
plt.figure(figsize=(11,5))
plt.plot(nbr_women, abs_dist, 'o', fillstyle="none", ms=7, markeredgewidth=2, alpha=.6)
plt.xlabel("Number of women sampled")
plt.ylabel("Shrinkage by district");
```
You can think of the vertical axis as being the amount of shrinkage. The districts with fewer women sampled show a lot more shrinkage, because there is less information in them. As a result, they are expected to overfit more, and so they are shrunk more towards the overall mean.
### 12H2.
*Return to the Trolley data from Chapter 11. Define and fit a varying intercepts model for these data. By this I mean to add an intercept parameter for the individuals to the linear model. Cluster the varying intercepts on individual participants, as indicated by the unique values in the id variable. Include $action$, $intention$, and $contact$ as before. Compare the varying intercepts model and a model that ignores individuals, using both WAIC/LOO and posterior predictions. What is the impact of individual variation in these data?*
**This will be adressed in a later pull request, as there is currently an issue with PyMC's OrderedLogistic implementation**
### 12H3.
*The Trolley data are also clustered by $story$, which indicates a unique narrative for each vignette. Define and fit a cross-classified varying intercepts model with both $id$ and $story$. Use the same ordinary terms as in the previous problem. Compare this model to the previous models. What do you infer about the impact of different stories on responses?*
**This will be adressed in a later pull request, as there is currently an issue with PyMC's OrderedLogistic implementation**
```
import platform
import sys
import IPython
import matplotlib
import scipy
print(f"This notebook was created on a computer {platform.machine()}, using: "
f"\nPython {sys.version[:5]}\nIPython {IPython.__version__}\nPyMC {pm.__version__}\nArviz {az.__version__}\nNumPy {np.__version__}"
f"\nPandas {pd.__version__}\nSciPy {scipy.__version__}\nMatplotlib {matplotlib.__version__}\n")
```
# ResNet-101 on CIFAR-10
### Imports
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
```
### Settings and Dataset
```
# Device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 1
learning_rate = 0.001
num_epochs = 10
batch_size = 128
torch.manual_seed(random_seed)
# Architecture
num_features = 784  # not actually used below
num_classes = 10
# Data
train_dataset = datasets.CIFAR10(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.CIFAR10(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
```
### Model
```
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes, grayscale):
self.inplanes = 64
if grayscale:
in_dim = 1
else:
in_dim = 3
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(in_dim, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1, padding=2)  # defined but not used in forward() for 32x32 CIFAR inputs
self.fc = nn.Linear(2048, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, (2. / n)**.5)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion))
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = x.view(x.size(0), -1)
logits = self.fc(x)
probas = F.softmax(logits, dim=1)
return logits, probas
def ResNet101(num_classes):
model = ResNet(block=Bottleneck,
layers=[3, 4, 23, 3],
num_classes=num_classes,
grayscale=False)
return model
model = ResNet101(num_classes)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
### Training
```
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
# Forward and Backprop
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
            # update model parameters
optimizer.step()
# Logging
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model.eval()
with torch.set_grad_enabled(False):
print('Epoch: %03d/%03d | Train: %.3f%% ' %(
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
```
### Evaluation
```
with torch.set_grad_enabled(False):
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
```
# Optimization of CNN - TPE
In this notebook, we will optimize the hyperparameters of a CNN using Optuna's define-by-run approach.
```
# For reproducible results.
# See:
# https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
import os
os.environ['PYTHONHASHSEED'] = '0'
import numpy as np
import tensorflow as tf
import random as python_random
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(123)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
python_random.seed(123)
# The below set_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/random/set_seed
tf.random.set_seed(1234)
import itertools
from functools import partial
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten, Conv2D, MaxPool2D
from keras.optimizers import Adam, RMSprop
import optuna
```
# Data Preparation
The dataset contains images of hand-written digits. The aim is to have the computer automatically predict which digit was written, by "looking" at the image.
Each image is 28 pixels in height and 28 pixels in width (28 x 28), making a total of 784 pixels. Each pixel value is an integer between 0 and 255, indicating the darkness in a gray-scale of that pixel.
The data is stored in a dataframe where each pixel is a column (so it is flattened and not in the 28 x 28 format).
The data set has 785 columns. The first column, called "label", is the digit that was drawn by the user. The rest of the columns contain the pixel-values of the associated image.
```
# Load the data
data = pd.read_csv("../mnist.csv")
# first column is the target, the rest of the columns
# are the pixels of the image
# each row is 1 image
data.head()
# split dataset into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['label'], axis=1), # the images
data['label'], # the target
test_size = 0.1,
random_state=0)
X_train.shape, X_test.shape
# number of images for each digit
g = sns.countplot(x=y_train)
plt.xlabel('Digits')
plt.ylabel('Number of images')
```
There are roughly the same number of images for each of the 10 digits.
## Image re-scaling
We re-scale the data for the CNN to values between 0 and 1.
```
# Re-scale the data
# 255 is the maximum value a pixel can take
X_train = X_train / 255
X_test = X_test / 255
```
## Reshape
The images were stored in a pandas dataframe as 1-D vectors of 784 values. For a CNN with Keras, we need tensors with the following dimensions: width x height x channel.
Thus, we reshape all data to 28 x 28 x 1, 3-D matrices.
The 3rd dimension corresponds to the channel. RGB images have 3 channels. MNIST images are in gray-scale, thus they have only one channel in the 3rd dimension.
```
# Reshape image in 3 dimensions:
# height: 28px X width: 28px X channel: 1
X_train = X_train.values.reshape(-1,28,28,1)
X_test = X_test.values.reshape(-1,28,28,1)
```
## Target encoding
```
# the target is 1 variable with the 10 different digits
# as values
y_train.unique()
# For Keras, we need to create 10 dummy variables,
# one for each digit
# Encode labels to one hot vectors (ex : digit 2 -> [0,0,1,0,0,0,0,0,0,0])
y_train = to_categorical(y_train, num_classes = 10)
y_test = to_categorical(y_test, num_classes = 10)
# the new target
y_train
```
Let's print some example images.
```
# Some image examples
g = plt.imshow(X_train[0][:,:,0])
# Some image examples
g = plt.imshow(X_train[10][:,:,0])
```
# Define-by-Run design
We create the CNN and add the sampling space for the hyperparameters as we go. This is the define-by-run concept.
```
# we will save the model with this name
path_best_model = 'cnn_model_2.h5'
# starting point for the optimization
best_accuracy = 0
# function to create the CNN
def objective(trial):
# Start construction of a Keras Sequential model.
model = Sequential()
# Convolutional layers.
# We add the different number of conv layers in the following loop:
num_conv_layers = trial.suggest_int('num_conv_layers', 1, 3)
for i in range(num_conv_layers):
# Note, with this configuration, we sample different filters, kernels
# stride etc, for each convolutional layer that we add
model.add(Conv2D(
filters=trial.suggest_categorical('filters_{}'.format(i), [16, 32, 64]),
kernel_size=trial.suggest_categorical('kernel_size{}'.format(i), [3, 5]),
strides=trial.suggest_categorical('strides{}'.format(i), [1, 2]),
activation=trial.suggest_categorical(
'activation{}'.format(i), ['relu', 'tanh']),
padding='same',
))
# we could also optimize these parameters if we wanted:
model.add(MaxPool2D(pool_size=2, strides=2))
# Flatten the 4-rank output of the convolutional layers
# to 2-rank that can be input to a fully-connected Dense layer.
model.add(Flatten())
# Add fully-connected Dense layers.
# The number of layers is a hyper-parameter we want to optimize.
# We add the different number of layers in the following loop:
num_dense_layers = trial.suggest_int('num_dense_layers', 1, 3)
for i in range(num_dense_layers):
# Add the dense fully-connected layer to the model.
# This has two hyper-parameters we want to optimize:
# The number of nodes (neurons) and the activation function.
model.add(Dense(
units=trial.suggest_int('units{}'.format(i), 5, 512),
activation=trial.suggest_categorical(
'activation{}'.format(i), ['relu', 'tanh']),
))
# Last fully-connected dense layer with softmax-activation
# for use in classification.
model.add(Dense(10, activation='softmax'))
# Use the Adam method for training the network.
optimizer_name = trial.suggest_categorical(
'optimizer_name', ['Adam', 'RMSprop'])
if optimizer_name == 'Adam':
optimizer = Adam(lr=trial.suggest_float('learning_rate', 1e-6, 1e-2))
else:
optimizer = RMSprop(
lr=trial.suggest_float('learning_rate', 1e-6, 1e-2),
momentum=trial.suggest_float('momentum', 0.1, 0.9),
)
# In Keras we need to compile the model so it can be trained.
model.compile(optimizer=optimizer,
loss='categorical_crossentropy',
metrics=['accuracy'])
# train the model
# we use 3 epochs to be able to run the notebook in a "reasonable"
# time. If we increase the epochs, we will have better performance
# this could be another parameter to optimize in fact.
history = model.fit(
x=X_train,
y=y_train,
epochs=3,
batch_size=128,
validation_split=0.1,
)
# Get the classification accuracy on the validation-set
# after the last training-epoch.
accuracy = history.history['val_accuracy'][-1]
# Save the model if it improves on the best-found performance.
# We use the global keyword so we update the variable outside
# of this function.
global best_accuracy
# If the classification accuracy of the saved model is improved ...
if accuracy > best_accuracy:
# Save the new model to harddisk.
# Training CNNs is costly, so we want to avoid having to re-train
# the network with the best found parameters. We save it instead
# as we search for the best hyperparam space.
model.save(path_best_model)
# Update the classification accuracy.
best_accuracy = accuracy
# Delete the Keras model with these hyper-parameters from memory.
del model
    # The study below is created with direction='maximize', so we can
    # return the validation accuracy directly (no negation needed).
    return accuracy
# we need this to store the search
# we will use it in the following notebook
study_name = "cnn_study_2" # unique identifier of the study.
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.create_study(
direction='maximize',
study_name=study_name,
storage=storage_name,
load_if_exists=True,
)
study.optimize(objective, n_trials=30)
```
# Analyze results
```
study.best_params
study.best_value
results = study.trials_dataframe()
results['value'].sort_values().reset_index(drop=True).plot()
plt.title('Convergence plot')
plt.xlabel('Iteration')
plt.ylabel('Accuracy')
results.head()
```
# Evaluate the model
```
# load best model
model = load_model(path_best_model)
model.summary()
# make predictions in test set
result = model.evaluate(x=X_test,
y=y_test)
# print evaluation metrics
for name, value in zip(model.metrics_names, result):
print(name, value)
```
## Confusion matrix
```
# Predict the values from the validation dataset
y_pred = model.predict(X_test)
# Convert predicted probabilities to class labels
y_pred_classes = np.argmax(y_pred, axis = 1)
# Convert one-hot encoded test labels back to class labels
y_true = np.argmax(y_test, axis = 1)
# compute the confusion matrix
cm = confusion_matrix(y_true, y_pred_classes)
cm
# let's make it more colourful
classes = 10
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(classes)
plt.xticks(tick_marks, range(classes), rotation=45)
plt.yticks(tick_marks, range(classes))
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > 100 else "black",
)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
Here we can see that our CNN performs very well on all digits.
# Applying Customizations
```
import pandas as pd
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh', 'matplotlib')
```
As introduced in the [Customization](../getting_started/2-Customization.ipynb) section of the 'Getting Started' guide, HoloViews maintains a strict separation between your content (your data and declarations about your data) and its presentation (the details of how this data is represented visually). This separation is achieved by maintaining sets of keyword values ("options") that specify how elements are to appear, stored outside of the element itself. Option keywords can be specified for individual element instances, for all elements of a particular type, or for arbitrary user-defined sets of elements that you give a certain ``group`` and ``label`` (see [Annotating Data](../user_guide/01-Annotating_Data.ipynb)).
The options system controls how individual plots appear, but other important settings are made more globally using the "output" system, which controls HoloViews plotting and rendering code (see the [Plots and Renderers](Plots_and_Renderers.ipynb) user guide). In this guide we will show how to customize the visual styling with the options and output systems, focusing on the mechanisms rather than the specific choices available (which are covered in other guides such as [Style Mapping](04-Style_Mapping.ipynb)).
## Core concepts
This section offers an overview of some core concepts for customizing visual representation, focusing on how HoloViews keeps content and presentation separate. To start, we will revisit the simple introductory example in the [Customization](../getting_started/2-Customization.ipynb) getting-started guide (which might be helpful to review first).
```
spike_train = pd.read_csv('../assets/spike_train.csv.gz')
curve = hv.Curve(spike_train, 'milliseconds', 'Hertz')
spikes = hv.Spikes(spike_train, 'milliseconds', [])
```
And now we display the ``curve`` and a ``spikes`` elements together in a layout as we did in the getting-started guide:
```
curve = hv.Curve( spike_train, 'milliseconds', 'Hertz')
spikes = hv.Spikes(spike_train, 'milliseconds', [])
layout = curve + spikes
layout.opts(
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover']),
opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='grey')).cols(1)
```
This example illustrates a number of key concepts, as described below.
### Content versus presentation
In the getting-started guide [Introduction](../getting_started/1-Introduction.ipynb), we saw that we can print the string representation of HoloViews objects such as `layout`:
```
print(layout)
```
In the [Customization](../getting_started/2-Customization.ipynb) getting-started guide, the `.opts.info()` method was introduced that lets you see the options *associated* with (though not stored on) the objects:
```
layout.opts.info()
```
If you inspect all the state of the `Layout`, `Curve`, or `Spikes` objects you will not find any of these keywords, because they are stored in an entirely separate data structure. HoloViews assigns a unique ID per HoloViews object that lets arbitrarily specific customization be associated with that object if needed, while also making it simple to define options that apply to entire classes of objects by type (or group and label if defined). The HoloViews element is thus *always* a thin wrapper around your data, without any visual styling information or plotting state, even though it *seems* like the object includes the styling information. This separation between content and presentation is by design, so that you can work with your data and with its presentation entirely independently.
If you wish to clear the options that have been associated with an object `obj`, you can call `obj.opts.clear()`.
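For instance, a minimal sketch (the `scratch` element here is a throwaway copy, so that the `curve` and `layout` objects used later in this guide keep their styling):
```python
scratch = hv.Curve(spike_train, 'milliseconds', 'Hertz').opts(color='red')
scratch.opts.clear()   # scratch falls back to default styling from now on
scratch.opts.info()    # no options should be associated with it any more
```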
## Option builders
The [Customization](../getting_started/2-Customization.ipynb) getting-started guide also introduces the notion of *option builders*. One of the option builders in the visualization shown above is:
```
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover'])
```
An *option builder* takes a collection of keywords and returns an `Options` object that stores these keywords together. Why should you use option builders and how are they different from a vanilla dictionary?
1. The option builder specifies which type of HoloViews object the options are for, which is important because each type accepts different options.
2. Knowing the type, the options builder does *validation* against that type for the currently loaded plotting extensions. Try introducing a typo into one of the keywords above; you should get a helpful error message. Separately, try renaming `line_width` to `linewidth`, and you'll get a different message because the latter is a valid matplotlib keyword.
3. The option builder allows *tab-completion* in the notebook. This is useful for discovering available keywords for that type of object, which helps prevent mistakes and makes it quicker to specify a set of keywords.
In the cell above, the specified options are applicable to `Curve` elements, and different validation and tab completion will be available for other types.
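To see the validation from point 2 in action, here is a minimal sketch (the misspelled `line_witdh` keyword is deliberate, and the call is wrapped in `try`/`except` only so the error message is printed rather than raised):
```python
try:
    opts.Curve(line_witdh=2)   # typo: should be line_width
except Exception as e:
    print(e)                   # expected to suggest valid keywords such as line_width
```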
The returned `Options` object is different from a dictionary in the following ways:
1. An optional *spec* is recorded, where this specification is normally just the element name. Above this is simply 'Curve'. Later, in section [Using `group` and `label`](#Using-group-and-label), we will see how this can also specify the `group` and `label`.
2. The keywords are alphanumerically sorted, making it easier to compare `Options` objects.
## Inlining options
When customizing a single element, the use of an option builder is not mandatory. If you have a small number of keywords that are common (e.g. `color`, `cmap`, `title`, `width`, `height`), it can be clearer to inline them into the `.opts` method call if tab-completion and validation aren't required:
```
np.random.seed(42)
array = np.random.random((10,10))
im1 = hv.Image(array).opts(opts.Image(cmap='Reds')) # Using an option builder
im2 = hv.Image(array).opts(cmap='Blues') # Without an option builder
im1 + im2
```
You cannot inline keywords for composite objects such as `Layout` or `Overlay` objects. For instance, the `layout` object is:
```
print(layout)
```
To customize this layout, you need to use an option builder to associate your keywords with either the `Curve` or the `Spikes` object, or else you would have had to apply the options to the individual elements before you built the composite object. To illustrate setting by type, note that in the first example, both the `Curve` and the `Spikes` have different `height` values provided.
You can also target options by the `group` and `label` as described in section on [using `group` and `label`](#Using-group-and-label).
## Session-specific options
One other common need is to set some options for a Python session, whether using Jupyter notebook or not. For this you can set the default options that will apply to all objects created subsequently:
```
opts.defaults(
opts.HeatMap(cmap='Summer', colorbar=True, toolbar='above'))
```
The `opts.defaults` method has now set the style used for all `HeatMap` elements in this session:
```
data = [(chr(65+i), chr(97+j), i*j) for i in range(5) for j in range(5) if i!=j]
heatmap = hv.HeatMap(data).sort()
heatmap
```
## Discovering options
Using tab completion in the option builders is one convenient and easy way of discovering the available options for an element. Another approach is to use `hv.help`.
For instance, if you run `hv.help(hv.Curve)` you will see a list of the 'style' and 'plot' options applicable to `Curve`. The distinction between these two types of options can often be ignored for most purposes, but the interested reader is encouraged to read more about them in more detail [below](#Split-into-style,-plot-and-norm-options).
For the purposes of discovering the available options, the keywords listed under the 'Style Options' section of the help output is worth noting. These keywords are specific to the active plotting extension and are part of the API for that plotting library. For instance, running `hv.help(hv.Curve)` in the cell below would give you the keywords in the Bokeh documentation that you can reference for customizing the appearance of `Curve` objects.
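That listing is long, so it is not reproduced in this guide; a cell along the following lines produces it for whichever plotting extension is currently active:
```python
hv.help(hv.Curve)   # prints the plot options and backend-specific style options for Curve
```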
## Maximizing readability
There are many ways to specify options in your code using the above tools, but for creating readable, maintainable code, we recommend making the separation of content and presentation explicit. Someone reading your code can then understand your visualizations in two steps 1) what your data *is* in terms of the applicable elements and containers 2) how this data is to be presented visually.
The following guide details the approach we have used through out the examples and guides on holoviews.org. We have found that following these rules makes code involving holoviews easier to read and more consistent.
The core principle is as follows: ***avoid mixing declarations of data, elements and containers with details of their visual appearance***.
### Two contrasting examples
One of the best ways to do this is to declare all your elements, compose them and then apply all the necessary styling with the `.opts` method before the visualization is rendered to disk or to the screen. For instance, the example from the getting-started guide could have been written sub-optimally as follows:
***Sub-optimal***
```python
curve = hv.Curve( spike_train, 'milliseconds', 'Hertz').opts(
height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover'])
spikes = hv.Spikes(spike_train, 'milliseconds', vdims=[]).opts(
height=150, width=900, yaxis=None, line_width=0.25, color='grey')
(curve + spikes).cols(1)
```
Code like that is very difficult to read because it mixes declarations of the data and its dimensions with details about how to present it. The recommended version declares the `Layout`, then separately applies all the options together where it's clear that they are just hints for the visualization:
***Recommended***
```python
curve = hv.Curve( spike_train, 'milliseconds', 'Hertz')
spikes = hv.Spikes(spike_train, 'milliseconds', [])
layout = curve + spikes
layout.opts(
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover']),
opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='grey')).cols(1)
```
By grouping the options in this way and applying them at the end, you can see the definition of `layout` without being distracted by visual concerns declared later. Conversely, you can modify the visual appearance of `layout` easily without needing to know exactly how it was defined. The [coding style guide](#Coding-style-guide) section below offers additional advice for keeping things readable and consistent.
### When to use multiple `.opts` calls
The above coding style applies in many cases, but sometimes you have multiple elements of the same type that you need to distinguish visually. For instance, you may have a set of curves where using the `dim` or `Cycle` objects (described in the [Style Mapping](04-Style_Mapping.ipynb) user guide) is not appropriate and you want to customize the appearance of each curve individually. Alternatively, you may be generating elements in a list comprehension for use in `NdOverlay` and have a specific style to apply to each one.
In these situations, it is often appropriate to use the inline style of `.opts` locally, and it is often best to give the individually styled objects a suitable named handle, as illustrated by the [legend example](../gallery/demos/bokeh/legend_example.ipynb) of the gallery.
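As a purely hypothetical sketch of that pattern (the data, the `styled` handle, and the style choices below are invented for illustration):
```python
xs = np.linspace(0, np.pi * 4, 100)
styled = {
    phase: hv.Curve((xs, np.sin(xs + phase))).opts(color=color, line_width=lw)
    for phase, color, lw in [(0.0, 'red', 1), (0.5, 'green', 2), (1.0, 'blue', 3)]
}
hv.NdOverlay(styled, kdims=['phase'])
```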
### General advice
As HoloViews is highly compositional by design, you can always build long expressions mixing the data and element declarations, the composition of these elements, and their customization. Even though such expressions can be terse, they can also be difficult to read.
The simplest way to avoid long expressions is to keep some level of separation between these stages:
1. declaration of the data
2. declaration of the elements, including `.opts` to distinguish between elements of the same type if necessary
3. composition with `+` and `*` into layouts and overlays, and
4. customization of the composite object, either with a final call to the `.opts` method, or by declaring such settings as the default for your entire session as described [above](#Session-specific-options).
When stages are simple enough, it can be appropriate to combine them. For instance, if the declaration of the data is simple enough, you can fold in the declaration of the element. In general, any expression involving three or more of these stages will benefit from being broken up into several steps.
These general principles will help you write more readable code. Maximizing readability will always require some level of judgement, but you can maximize consistency by consulting the [coding style guide](#Coding-style-guide) section for more tips.
# Customizing display output
The options system controls most of the customizations you might want to do, but there are a few settings that are controlled at a more general level that cuts across all HoloViews object types: the active plotting extension (e.g. Bokeh or Matplotlib), the output display format (PNG, SVG, etc.), the output figure size, and other similar options. The `hv.output` utility allows you to modify these more global settings, either for all subsequent objects or for one particular object:
* `hv.output(**kwargs)`: Customize how the output appears for the rest of the notebook session.
* `hv.output(obj, **kwargs)`: Temporarily affect the display of an object `obj` using the keyword `**kwargs`.
The `hv.output` utility only has an effect in contexts where HoloViews objects can be automatically displayed, which currently is limited to the Jupyter Notebook (in either its classic or JupyterLab variants). In any other Python context, using `hv.output` has no effect, as there is no automatically displayed output; see the [hv.save() and hv.render()](Plots_and_Renderers.ipynb#Saving-and-rendering) utilities for explicitly creating output in those other contexts.
To start with `hv.output`, let us define a `Path` object:
```
lin = np.linspace(0, np.pi*2, 200)
def lissajous(t, a, b, delta):
return (np.sin(a * t + delta), np.sin(b * t), t)
path = hv.Path([lissajous(lin, 3, 5, np.pi/2)])
path.opts(opts.Path(color='purple', line_width=3, line_dash='dotted'))
```
Now, to illustrate, let's use `hv.output` to switch our plotting extension to matplotlib:
```
hv.output(backend='matplotlib', fig='svg')
```
We can now display our `path` object with some option customization:
```
path.opts(opts.Path(linewidth=2, color='red', linestyle='dotted'))
```
Our plot is now rendered with Matplotlib, in SVG format (try right-clicking the image in the web browser and saving it to disk to confirm). Note that the `opts.Path` option builder now tab completes *Matplotlib* keywords because we activated the Matplotlib plotting extension beforehand. Specifically, `linewidth` and `linestyle` don't exist in Bokeh, where the corresponding options are called `line_width` and `line_dash` instead.
You can see the custom output options that are currently active using `hv.output.info()`:
```
hv.output.info()
```
The info method will always show which backend is active as well as any other custom settings you have specified. These settings apply to the subsequent display of all objects unless you customize the output display settings for a single object.
To illustrate how settings are kept separate, let us switch back to Bokeh in this notebook session:
```
hv.output(backend='bokeh')
hv.output.info()
```
With Bokeh active, we can now declare options on `path` that we want to apply only to matplotlib:
```
path = path.opts(
opts.Path(linewidth=3, color='blue', backend='matplotlib'))
path
```
Now we can supply `path` to `hv.output` to customize how it is displayed, while activating matplotlib to generate that display. In the next cell, we render our path at 50% size as an SVG using matplotlib.
```
hv.output(path, backend='matplotlib', fig='svg', size=50)
```
Passing `hv.output` an object will apply the specified settings only for the subsequent display. If you were to view `path` now in the usual way, you would see that it is still being displayed with Bokeh with purple dotted lines.
One thing to note is that when we set the options with `backend='matplotlib'`, the active plotting extension was Bokeh. This means that `opts.Path` will tab complete *bokeh* keywords, and not the matplotlib ones that were specified. In practice you will want to set the backend appropriately before building your options settings, to ensure that you get the most appropriate tab completion.
### Available `hv.output` settings
You can see the available settings using `help(hv.output)`. For reference, here are the most commonly used ones:
* **backend**: *The backend used by HoloViews*. If the necessary libraries are installed this can be `'bokeh'`, `'matplotlib'` or `'plotly'`.
* **fig** : *The static figure format*. The most common options are `'svg'` and `'png'`.
* **holomap**: *The display type for holomaps*. With matplotlib and the necessary support libraries, this may be `'gif'` or `'mp4'`. The JavaScript `'scrubber'` widgets as well as the regular `'widgets'` are always supported.
* **fps**: *The frames per second used for animations*. This setting is used for GIF output and by the scrubber widget.
* **size**: *The percentage size of displayed output*. Useful for making all display larger or smaller.
* **dpi**: *The rendered dpi of the figure*. This setting affects raster output such as PNG images.
In `help(hv.output)` you will see a few other, less common settings. The `filename` setting in particular is not recommended and will be deprecated in favor of `hv.save` in the future.
## Coding style guide
Using `hv.output` plus option builders with the `.opts` method and `opts.default` covers the functionality required for most HoloViews code written by users. In addition to these recommended tools, HoloViews supports [Notebook Magics](Notebook_Magics.ipynb) (not recommended because they are Jupyter-specific) and literal (nested dictionary) formats useful for developers, as detailed in the [Extending HoloViews](#Extending-HoloViews) section.
This section offers further recommendations for how users can structure their code. These are generally tips based on the important principles described in the [maximizing readability](#Maximizing-readability) section that are often helpful but optional.
* Use as few `.opts` calls as necessary to style the object the way you want.
* You can inline keywords without an option builder if you only have a few common keywords. For instance, `hv.Image(...).opts(cmap='Reds')` is clearer to read than `hv.Image(...).opts(opts.Image(cmap='Reds'))`.
* Conversely, you *should* use an option builder if you have more than four keywords.
* When you have multiple option builders, it is often clearest to list them on separate lines with a single indentation in both `.opts` and `opts.defaults`:
**Not recommended**
```
layout.opts(opts.VLine(color='white'), opts.Image(cmap='Reds'), opts.Layout(width=500), opts.Curve(color='blue'))
```
**Recommended**
```
layout.opts(
opts.Curve(color='blue'),
opts.Image(cmap='Reds'),
opts.Layout(width=500),
opts.VLine(color='white'))
```
* The latter is recommended for another reason: if possible, list your element option builders in alphabetical order, before your container option builders in alphabetical order.
* Keep the expression before the `.opts` method simple so that the overall expression is readable.
* Don't mix `hv.output` and use of the `.opts` method in the same expression.
## What is `.options`?
If you tab complete a HoloViews object, you'll notice there is an `.options` method as well as a `.opts` method. So what is the difference?
The `.options` method was introduced in HoloViews 1.10 and was the first time HoloViews allowed users to ignore the distinction between 'style', 'plot' and 'norm' options described in the next section. It is largely equivalent to the `.opts` method except that it applies the options on a returned clone of the object.
In other words, you have `clone = obj.options(**kwargs)` where `obj` is unaffected by the keywords supplied while `clone` will be customized. Both `.opts` and `.options` support an explicit `clone` keyword, so:
* `obj.opts(**kwargs, clone=True)` is equivalent to `obj.options(**kwargs)`, and conversely
* `obj.options(**kwargs, clone=False)` is equivalent to `obj.opts(**kwargs)`
For this reason, users only ever need to use `.opts` and occasionally supply `clone=True` if required. The only other difference between these methods is that `.opts` supports the full literal specification that allows splitting into [style, plot and norm options](#Split-into-style,-plot-and-norm-options) (for developers) whereas `.options` does not.
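As a small sketch of this equivalence (re-using the `im2` image defined earlier; both lines below leave `im2` itself untouched):
```python
clone_a = im2.options(cmap='Greens')            # returns a customized clone
clone_b = im2.opts(cmap='Greens', clone=True)   # equivalent to the line above
```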
## When should I use `clone=True`?
The 'Persistent styles' section of the [customization](../getting_started/2-Customization.ipynb) user guide shows how HoloViews remembers options set for an object (per plotting extension). For instance, we never customized the `spikes` object defined at the start of the notebook but we did customize it when it was part of a `Layout` called `layout`. Examining this `spikes` object, we see the options were applied to the underlying object, not just a copy of it in the layout:
```
spikes
```
This is because `clone=False` by default in the `.opts` method. To illustrate `clone=True`, let's view some purple spikes *without* affecting the original `spikes` object:
```
purple_spikes = spikes.opts(color='purple', clone=True)
purple_spikes
```
Now if you were to look at `spikes` again, you would see that it still looks like the grey version above and only `purple_spikes` is purple. This means that `clone=True` is useful when you want to keep different styles for some HoloViews object (by making styled clones of it) instead of overwriting the options each time you call `.opts`.
## Extending HoloViews
In addition to the formats described above for use by users, additional option formats are supported that are less user friendly for data exploration but may be more convenient for library authors building on HoloViews.
The first of these is the *`Option` list syntax* which is typically most useful outside of notebooks, a *literal syntax* that avoids the need to import `opts`, and then finally a literal syntax that keeps *style* and *plot* options separate.
### `Option` list syntax
If you find yourself using `obj.opts(*options)` where `options` is a list of `Option` objects, use `obj.opts(options)` instead as list input is also supported:
```
options = [
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='grey', tools=['hover']),
opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='orange')]
layout.opts(options).cols(1)
```
This approach is often best in regular Python code where you are dynamically building up a list of options to apply. Using the option builders early also allows for early validation before use in the `.opts` method.
### Literal syntax
This syntax has the advantage of being a pure Python literal but it is harder to work with directly (due to nested dictionaries), is less readable, lacks tab completion support and lacks validation at the point where the keywords are defined:
```
layout.opts(
{'Curve': dict(height=200, width=900, xaxis=None, line_width=2, color='blue', tools=['hover']),
'Spikes': dict(height=150, width=900, yaxis=None, line_width=0.25, color='green')}).cols(1)
```
The utility of this format is you don't need to import `opts` and it is easier to dynamically add or remove keywords using Python or if you are storing options in a text file like YAML or JSON and only later applying them in Python code. This format should be avoided when trying to maximize readability or make the available keyword options easy to explore.
### Using `group` and `label`
The notion of an element `group` and `label` was introduced in [Annotating Data](./01-Annotating_Data.ipynb). This type of metadata is helpful for organizing large collections of elements with shared styling, such as automatically generated objects from some external software (e.g. a simulator). If you have a large set of elements with semantically meaningful `group` and `label` parameters set, you can use this information to appropriately customize large numbers of visualizations at once.
To illustrate, here are four overlaid curves where three have the `group` of 'Sinusoid' and one of these also has the label 'Squared':
```
xs = np.linspace(-np.pi,np.pi,100)
curve = hv.Curve((xs, xs/3))
group_curve1 = hv.Curve((xs, np.sin(xs)), group='Sinusoid')
group_curve2 = hv.Curve((xs, np.sin(xs+np.pi/4)), group='Sinusoid')
label_curve = hv.Curve((xs, np.sin(xs)**2), group='Sinusoid', label='Squared')
curves = curve * group_curve1 * group_curve2 * label_curve
curves
```
We can now use the `.opts` method to make all curves blue unless they are in the 'Sinusoid' group in which case they are red. Additionally, if a curve in the 'Sinusoid' group also has the label 'Squared', we can make sure that curve is green with a custom interpolation option:
```
curves.opts(
opts.Curve(color='blue'),
opts.Curve('Sinusoid', color='red'),
opts.Curve('Sinusoid.Squared', interpolation='steps-mid', color='green'))
```
By using `opts.defaults` instead of the `.opts` method, we can use this type of customization to apply options to many elements, including elements that haven't even been created yet. For instance, if we run:
```
opts.defaults(opts.Area('Error', alpha=0.5, color='grey'))
```
Then any `Area` element with a `group` of 'Error' will then be displayed as a semi-transparent grey:
```
X = np.linspace(0,2,10)
hv.Area((X, np.random.rand(10), -np.random.rand(10)), vdims=['y', 'y2'], group='Error')
```
## Split into `style`, `plot` and `norm` options
In `HoloViews`, an element such as `Curve` actually has three semantically distinct categories of options: `style`, `plot`, and `norm` options. Normally, a user doesn't need to worry about the distinction if they spend most of their time working with a single plotting extension.
When trying to build a system that consistently needs to generate visualizations across different plotting libraries, it can be useful to make this distinction explicit:
##### ``style`` options:
``style`` options are passed directly to the underlying rendering backend that actually draws the plots, allowing you to control the details of how it behaves. Each backend has its own options (e.g. the [``bokeh``](Bokeh_Backend) or plotly backends).
For whichever backend has been selected, HoloViews can tell you which options are supported, but you will need to read the corresponding documentation (e.g. [matplotlib](http://matplotlib.org/contents.html), [bokeh](http://bokeh.pydata.org)) for the details of their use. For listing available options, see the ``hv.help`` as described in the [Discovering options](#Discovering-options) section.
HoloViews has been designed to be easily extensible to additional backends in the future and each backend would have its own set of style options.
##### ``plot`` options:
Each of the various HoloViews plotting classes declares various [Parameters](http://param.pyviz.org) that control how HoloViews builds the visualization for that type of object, such as plot sizes and labels. HoloViews uses these options internally; they are not simply passed to the underlying backend. HoloViews documents these options fully in its online help and in the [Reference Manual](http://holoviews.org/Reference_Manual). These options may vary for different backends in some cases, depending on the support available both in that library and in the HoloViews interface to it, but we try to keep any options that are meaningful for a variety of backends the same for all of them. For listing available options, see the output of ``hv.help``.
##### ``norm`` options:
``norm`` options are a special type of plot option that are applied orthogonally to the above two types, to control normalization. Normalization refers to adjusting the properties of one plot relative to those of another. For instance, two images normalized together would appear with relative brightness levels, with the brightest image using the full range black to white, while the other image is scaled proportionally. Two images normalized independently would both cover the full range from black to white. Similarly, two axis ranges normalized together are effectively linked and will expand to fit the largest range of either axis, while those normalized separately would cover different ranges. For listing available options, see the output of ``hv.help``.
You can preserve the semantic distinction between these types of option in an augmented form of the [Literal syntax](#Literal-syntax) as follows:
```
full_literal_spec = {
'Curve': {'style':dict(color='orange')},
'Curve.Sinusoid': {'style':dict(color='grey')},
'Curve.Sinusoid.Squared': {'style':dict(color='black'),
'plot':dict(interpolation='steps-mid')}}
curves.opts(full_literal_spec)
```
This specification is what HoloViews uses internally, but it is awkward for people to use and is not ever recommended for normal users. That said, it does offer the maximal amount of flexibility and power for integration with other software.
For instance, a simulator that can output visualization using either Bokeh or Matplotlib via HoloViews could use this format. By keeping the 'plot' and 'style' options separate, the 'plot' options could be set regardless of the plotting library while the 'style' options would be conditional on the backend.
## Onwards
This section of the user guide has described how you can discover and set customization options in HoloViews. Using `hv.help` and the option builders, you should be able to find the options available for any given object you want to display.
What *hasn't* been explored are some of the facilities HoloViews offers to map the dimensions of your data to style options. This important topic is explored in the next user guide [Style Mapping](04-Style_Mapping.ipynb), where you will learn of the `dim` object as well as about the `Cycle` and `Palette` objects.
# Applying the Expected Context Framework to the Switchboard Corpus
### Using `DualContextWrapper`
This notebook demonstrates how our implementation of the Expected Context Framework can be applied to the Switchboard dataset. See [this dissertation](https://tisjune.github.io/research/dissertation) for more details about the framework, and more comments on the below analyses.
This notebook will show how to apply `DualContextWrapper`, a wrapper transformer that keeps track of two instances of `ExpectedContextModelTransformer`. For a version of this demo that initializes two separate instances of `ExpectedContextModelTransformer` instead, and that more explicitly demonstrates that functionality, see [this notebook](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/ecf/convokit/expected_context_framework/demos/switchboard_exploration_demo.ipynb).
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import math
import os
```
## 1. Loading and preprocessing the dataset
For this demo, we'll use the Switchboard corpus---a collection of telephone conversations which have been annotated with various dialog acts. More information on the dataset, as it exists in ConvoKit format, can be found [here](https://convokit.cornell.edu/documentation/switchboard.html); the original data is described [here](https://web.stanford.edu/~jurafsky/ws97/CL-dialog.pdf).
We will actually use a preprocessed version of the Switchboard corpus, which we can access below. Since Switchboard consists of transcribed telephone conversations, there are many disfluencies and backchannels that make utterances messier and make it hard to identify what counts as an actual turn. In the version of the corpus we consider, for the purpose of demonstration, we remove the disfluencies and backchannels (acknowledging that we're discarding important parts of the conversations).
```
from convokit import Corpus
from convokit import download
# OPTION 1: DOWNLOAD CORPUS
# UNCOMMENT THESE LINES TO DOWNLOAD CORPUS
# DATA_DIR = '<YOUR DIRECTORY>'
# SW_CORPUS_PATH = download('switchboard-processed-corpus', data_dir=DATA_DIR)
# OPTION 2: READ PREVIOUSLY-DOWNLOADED CORPUS FROM DISK
# UNCOMMENT THIS LINE AND REPLACE WITH THE DIRECTORY WHERE THE SWITCHBOARD CORPUS IS LOCATED
# SW_CORPUS_PATH = '<YOUR DIRECTORY>'
sw_corpus = Corpus(SW_CORPUS_PATH)
sw_corpus.print_summary_stats()
utt_eg_id = '3496-79'
```
As input, we use a preprocessed version of each utterance that only contains alphabetical words, found in the `alpha_text` metadata field.
```
sw_corpus.get_utterance(utt_eg_id).meta['alpha_text']
```
In order to avoid capturing topic-specific information, we restrict our analyses to a vocabulary of unigrams that occur across many topics and across many conversations:
```
from collections import defaultdict
topic_counts = defaultdict(set)
for ut in sw_corpus.iter_utterances():
topic = sw_corpus.get_conversation(ut.conversation_id).meta['topic']
for x in set(ut.meta['alpha_text'].lower().split()):
topic_counts[x].add(topic)
topic_counts = {x: len(y) for x, y in topic_counts.items()}
word_convo_counts = defaultdict(set)
for ut in sw_corpus.iter_utterances():
for x in set(ut.meta['alpha_text'].lower().split()):
word_convo_counts[x].add(ut.conversation_id)
word_convo_counts = {x: len(y) for x, y in word_convo_counts.items()}
min_topic_words = set(x for x,y in topic_counts.items() if y >= 33)
min_convo_words = set(x for x,y in word_convo_counts.items() if y >= 200)
vocab = sorted(min_topic_words.intersection(min_convo_words))
len(vocab)
from convokit.expected_context_framework import ColNormedTfidfTransformer, DualContextWrapper
```
## 2. Applying the Expected Context Framework
To apply the Expected Context Framework, we start by converting the input utterance text to an input vector representation. Here, we represent utterances in a term-document matrix that's _normalized by columns_ (empirically, we found that this ensures that the representations derived by the framework aren't skewed by the relative frequency of utterances). We use `ColNormedTfidfTransformer` transformer to do this:
```
tfidf_obj = ColNormedTfidfTransformer(input_field='alpha_text', output_field='col_normed_tfidf', binary=True, vocabulary=vocab)
_ = tfidf_obj.fit(sw_corpus)
_ = tfidf_obj.transform(sw_corpus)
```
We now use the Expected Context Framework. In short, the framework derives vector representations, and other characterizations, of terms and utterances that are based on their _expected conversational context_---i.e., the replies we expect will follow a term or utterance, or the preceding utterances that we expect the term/utterance will reply to.
We are going to derive characterizations based both on the _forwards_ context, i.e., the expected replies, and the _backwards_ context, i.e., the expected predecessors. We'll apply the framework in each direction, and then compare the characterizations that result. To take care of both interlocked models, we use the `DualContextWrapper` transformer, which will keep track of two `ExpectedContextModelTransformer`s: one that relates utterances to predecessors (`reply_to`), and that outputs utterance-level attributes with the prefix `bk`; the other that relates utterances to replies (`next_id`) and outputs utterance-level attributes with the prefix `fw`. These parameters are specified via the `context_fields` and `output_prefixes` arguments.
Other arguments passed:
* `vect_field` and `context_vect_field` respectively denote the input vector representations of utterances and context utterances that the transformer will work with. Here, we'll use the same tf-idf representations that we just computed above.
* `n_svd_dims` denotes the dimensionality of the vector representations that the transformer will output. This is something that you can play around with---for this dataset, we found that more dimensions resulted in messier output, and a coarser, lower-dimensional representation was slightly more interpretable. (Technical note: technically, each constituent model produces vector representations of dimension `n_svd_dims`-1, since by default, it removes the first latent dimension, which we find tends to strongly reflect term frequency.)
* `n_clusters` denotes the number of utterance types that the transformer will infer, given the representations it computes. Note that this is an interpretative step: looking at clusters of utterances helps us get a sense of what information the representations are capturing; this value does not actually impact the representations and other characterizations we derive.
* `random_state` and `cluster_random_state` are fixed for this demo, so we produce deterministic output.
```
dual_context_model = DualContextWrapper(context_fields=['reply_to','next_id'], output_prefixes=['bk','fw'],
vect_field='col_normed_tfidf', context_vect_field='col_normed_tfidf',
n_svd_dims=15, n_clusters=2,
random_state=1000, cluster_random_state=1000)
```
We'll fit the transformer on the subset of utterances and replies that have at least 5 unigrams from our vocabulary.
```
dual_context_model.fit(sw_corpus,selector=lambda x: x.meta.get('col_normed_tfidf__n_feats',0)>=5,
context_selector=lambda x: x.meta.get('col_normed_tfidf__n_feats',0)>= 5)
```
### Interpreting derived representations
Before applying the transformer to transform the corpus, we can examine the representations and characterizations it has derived over the training data (note that in this case, the training data is also the corpus that we analyze, but this needn't be the case in general---see [this demo](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/convokit/expected_context_framework/demos/wiki_awry_demo.ipynb) for an example).
First, to interpret the representations derived by each model, we can inspect the clusters of representations that we've inferred, for both the forwards and backwards direction. We can access the forwards and backwards models as elements of the `ec_models` attribute. The following function calls print out representative terms and utterances, as well as context terms and utterances, per cluster (next two cells; note that the output is quite long).
```
dual_context_model.ec_models[0].print_clusters(corpus=sw_corpus)
dual_context_model.ec_models[1].print_clusters(corpus=sw_corpus)
```
demo continues below
We can see that in each case, two clusters emerge that roughly correspond to utterances recounting personal experiences, and those providing commentary, generally not about personal matters. We'll label them as such, noting that there's a roughly 50-50 split with slightly more "personal" utterances than "commentary" ones:
```
dual_context_model.ec_models[0].set_cluster_names(['personal', 'commentary'])
dual_context_model.ec_models[1].set_cluster_names(['commentary', 'personal'])
```
### Interpreting derived characterizations
The transformer also computes some term-level statistics, which we can return as a Pandas dataframe:
* forwards and backwards ranges (`fw_range` and `bk_range` respectively): we roughly interpret these as modeling the strengths of our forwards expectations of the replies that a term tends to get, or the backwards expectations of the predecessors that the term tends to follow.
* shift: this statistic corresponds to the distance between the backwards and forwards representations for each term; we interpret it as the extent to which a term shifts the focus of a conversation.
* orientation (`orn`): this statistic compares the relative magnitude of forwards and backwards ranges. In a [counseling conversation setting](https://www.cs.cornell.edu/~cristian/Orientation_files/orientation-forwards-backwards.pdf) we interpreted orientation as a measure of the relative extent to which an interlocutor aims to advance the conversation forwards with a term, versus address existing content.
```
term_df = dual_context_model.get_term_df()
term_df.head()
k=10
print('low orientation')
display(term_df.sort_values('orn').head(k)[['orn']])
print('high orientation')
display(term_df.sort_values('orn').tail(k)[['orn']])
print('\nlow shift')
display(term_df.sort_values('shift').head(k)[['shift']])
print('high shift')
display(term_df.sort_values('shift').tail(k)[['shift']])
```
### Deriving utterance-level representations
We now use the transformer to derive utterance-level characterizations, by transforming the corpus with it. Again, we focus on utterances that are sufficiently long:
```
_ = dual_context_model.transform(sw_corpus, selector=lambda x: x.meta.get('col_normed_tfidf__n_feats',0)>=5)
```
The `transform` function does the following.
First, it (or rather, its constituent `ExpectedContextModelTransformer`s) derives vector representations of utterances, stored as `fw_repr` and `bk_repr`:
```
sw_corpus.vectors
```
Next, it derives ranges of utterances, stored in the metadata as `fw_range` and `bk_range`:
```
eg_ut = sw_corpus.get_utterance(utt_eg_id)
print('Forwards range:', eg_ut.meta['fw_range'])
print('Backwards range:', eg_ut.meta['bk_range'])
```
It also assigns utterances to inferred types:
```
print('Forwards cluster:', eg_ut.meta['fw_clustering.cluster'])
print('Backwards cluster:', eg_ut.meta['bk_clustering.cluster'])
```
And computes orientations and shifts:
```
print('shift:', eg_ut.meta['shift'])
print('orientation:', eg_ut.meta['orn'])
```
## 3. Analysis: correspondence to discourse act labels
We explore the relation between the characterizations we've derived, and the various annotations that the utterances are labeled with (for more information on the annotation scheme, see the [manual here](https://web.stanford.edu/~jurafsky/ws97/manual.august1.html)). See [this dissertation](https://tisjune.github.io/research/dissertation) for further explanation of the analyses and findings below. A high-level comment is that this is a tough dataset for the framework to work with, given the relative lack of structure---something future work could think more carefully about.
To facilitate the analysis, we extract relevant utterance attributes into a Pandas dataframe:
```
df = sw_corpus.get_attribute_table('utterance',
['bk_clustering.cluster', 'fw_clustering.cluster',
'orn', 'shift', 'tags'])
df = df[df['bk_clustering.cluster'].notnull()]
```
We will stick to examining the 9 most common tags in the data:
```
tag_subset = ['aa', 'b', 'ba', 'h', 'ny', 'qw', 'qy', 'sd', 'sv']
for tag in tag_subset:
df['has_' + tag] = df.tags.apply(lambda x: tag in x.split())
```
To start, we explore how the forwards and backwards vector representations correspond to these labels. To do this, we will compute log-odds ratios between the inferred utterance clusters and these labels:
```
def compute_log_odds(col, bool_col, val_subset=None):
if val_subset is not None:
col_vals = val_subset
else:
col_vals = col.unique()
log_odds_entries = []
for val in col_vals:
val_true = sum((col == val) & bool_col)
val_false = sum((col == val) & ~bool_col)
nval_true = sum((col != val) & bool_col)
nval_false = sum((col != val) & ~bool_col)
log_odds_entries.append({'val': val, 'log_odds': np.log((val_true/val_false)/(nval_true/nval_false))})
return log_odds_entries
bk_log_odds = []
for tag in tag_subset:
entry = compute_log_odds(df['bk_clustering.cluster'],df['has_' + tag], ['commentary'])[0]
entry['tag'] = tag
bk_log_odds.append(entry)
bk_log_odds_df = pd.DataFrame(bk_log_odds).set_index('tag').sort_values('log_odds')[['log_odds']]
fw_log_odds = []
for tag in tag_subset:
entry = compute_log_odds(df['fw_clustering.cluster'],df['has_' + tag], ['commentary'])[0]
entry['tag'] = tag
fw_log_odds.append(entry)
fw_log_odds_df = pd.DataFrame(fw_log_odds).set_index('tag').sort_values('log_odds')[['log_odds']]
print('forwards types vs labels')
display(fw_log_odds_df.T)
print('--------------------------')
print('backwards types vs labels')
display(bk_log_odds_df.T)
```
Tags further towards the right of the above tables (more positive log-odds) are those that co-occur more with the `commentary` than the `personal` utterance type. We briefly note that both forwards and backwards representations seem to draw a distinction between `sv` (opinion statements) and `sd` (non-opinion statements).
Next, we explore how the orientation and shift statistics relate to these labels. To do this, we compare statistics for utterances with a particular label, to statistics for utterances without that label.
```
from scipy import stats
def cohend(d1, d2):
n1, n2 = len(d1), len(d2)
s1, s2 = np.var(d1, ddof=1), np.var(d2, ddof=1)
s = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
u1, u2 = np.mean(d1), np.mean(d2)
return (u1 - u2) / s
def get_pstars(p):
if p < 0.001:
return '***'
elif p < 0.01:
return '**'
elif p < 0.05:
return '*'
else: return ''
stat_col = 'orn'
entries = []
for tag in tag_subset:
has = df[df['has_' + tag]][stat_col]
hasnt = df[~df['has_' + tag]][stat_col]
entry = {'tag': tag, 'pval': stats.mannwhitneyu(has, hasnt)[1],
'cd': cohend(has, hasnt)}
entry['ps'] = get_pstars(entry['pval'] * len(tag_subset))
entries.append(entry)
orn_stat_df = pd.DataFrame(entries).set_index('tag').sort_values('cd')
orn_stat_df = orn_stat_df[np.abs(orn_stat_df.cd) >= .1]
stat_col = 'shift'
entries = []
for tag in tag_subset:
has = df[df['has_' + tag]][stat_col]
hasnt = df[~df['has_' + tag]][stat_col]
entry = {'tag': tag, 'pval': stats.mannwhitneyu(has, hasnt)[1],
'cd': cohend(has, hasnt)}
entry['ps'] = get_pstars(entry['pval'] * len(tag_subset))
entries.append(entry)
shift_stat_df = pd.DataFrame(entries).set_index('tag').sort_values('cd')
shift_stat_df = shift_stat_df[np.abs(shift_stat_df.cd) >= .1]
```
(We'll only show labels for which there's a sufficiently large difference, in terms of Cohen's d, between utterances with and without the label.)
```
print('orientation vs labels')
display(orn_stat_df.T)
print('--------------------------')
print('shift vs labels')
display(shift_stat_df.T)
```
We note that utterances containing questions (`qw`, `qy`) have higher shifts than utterances which do not. If you're familiar with the DAMSL designations for forwards- and backwards-looking communicative functions, the output for orientation might look a little puzzling; it is informative in that our view of what counts as forwards/backwards differs from the view espoused by the annotation scheme. We discuss this further in [this dissertation](https://tisjune.github.io/research/dissertation).
## 4. Model persistence
Finally, we briefly demonstrate how the model can be saved and loaded for later use:
```
DUAL_MODEL_PATH = os.path.join(SW_CORPUS_PATH, 'dual_model')
dual_context_model.dump(DUAL_MODEL_PATH)
```
We dump latent context representations, clustering information, and various input parameters, for each constituent `ExpectedContextModelTransformer`, in separate directories under `DUAL_MODEL_PATH`:
```
ls $DUAL_MODEL_PATH
```
To load the learned model, we start by initializing a new model:
```
dual_model_new = DualContextWrapper(context_fields=['reply_to','next_id'], output_prefixes=['bk_new','fw_new'],
vect_field='col_normed_tfidf', context_vect_field='col_normed_tfidf',
wrapper_output_prefix='new',
n_svd_dims=15, n_clusters=2,
random_state=1000, cluster_random_state=1000)
dual_model_new.load(DUAL_MODEL_PATH, model_dirs=['bk','fw'])
```
We see that using the re-loaded model to transform the corpus results in the same representations and characterizations as the original one:
```
_ = dual_model_new.transform(sw_corpus, selector=lambda x: x.meta.get('col_normed_tfidf__n_feats',0)>=5)
sw_corpus.vectors
np.allclose(sw_corpus.get_vectors('bk_new_repr'), sw_corpus.get_vectors('bk_repr'))
np.allclose(sw_corpus.get_vectors('fw_new_repr'), sw_corpus.get_vectors('fw_repr'))
for ut in sw_corpus.iter_utterances(selector=lambda x: x.meta.get('col_normed_tfidf__n_feats',0)>=5):
assert ut.meta['orn'] == ut.meta['new_orn']
assert ut.meta['shift'] == ut.meta['new_shift']
```
## 5. Pipeline usage
We also implement a pipeline that handles the following:
* processes text (via a pipeline supplied by the user)
* transforms text to input representation (via `ColNormedTfidfTransformer`)
* derives framework output (via `DualContextWrapper`)
```
from convokit.expected_context_framework import DualContextPipeline
# see `demo_text_pipelines.py` in this demo's directory for details
# in short, this pipeline will either output the `alpha_text` metadata field
# of an utterance, or write the utterance's `text` attribute into the `alpha_text`
# metadata field
from demo_text_pipelines import switchboard_text_pipeline
```
We initialize the pipeline with the following arguments:
* `text_field` specifies which utterance metadata field to use as text input
* `text_pipe` specifies the pipeline used to compute the contents of `text_field`
* `tfidf_params` specifies the parameters to be passed into the underlying `ColNormedTfidfTransformer` object
* `min_terms` specifies the minimum number of terms in the vocabulary that an utterance must contain for it to be considered in fitting and transforming the underlying `DualContextWrapper` object (see the `selector` argument passed into `dual_context_model.fit` above)
All other arguments are inherited from `DualContextWrapper`.
```
pipe_obj = DualContextPipeline(context_fields=['reply_to','next_id'],
output_prefixes=['bk','fw'],
text_field='alpha_text', text_pipe=switchboard_text_pipeline(),
tfidf_params={'binary': True, 'vocabulary': vocab},
min_terms=5,
n_svd_dims=15, n_clusters=2,
random_state=1000, cluster_random_state=1000)
# note this might output a warning that `col_normed_tfidf` already exists;
# that's okay: the pipeline is just recomputing this matrix
pipe_obj.fit(sw_corpus)
```
Note that the pipeline enables us to transform ad-hoc string input:
```
eg_ut_new = pipe_obj.transform_utterance('How old were you when you left ?')
# note these attributes have the exact same values as those of eg_ut, computed above
print('shift:', eg_ut_new.meta['shift'])
print('orientation:', eg_ut_new.meta['orn'])
```
# Underfitting and Overfitting demo using KNN
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('data_knn_classification_cleaned_titanic.csv')
data.head()
x = data.drop(['Survived'], axis=1)
y = data['Survived']
#Scaling the data
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
x = ss.fit_transform(x)
#split the data
from sklearn.model_selection import train_test_split
train_x, test_x, train_y, test_y = train_test_split(x, y, random_state=96, stratify=y)
```
# implementing KNN
```
# importing KNN classifier and F1 score
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.metrics import f1_score
#creating an instance of KNN
clf = KNN(n_neighbors = 12)
clf.fit(train_x, train_y)
train_predict = clf.predict(train_x)
k1 = f1_score(train_predict, train_y)
print("training: ",k1)
test_predict = clf.predict(test_x)
k = f1_score(test_predict, test_y)
print("testing: ",k)
def f1score(k):
train_f1 = []
test_f1 = []
for i in k:
clf = KNN(n_neighbors = i)
clf.fit(train_x, train_y)
train_predict = clf.predict(train_x)
k1 = f1_score(train_predict, train_y)
train_f1.append(k1)
test_predict = clf.predict(test_x)
k = f1_score(test_predict, test_y)
test_f1.append(k)
return train_f1, test_f1
k = range(1,50)
train_f1, test_f1 = f1score(k)
train_f1, test_f1
score = pd.DataFrame({'train score': train_f1, 'test_score':test_f1}, index = k)
score
# visualising
plt.plot(k, test_f1, color='red', label='test')
plt.plot(k, train_f1, color='green', label='train')
plt.xlabel('K Neighbors')
plt.ylabel('F1 score')
plt.title('F1 curve')
plt.ylim(0, 1)
plt.legend()
#split the data
from sklearn.model_selection import train_test_split
train_x, test_x, train_y, test_y = train_test_split(x, y, random_state=42, stratify=y)
k = range(1,50)
train_f1, test_f1 = f1score(k)
# visualising
plt.plot(k, test_f1, color='red', label='test')
plt.plot(k, train_f1, color='green', label='train')
plt.xlabel('K Neighbors')
plt.ylabel('F1 score')
plt.title('F1 curve')
# plt.ylim(0, 1)
plt.legend()
'''
Here the value of k was chosen by looking at both the train and test data.
Instead of tuning on the test set, we should use a validation set.

Types of validation:
1. Hold-out validation
   The data is divided into fixed proportions, so the validation set can
   end up biased towards one class (it might contain examples of only one
   class, in which case the model is never validated on the other class).
   The splits can have different class distributions.
2. Stratified hold-out
   The splits preserve the class distribution of the full dataset.

Hold-out validation needs a good amount of data, since a large portion
must still be reserved for training. What if the dataset is small and we
still want to learn complex relationships from it?
'''
```
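As a minimal sketch of the stratified hold-out and k-fold ideas raised in the comments above (reusing the scaled `x` and `y` arrays from earlier; the 60/20/20 split sizes and the choice of 5 folds are illustrative assumptions, not values used elsewhere in this notebook):
```
# Sketch: tune k on a stratified validation set, then cross-validate.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.metrics import f1_score

# stratified hold-out: carve off a test set, then a validation set (60/20/20)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, stratify=y, random_state=96)
x_tr, x_val, y_tr, y_val = train_test_split(x_tr, y_tr, test_size=0.25, stratify=y_tr, random_state=96)

# choose k on the validation set instead of the test set
val_f1 = {}
for k_val in range(1, 50):
    clf = KNN(n_neighbors=k_val).fit(x_tr, y_tr)
    val_f1[k_val] = f1_score(y_val, clf.predict(x_val))
best_k = max(val_f1, key=val_f1.get)
print("best k on the validation set:", best_k)

# when the dataset is small, k-fold cross-validation reuses every row
# for both fitting and validation
cv_scores = cross_val_score(KNN(n_neighbors=best_k), x, y, cv=5, scoring='f1')
print("5-fold CV F1: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))
```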
# Bias Variance Tradeoff
```
'''
If variance is high then bias is low; if bias is high then variance is low.

error        high bias      high variance     optimally in between
fit          underfit       overfit           best fit
k range      k > 21         k < 11            12 < k < 21
complexity   low            high              optimum

Generalization error: the error of the optimum model sitting between the
high-bias and high-variance regimes.
High variance refers to overfitting whereas high bias refers to
underfitting, and we do not want either of these scenarios.
So, the best model is said to have low bias and low variance.
'''
```
# [ATM 623: Climate Modeling](../index.ipynb)
[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany
# Lecture 17: Ice albedo feedback in the EBM
### About these notes:
This document uses the interactive [`IPython notebook`](http://ipython.org/notebook.html) format (now also called [`Jupyter`](https://jupyter.org)). The notes can be accessed in several different ways:
- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware
- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)
- A complete snapshot of the notes as of May 2015 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).
Many of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab
## Contents
1. [Interactive snow and ice line in the EBM](#section1)
2. [Polar-amplified warming in the EBM](#section2)
3. [Effects of diffusivity in the annual mean EBM with albedo feedback](#section3)
4. [Diffusive response to a point source of energy](#section4)
____________
<a id='section1'></a>
## 1. Interactive snow and ice line in the EBM
____________
### The annual mean EBM
The equation is
$$ C(\phi) \frac{\partial T_s}{\partial t} = (1-\alpha) ~ Q - \left( A + B~T_s \right) + \frac{D}{\cos\phi } \frac{\partial }{\partial \phi} \left( \cos\phi ~ \frac{\partial T_s}{\partial \phi} \right) $$
### Temperature-dependent ice line
Let the surface albedo be larger wherever the temperature is below some threshold $T_f$:
$$ \alpha\left(\phi, T(\phi) \right) = \left\{\begin{array}{ccc}
\alpha_0 + \alpha_2 P_2(\sin\phi) & ~ & T(\phi) > T_f \\
a_i & ~ & T(\phi) \le T_f \\
\end{array} \right. $$
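Before handing this to `climlab`, a small stand-alone NumPy sketch can make the step function concrete. The explicit form $P_2(x) = (3x^2 - 1)/2$ and the sample temperatures are our own illustrative choices; the parameter values match the reference `param` dictionary defined below.
```
# Stand-alone sketch of the step-function albedo above (not part of climlab).
import numpy as np

def step_albedo(lat_deg, T, a0=0.3, a2=0.078, ai=0.62, Tf=-10.):
    """Ice-free polynomial albedo where T > Tf, icy albedo ai elsewhere."""
    x = np.sin(np.deg2rad(lat_deg))
    P2 = 0.5 * (3. * x**2 - 1.)      # second Legendre polynomial
    return np.where(T > Tf, a0 + a2 * P2, ai)

lat = np.array([-90., -60., -30., 0., 30., 60., 90.])
T   = np.array([-25., -12.,  10., 25., 10., -12., -25.])
print(step_albedo(lat, T))   # icy albedo (0.62) at the cold poles, about 0.26 in the tropics
```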
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
# for convenience, set up a dictionary with our reference parameters
param = {'A':210, 'B':2, 'a0':0.3, 'a2':0.078, 'ai':0.62, 'Tf':-10.}
model1 = climlab.EBM_annual( num_lat=180, D=0.55, **param )
print(model1)
```
Because we provided a parameter `ai` for the icy albedo, our model now contains several sub-processes within the process called `albedo`. Together these implement the step-function formula above.
The process called `iceline` simply looks for grid cells with temperature below $T_f$.
```
print(model1.param)
def ebm_plot( model, figsize=(8,12), show=True ):
'''This function makes a plot of the current state of the model,
including temperature, energy budget, and heat transport.'''
templimits = -30,35
radlimits = -340, 340
htlimits = -7,7
latlimits = -90,90
lat_ticks = np.arange(-90,90,30)
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(3,1,1)
ax1.plot(model.lat, model.Ts)
ax1.set_xlim(latlimits)
ax1.set_ylim(templimits)
ax1.set_ylabel('Temperature (deg C)')
ax1.set_xticks( lat_ticks )
ax1.grid()
ax2 = fig.add_subplot(3,1,2)
ax2.plot(model.lat, model.diagnostics['ASR'], 'k--', label='SW' )
ax2.plot(model.lat, -model.diagnostics['OLR'], 'r--', label='LW' )
ax2.plot(model.lat, model.diagnostics['net_radiation'], 'c-', label='net rad' )
ax2.plot(model.lat, model.heat_transport_convergence(), 'g--', label='dyn' )
ax2.plot(model.lat, model.diagnostics['net_radiation'].squeeze()
+ model.heat_transport_convergence(), 'b-', label='total' )
ax2.set_xlim(latlimits)
ax2.set_ylim(radlimits)
ax2.set_ylabel('Energy budget (W m$^{-2}$)')
ax2.set_xticks( lat_ticks )
ax2.grid()
ax2.legend()
ax3 = fig.add_subplot(3,1,3)
ax3.plot(model.lat_bounds, model.heat_transport() )
ax3.set_xlim(latlimits)
ax3.set_ylim(htlimits)
ax3.set_ylabel('Heat transport (PW)')
ax3.set_xlabel('Latitude')
ax3.set_xticks( lat_ticks )
ax3.grid()
return fig
model1.integrate_years(5)
f = ebm_plot(model1)
model1.diagnostics['icelat']
```
____________
<a id='section2'></a>
## 2. Polar-amplified warming in the EBM
____________
### Add a small radiative forcing
The equivalent of doubling CO2 in this model is something like
$$ A \rightarrow A - \delta A $$
where $\delta A = 4$ W m$^{-2}$.
```
deltaA = 4.
model2 = climlab.process_like(model1)
model2.subprocess['LW'].A = param['A'] - deltaA
model2.integrate_years(5, verbose=False)
plt.plot(model1.lat, model1.Ts)
plt.plot(model2.lat, model2.Ts)
```
The warming is polar-amplified: more warming at the poles than elsewhere.
Why?
Also, the current ice line is now:
```
model2.diagnostics['icelat']
```
There is no ice left!
Let's do some more greenhouse warming:
```
model3 = climlab.process_like(model1)
model3.subprocess['LW'].A = param['A'] - 2*deltaA
model3.integrate_years(5, verbose=False)
plt.plot(model1.lat, model1.Ts)
plt.plot(model2.lat, model2.Ts)
plt.plot(model3.lat, model3.Ts)
plt.xlim(-90, 90)
plt.grid()
```
In the ice-free regime, there is no polar-amplified warming. A uniform radiative forcing produces a uniform warming.
____________
<a id='section3'></a>
## 3. Effects of diffusivity in the annual mean EBM with albedo feedback
____________
### In-class investigation:
We will repeat the exercise from Lecture 14, but this time with albedo feedback included in our model.
- Solve the annual-mean EBM (integrate out to equilibrium) over a range of different diffusivity parameters.
- Make three plots:
- Global-mean temperature as a function of $D$
- Equator-to-pole temperature difference $\Delta T$ as a function of $D$
- Poleward heat transport across 35 degrees $\mathcal{H}_{max}$ as a function of $D$
- Choose a value of $D$ that gives a reasonable approximation to observations:
- $\Delta T \approx 45$ ºC
Use these parameter values:
```
param = {'A':210, 'B':2, 'a0':0.3, 'a2':0.078, 'ai':0.62, 'Tf':-10.}
print(param)
```
### One possible way to do this:
```
Darray = np.arange(0., 2.05, 0.05)
model_list = []
Tmean_list = []
deltaT_list = []
Hmax_list = []
for D in Darray:
ebm = climlab.EBM_annual(num_lat=360, D=D, **param )
#ebm.subprocess['insolation'].s2 = -0.473
ebm.integrate_years(5., verbose=False)
Tmean = ebm.global_mean_temperature()
deltaT = np.max(ebm.Ts) - np.min(ebm.Ts)
HT = ebm.heat_transport()
#Hmax = np.max(np.abs(HT))
ind = np.where(ebm.lat_bounds==35.5)[0]
Hmax = HT[ind]
model_list.append(ebm)
Tmean_list.append(Tmean)
deltaT_list.append(deltaT)
Hmax_list.append(Hmax)
color1 = 'b'
color2 = 'r'
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(Darray, deltaT_list, color=color1, label='$\Delta T$')
ax1.plot(Darray, Tmean_list, '--', color=color1, label='$\overline{T}$')
ax1.set_xlabel('D (W m$^{-2}$ K$^{-1}$)', fontsize=14)
ax1.set_xticks(np.arange(Darray[0], Darray[-1], 0.2))
ax1.set_ylabel('Temperature ($^\circ$C)', fontsize=14, color=color1)
for tl in ax1.get_yticklabels():
tl.set_color(color1)
ax1.legend(loc='center right')
ax2 = ax1.twinx()
ax2.plot(Darray, Hmax_list, color=color2)
ax2.set_ylabel('Poleward heat transport across 35.5$^\circ$ (PW)', fontsize=14, color=color2)
for tl in ax2.get_yticklabels():
tl.set_color(color2)
ax1.set_title('Effect of diffusivity on EBM with albedo feedback', fontsize=16)
ax1.grid()
```
____________
<a id='section4'></a>
## 4. Diffusive response to a point source of energy
____________
Let's add a point heat source to the EBM and see what sets the spatial structure of the response.
We will add a heat source at about 45º latitude.
First, we will calculate the response in a model **without albedo feedback**.
```
param_noalb = {'A': 210, 'B': 2, 'D': 0.55, 'Tf': -10.0, 'a0': 0.3, 'a2': 0.078}
m1 = climlab.EBM_annual(num_lat=180, **param_noalb)
print(m1)
m1.integrate_years(5.)
m2 = climlab.process_like(m1)
point_source = climlab.process.energy_budget.ExternalEnergySource(state=m2.state)
ind = np.where(m2.lat == 45.5)
point_source.heating_rate['Ts'][ind] = 100.
m2.add_subprocess('point source', point_source)
print(m2)
m2.integrate_years(5.)
plt.plot(m2.lat, m2.Ts - m1.Ts)
plt.xlim(-90,90)
plt.grid()
```
The warming effects of our point source are felt **at all latitudes** but the effects decay away from the heat source.
Some analysis will show that the length scale of the warming is proportional to
$$ \sqrt{\frac{D}{B}} $$
so increases with the diffusivity.
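As a rough back-of-the-envelope check for the parameters used in this notebook (treating $\sqrt{D/B}$ as a length in radians of latitude, which is an assumption made here purely for illustration since $D$ and $B$ share the same units):
```
# Rough estimate of the diffusive length scale for D = 0.55, B = 2 W m-2 K-1.
import numpy as np
D, B = 0.55, 2.
length_scale = np.sqrt(D / B)      # dimensionless, read here as radians of latitude
print(np.rad2deg(length_scale))    # roughly 30 degrees of latitude
```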
Now repeat this calculation **with ice albedo feedback**.
```
m3 = climlab.EBM_annual(num_lat=180, **param)
m3.integrate_years(5.)
m4 = climlab.process_like(m3)
point_source = climlab.process.energy_budget.ExternalEnergySource(state=m4.state)
point_source.heating_rate['Ts'][ind] = 100.
m4.add_subprocess('point source', point_source)
m4.integrate_years(5.)
plt.plot(m4.lat, m4.Ts - m3.Ts)
plt.xlim(-90,90)
plt.grid()
```
Now the maximum warming **does not coincide with the heat source at 45º**!
Our heat source has led to melting of snow and ice, which induces an additional heat source in the high northern latitudes.
**Heat transport communicates the external warming to the ice cap, and also communicates the increased shortwave absorption due to ice melt globally!**
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
____________
## Credits
The author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php), offered in Spring 2015.
____________
____________
## Version information
____________
```
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, climlab
```
# Partial Dependence Plot
## Summary
Partial dependence plots visualize the dependence between the response and a set of target features (usually one or two), marginalizing over all the other features. For a perturbation-based interpretability method, it is relatively quick. PDP assumes independence between the features, and its interpretation can be misleading when this assumption is not met (e.g. when the model has many high-order interactions).
## How it Works
The PDP module for `scikit-learn` {cite}`pedregosa2011scikit` provides a succinct description of the algorithm [here](https://scikit-learn.org/stable/modules/partial_dependence.html).
Christoph Molnar's "Interpretable Machine Learning" e-book {cite}`molnar2020interpretable` has an excellent overview on partial dependence that can be found [here](https://christophm.github.io/interpretable-ml-book/pdp.html).
The conceiving paper "Greedy Function Approximation: A Gradient Boosting Machine" {cite}`friedman2001greedy` provides a good motivation and definition.
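For intuition, a minimal hand-rolled version of the single-feature algorithm might look like the sketch below. The function name and signature are ours for illustration and are not part of `interpret` or `scikit-learn`; `model` stands for any fitted estimator with `predict_proba` and `X` for a NumPy feature matrix.
```
import numpy as np

def partial_dependence_1d(model, X, feature_idx, grid_resolution=20):
    """Average the model output over the data while sweeping one feature across a grid."""
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), grid_resolution)
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value                      # force the feature to this grid value
        averaged.append(model.predict_proba(X_mod)[:, 1].mean())  # marginalize over the other features
    return grid, np.array(averaged)
```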
## Code Example
The following code will train a blackbox pipeline for the breast cancer dataset. Afterwards it will interpret the pipeline and its decisions with Partial Dependence Plots. The visualizations provided will be for global explanations.
```
from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from interpret import show
from interpret.blackbox import PartialDependence
seed = 1
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
pca = PCA()
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
blackbox_model = Pipeline([('pca', pca), ('rf', rf)])
blackbox_model.fit(X_train, y_train)
pdp = PartialDependence(predict_fn=blackbox_model.predict_proba, data=X_train)
pdp_global = pdp.explain_global()
show(pdp_global)
```
## Further Resources
- [Paper link to conceiving paper](https://projecteuclid.org/download/pdf_1/euclid.aos/1013203451)
- [scikit-learn on their PDP module](https://scikit-learn.org/stable/modules/partial_dependence.html)
## Bibliography
```{bibliography} references.bib
:style: unsrt
:filter: docname in docnames
```
## API
### PartialDependence
```{eval-rst}
.. autoclass:: interpret.blackbox.PartialDependence
:members:
:inherited-members:
```
# Training on Multiple GPUs
:label:`sec_multi_gpu`
So far we discussed how to train models efficiently on CPUs and GPUs. We even showed how deep learning frameworks allow one to parallelize computation and communication automatically between them in :numref:`sec_auto_para`. We also showed in :numref:`sec_use_gpu` how to list all the available GPUs on a computer using the `nvidia-smi` command.
What we did *not* discuss is how to actually parallelize deep learning training.
Instead, we implied in passing that one would somehow split the data across multiple devices and make it work. The present section fills in the details and shows how to train a network in parallel when starting from scratch. Details on how to take advantage of functionality in high-level APIs is relegated to :numref:`sec_multi_gpu_concise`.
We assume that you are familiar with minibatch stochastic gradient descent algorithms such as the ones described in :numref:`sec_minibatch_sgd`.
## Splitting the Problem
Let us start with a simple computer vision problem and a slightly archaic network, e.g., with multiple layers of convolutions, pooling, and possibly a few fully-connected layers in the end.
That is, let us start with a network that looks quite similar to LeNet :cite:`LeCun.Bottou.Bengio.ea.1998` or AlexNet :cite:`Krizhevsky.Sutskever.Hinton.2012`.
Given multiple GPUs (2 if it is a desktop server, 4 on an AWS g4dn.12xlarge instance, 8 on a p3.16xlarge, or 16 on a p2.16xlarge), we want to partition training in a manner as to achieve good speedup while simultaneously benefitting from simple and reproducible design choices. Multiple GPUs, after all, increase both *memory* and *computation* ability. In a nutshell, we have the following choices, given a minibatch of training data that we want to classify.
First, we could partition the network across multiple GPUs. That is, each GPU takes as input the data flowing into a particular layer, processes data across a number of subsequent layers and then sends the data to the next GPU.
This allows us to process data with larger networks when compared with what a single GPU could handle.
Besides,
memory footprint per GPU can be well controlled (it is a fraction of the total network footprint).
However, the interface between layers (and thus GPUs) requires tight synchronization. This can be tricky, in particular if the computational workloads are not properly matched between layers. The problem is exacerbated for large numbers of GPUs.
The interface between layers also
requires large amounts of data transfer,
such as activations and gradients.
This may overwhelm the bandwidth of the GPU buses.
Moreover, compute-intensive, yet sequential operations are nontrivial to partition. See e.g., :cite:`Mirhoseini.Pham.Le.ea.2017` for a best effort in this regard. It remains a difficult problem and it is unclear whether it is possible to achieve good (linear) scaling on nontrivial problems. We do not recommend it unless there is excellent framework or operating system support for chaining together multiple GPUs.
Second, we could split the work layerwise. For instance, rather than computing 64 channels on a single GPU we could split up the problem across 4 GPUs, each of which generates data for 16 channels.
Likewise, for a fully-connected layer we could split the number of output units.
:numref:`fig_alexnet_original` (taken from :cite:`Krizhevsky.Sutskever.Hinton.2012`)
illustrates this design, where this strategy was used to deal with GPUs that had a very small memory footprint (2 GB at the time).
This allows for good scaling in terms of computation, provided that the number of channels (or units) is not too small.
Besides,
multiple GPUs can process increasingly larger networks since the available memory scales linearly.

:label:`fig_alexnet_original`
However,
we need a *very large* number of synchronization or barrier operations since each layer depends on the results from all the other layers.
Moreover, the amount of data that needs to be transferred is potentially even larger than when distributing layers across GPUs. Thus, we do not recommend this approach due to its bandwidth cost and complexity.
Last, we could partition data across multiple GPUs. This way all GPUs perform the same type of work, albeit on different observations. Gradients are aggregated across GPUs after each minibatch of training data.
This is the simplest approach and it can be applied in any situation.
We only need to synchronize after each minibatch. That said, it is highly desirable to start exchanging the gradients of some parameters while others are still being computed.
Moreover, larger numbers of GPUs lead to larger minibatch sizes, thus increasing training efficiency.
However, adding more GPUs does not allow us to train larger models.

:label:`fig_splitting`
A comparison of different ways of parallelization on multiple GPUs is depicted in :numref:`fig_splitting`.
By and large, data parallelism is the most convenient way to proceed, provided that we have access to GPUs with sufficiently large memory. See also :cite:`Li.Andersen.Park.ea.2014` for a detailed description of partitioning for distributed training. GPU memory used to be a problem in the early days of deep learning. By now this issue has been resolved for all but the most unusual cases. We focus on data parallelism in what follows.
## Data Parallelism
Assume that there are $k$ GPUs on a machine. Given the model to be trained, each GPU will maintain a complete set of model parameters independently though parameter values across the GPUs are identical and synchronized.
As an example,
:numref:`fig_data_parallel` illustrates
training with
data parallelism when $k=2$.

:label:`fig_data_parallel`
In general, the training proceeds as follows:
* In any iteration of training, given a random minibatch, we split the examples in the batch into $k$ portions and distribute them evenly across the GPUs.
* Each GPU calculates loss and gradient of the model parameters based on the minibatch subset it was assigned.
* The local gradients of each of the $k$ GPUs are aggregated to obtain the current minibatch stochastic gradient.
* The aggregate gradient is re-distributed to each GPU.
* Each GPU uses this minibatch stochastic gradient to update the complete set of model parameters that it maintains.
Note that in practice we *increase* the minibatch size $k$-fold when training on $k$ GPUs such that each GPU has the same amount of work to do as if we were training on a single GPU only. On a 16-GPU server this can increase the minibatch size considerably and we may have to increase the learning rate accordingly.
Also note that batch normalization in :numref:`sec_batch_norm` needs to be adjusted, e.g., by keeping a separate batch normalization coefficient per GPU.
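For the minibatch and learning-rate scaling noted above, one common heuristic (stated here as an assumption for illustration, not derived in this section) is to scale both linearly with the number of GPUs:
```
# Linear scaling heuristic: keep the per-GPU workload constant by growing the
# global minibatch with the number of GPUs, and scale the learning rate with it.
def scaled_hyperparams(base_batch_size, base_lr, num_gpus):
    return base_batch_size * num_gpus, base_lr * num_gpus

global_batch_size, scaled_lr = scaled_hyperparams(base_batch_size=256, base_lr=0.2, num_gpus=2)
```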
In what follows we will use a toy network to illustrate multi-GPU training.
```
%matplotlib inline
import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
```
## [**A Toy Network**]
We use LeNet as introduced in :numref:`sec_lenet` (with slight modifications). We define it from scratch to illustrate parameter exchange and synchronization in detail.
```
# Initialize model parameters
scale = 0.01
W1 = torch.randn(size=(20, 1, 3, 3)) * scale
b1 = torch.zeros(20)
W2 = torch.randn(size=(50, 20, 5, 5)) * scale
b2 = torch.zeros(50)
W3 = torch.randn(size=(800, 128)) * scale
b3 = torch.zeros(128)
W4 = torch.randn(size=(128, 10)) * scale
b4 = torch.zeros(10)
params = [W1, b1, W2, b2, W3, b3, W4, b4]
# Define the model
def lenet(X, params):
h1_conv = F.conv2d(input=X, weight=params[0], bias=params[1])
h1_activation = F.relu(h1_conv)
h1 = F.avg_pool2d(input=h1_activation, kernel_size=(2, 2), stride=(2, 2))
h2_conv = F.conv2d(input=h1, weight=params[2], bias=params[3])
h2_activation = F.relu(h2_conv)
h2 = F.avg_pool2d(input=h2_activation, kernel_size=(2, 2), stride=(2, 2))
h2 = h2.reshape(h2.shape[0], -1)
h3_linear = torch.mm(h2, params[4]) + params[5]
h3 = F.relu(h3_linear)
y_hat = torch.mm(h3, params[6]) + params[7]
return y_hat
# Cross-entropy loss function
loss = nn.CrossEntropyLoss(reduction='none')
```
## Data Synchronization
For efficient multi-GPU training we need two basic operations.
First we need to have the ability to [**distribute a list of parameters to multiple devices**] and to attach gradients (`get_params`). Without parameters it is impossible to evaluate the network on a GPU.
Second, we need the ability to sum parameters across multiple devices, i.e., we need an `allreduce` function.
```
def get_params(params, device):
new_params = [p.to(device) for p in params]
for p in new_params:
p.requires_grad_()
return new_params
```
Let us try it out by copying the model parameters to one GPU.
```
new_params = get_params(params, d2l.try_gpu(0))
print('b1 weight:', new_params[1])
print('b1 grad:', new_params[1].grad)
```
Since we did not perform any computation yet, the gradient with regard to the bias parameter is still zero.
Now let us assume that we have a vector distributed across multiple GPUs. The following [**`allreduce` function adds up all vectors and broadcasts the result back to all GPUs**]. Note that for this to work we need to copy the data to the device accumulating the results.
```
def allreduce(data):
for i in range(1, len(data)):
data[0][:] += data[i].to(data[0].device)
for i in range(1, len(data)):
data[i][:] = data[0].to(data[i].device)
```
Let us test this by creating vectors with different values on different devices and aggregate them.
```
data = [torch.ones((1, 2), device=d2l.try_gpu(i)) * (i + 1) for i in range(2)]
print('before allreduce:\n', data[0], '\n', data[1])
allreduce(data)
print('after allreduce:\n', data[0], '\n', data[1])
```
## Distributing Data
We need a simple utility function to [**distribute a minibatch evenly across multiple GPUs**]. For instance, on two GPUs we would like to have half of the data to be copied to either of the GPUs.
Since it is more convenient and more concise, we use the built-in function from the deep learning framework to try it out on a $4 \times 5$ matrix.
```
data = torch.arange(20).reshape(4, 5)
devices = [torch.device('cuda:0'), torch.device('cuda:1')]
split = nn.parallel.scatter(data, devices)
print('input :', data)
print('load into', devices)
print('output:', split)
```
For later reuse we define a `split_batch` function that splits both data and labels.
```
#@save
def split_batch(X, y, devices):
"""Split `X` and `y` into multiple devices."""
assert X.shape[0] == y.shape[0]
return (nn.parallel.scatter(X, devices),
nn.parallel.scatter(y, devices))
```
## Training
Now we can implement [**multi-GPU training on a single minibatch**]. Its implementation is primarily based on the data parallelism approach described in this section. We will use the auxiliary functions we just discussed, `allreduce` and `split_batch`, to synchronize the data among multiple GPUs. Note that we do not need to write any specific code to achieve parallelism. Since the computational graph does not have any dependencies across devices within a minibatch, it is executed in parallel *automatically*.
```
def train_batch(X, y, device_params, devices, lr):
X_shards, y_shards = split_batch(X, y, devices)
# Loss is calculated separately on each GPU
ls = [loss(lenet(X_shard, device_W), y_shard).sum()
for X_shard, y_shard, device_W in zip(
X_shards, y_shards, device_params)]
for l in ls: # Backpropagation is performed separately on each GPU
l.backward()
# Sum all gradients from each GPU and broadcast them to all GPUs
with torch.no_grad():
for i in range(len(device_params[0])):
allreduce([device_params[c][i].grad for c in range(len(devices))])
# The model parameters are updated separately on each GPU
for param in device_params:
d2l.sgd(param, lr, X.shape[0]) # Here, we use a full-size batch
```
Now, we can define [**the training function**]. It is slightly different from the ones used in the previous chapters: we need to allocate the GPUs and copy all the model parameters to all the devices.
Obviously each batch is processed using the `train_batch` function to deal with multiple GPUs. For convenience (and conciseness of code) we compute the accuracy on a single GPU, though this is *inefficient* since the other GPUs are idle.
```
def train(num_gpus, batch_size, lr):
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
devices = [d2l.try_gpu(i) for i in range(num_gpus)]
# Copy model parameters to `num_gpus` GPUs
device_params = [get_params(params, d) for d in devices]
num_epochs = 10
animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])
timer = d2l.Timer()
for epoch in range(num_epochs):
timer.start()
for X, y in train_iter:
# Perform multi-GPU training for a single minibatch
train_batch(X, y, device_params, devices, lr)
torch.cuda.synchronize()
timer.stop()
# Evaluate the model on GPU 0
animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(
lambda x: lenet(x, device_params[0]), test_iter, devices[0]),))
print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '
f'on {str(devices)}')
```
Let us see how well this works [**on a single GPU**].
We first use a batch size of 256 and a learning rate of 0.2.
```
train(num_gpus=1, batch_size=256, lr=0.2)
```
By keeping the batch size and learning rate unchanged and [**increasing the number of GPUs to 2**], we can see that the test accuracy roughly stays the same compared with
the previous experiment.
In terms of the optimization algorithms, they are identical. Unfortunately there is no meaningful speedup to be gained here: the model is simply too small; moreover we only have a small dataset, where our slightly unsophisticated approach to implementing multi-GPU training suffered from significant Python overhead. We will encounter more complex models and more sophisticated ways of parallelization going forward.
Let us see what happens nonetheless for Fashion-MNIST.
```
train(num_gpus=2, batch_size=256, lr=0.2)
```
## Summary
* There are multiple ways to split deep network training over multiple GPUs. We could split them between layers, across layers, or across data. The former two require tightly choreographed data transfers. Data parallelism is the simplest strategy.
* Data parallel training is straightforward. However, it increases the effective minibatch size to be efficient.
* In data parallelism, data are split across multiple GPUs, where each GPU executes its own forward and backward operation and subsequently gradients are aggregated and results are broadcast back to the GPUs.
* We may use slightly increased learning rates for larger minibatches.
## Exercises
1. When training on $k$ GPUs, change the minibatch size from $b$ to $k \cdot b$, i.e., scale it up by the number of GPUs.
1. Compare accuracy for different learning rates. How does it scale with the number of GPUs?
1. Implement a more efficient `allreduce` function that aggregates different parameters on different GPUs. Why is it more efficient?
1. Implement multi-GPU test accuracy computation.
[Discussions](https://discuss.d2l.ai/t/1669)
1. Split into train and test data
2. Train model on train data normally
3. Take test data and duplicate into test prime
4. Drop first visit from test prime data
5. Get predicted delta from test prime data. Compare to delta from test data. We know the difference (epsilon) because we dropped actual visits. What percent of time is test delta < test prime delta?
6. Restrict it only to patients with a lot of visits. Is this better?
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pickle
def clean_plot():
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.grid()
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'x-large',
# 'figure.figsize': (10,6),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
pylab.rcParams.update(params)
import sys
import torch
sys.path.append('../data')
from load import chf
from data_utils import parse_data
from synthetic_data import load_piecewise_synthetic_data
sys.path.append('../model')
from models import Sublign
from run_experiments import get_hyperparameters
import copy
def make_test_prime(test_data_dict_raw, gap=1):
    # drop the first `gap` visits from each patient
    test_data_dict = copy.deepcopy(test_data_dict_raw)
eps_lst = list()
X = test_data_dict['obs_t_collect']
Y = test_data_dict['Y_collect']
M = test_data_dict['mask_collect']
N_patients = X.shape[0]
N_visits = X.shape[1]
for i in range(N_patients):
eps_i = X[i,1,0] - X[i,0,0]
first_visit = X[i,1,0]
# move all visits down (essentially destroying the first visit)
for j in range(N_visits-gap):
X[i,j,0] = X[i,j+gap,0] - first_visit
Y[i,j,:] = Y[i,j+gap,:]
M[i,j,:] = M[i,j+gap,:]
for g in range(1,gap+1):
X[i,N_visits-g,0] = int(-1000)
Y[i,N_visits-g,:] = int(-1000)
M[i,N_visits-g,:] = 0.
eps_lst.append(eps_i)
return test_data_dict, eps_lst
data = chf()
max_visits = 38
shuffle = True
num_output_dims = data.shape[1] - 4
data_loader, collect_dict, unique_pid = parse_data(data.values, max_visits=max_visits)
train_data_loader, train_data_dict, test_data_loader, test_data_dict, test_pid, unique_pid = parse_data(data.values,
max_visits=max_visits, test_per=0.2,
shuffle=shuffle)
# model = Sublign(10, 20, 50, dim_biomarkers=num_output_dims, sigmoid=True, reg_type='l1', auto_delta=True,
# max_delta=5, learn_time=True, device=torch.device('cuda'))
# # model.fit(data_loader, data_loader, args.epochs, 0.01, verbose=args.verbose,fname='runs/chf.pt',eval_freq=25)
# fname='../model/chf_good.pt'
# model.load_state_dict(torch.load(fname,map_location=torch.device('cuda')))
test_p_data_dict, eps_lst = make_test_prime(test_data_dict, gap=1)
# test_deltas = model.get_deltas(test_data_dict).detach().numpy()
# test_p_deltas = model.get_deltas(test_p_data_dict).detach().numpy()
print(num_output_dims)
# def make_test_prime(test_data_dict_raw, drop_first_T=1.):
drop_first_T = 0.5
# drop visits that occur within the first drop_first_T years
test_data_dict_new = copy.deepcopy(test_data_dict)
eps_lst = list()
X = test_data_dict_new['obs_t_collect']
Y = test_data_dict_new['Y_collect']
M = test_data_dict_new['mask_collect']
N_patients = X.shape[0]
N_visits = X.shape[1]
remove_idx = list()
X[X == -1000] = np.nan
for i in range(N_patients):
N_visits_under_thresh = (X[i] < 0.5).sum()
gap = N_visits_under_thresh
first_valid_visit = X[i,N_visits_under_thresh,0]
eps_i = X[i,N_visits_under_thresh,0]
for j in range(N_visits-N_visits_under_thresh):
X[i,j,0] = X[i,j+gap,0] - first_valid_visit
Y[i,j,:] = Y[i,j+gap,:]
M[i,j,:] = M[i,j+gap,:]
for g in range(1,N_visits_under_thresh+1):
X[i,N_visits-g,0] = np.nan
Y[i,N_visits-g,:] = np.nan
M[i,N_visits-g,:] = 0.
if np.isnan(X[i]).all():
remove_idx.append(i)
else:
eps_lst.append(eps_i)
keep_idx = [i for i in range(N_patients) if i not in remove_idx]
X = X[keep_idx]
Y = Y[keep_idx]
M = M[keep_idx]
print('Removed %d entries' % len(remove_idx))
X[np.isnan(X)] = -1000
# eps_lst.append(eps_i)
# return test_data_dict_new, eps_lst
eps_lst
X[0]
first_valid_visit
test_data_dict_new = copy.deepcopy(test_data_dict)
X = test_data_dict_new['obs_t_collect']
Y = test_data_dict_new['Y_collect']
M = test_data_dict_new['mask_collect']
X[X == -1000] = np.nan
i = 1
N_visits_under_thresh = (X[i] < 0.5).sum()
# for j in range(N_visits-N_visits_under_thresh):
# X[i,j,0] = X[i,j+gap,0] - first_visit
# Y[i,j,:] = Y[i,j+gap,:]
# M[i,j,:] = M[i,j+gap,:]
# for g in range(1,N_visits_under_thresh+1):
# X[i,N_visits-g,0] = np.nan
# Y[i,N_visits-g,:] = np.nan
# M[i,N_visits-g,:] = 0.
# if np.isnan(X[i]).all():
# print('yes')
# remove_idx.append(i)
(X[1] < 0.5).sum()
N_visits_under_thresh
N_visits_under_thresh
len(remove_idx)
X[X == -1000] = np.nan
for i in range(10):
print(X[i].flatten())
remove_idx
X[0][:10]
plt.hist(X.flatten())
X.max()
Y[1][:10]
test_data_dict_new['']
f = open('chf_experiment_results.pk', 'rb')
results = pickle.load(f)
test_deltas = results['test_deltas']
test_p_deltas = results['test_p_deltas']
eps_lst = results['eps_lst']
test_data_dict = results['test_data_dict']
f.close()
test_data_dict['obs_t_collect'][0].shape
# get num of visits per patient
num_visits_patient_lst = list()
for i in test_data_dict['obs_t_collect']:
num_visits = (i!=-1000).sum()
num_visits_patient_lst.append(num_visits)
num_visits_patient_lst = np.array(num_visits_patient_lst)
freq_visit_idx = np.where(num_visits_patient_lst > 10)[0]
test_p_deltas[freq_visit_idx]
test_deltas[freq_visit_idx]
np.mean(np.array(test_p_deltas - test_deltas) > 0)
test_p_deltas[:20]
clean_plot()
plt.plot(eps_lst, test_p_deltas - test_deltas, '.')
plt.xlabel('Actual eps')
plt.ylabel('Estimated eps')
# plt.savefig('')
import copy
def make_test_prime(test_data_dict_raw, gap=1):
test_data_dict = copy.deepcopy(test_data_dict_raw)
eps_lst = list()
X = test_data_dict['obs_t_collect']
Y = test_data_dict['Y_collect']
M = test_data_dict['mask_collect']
N_patients = X.shape[0]
N_visits = X.shape[1]
for i in range(N_patients):
eps_i = X[i,1,0] - X[i,0,0]
first_visit = X[i,1,0]
# move all visits down (essentially destroying the first visit)
for j in range(N_visits-gap):
X[i,j,0] = X[i,j+gap,0] - first_visit
Y[i,j,:] = Y[i,j+gap,:]
M[i,j,:] = M[i,j+gap,:]
for g in range(1,gap+1):
X[i,N_visits-g,0] = int(-1000)
Y[i,N_visits-g,:] = int(-1000)
M[i,N_visits-g,:] = 0.
eps_lst.append(eps_i)
return test_data_dict, eps_lst
t_prime_dict, eps_lst = make_test_prime(test_data_dict)
t_prime_dict['Y_collect'][1,:,0]
test_data_dict['Y_collect'][1,:,0]
```
## Plot successful model
```
import argparse
import numpy as np
import pickle
import sys
import torch
import copy
from scipy.stats import pearsonr
import matplotlib.pyplot as plt
from run_experiments import get_hyperparameters
from models import Sublign
sys.path.append('../data')
from data_utils import parse_data
from load import load_data_format
sys.path.append('../evaluation')
from eval_utils import swap_metrics
train_data_dict['Y_collect'].shape
train_data_dict['t_collect'].shape
new_Y = np.zeros((600,101,3))
val_idx_dict = {'%.1f' % j: i for i,j in enumerate(np.linspace(0,10,101))}
train_data_dict['obs_t_collect'].max()
rounded_t = np.round(train_data_dict['t_collect'],1)
N, M, _ = rounded_t.shape
for i in range(N):
for j in range(M):
val = rounded_t[i,j,0]
# try:
idx = val_idx_dict['%.1f' % val]
for k in range(3):
new_Y[i,idx,k] = train_data_dict['Y_collect'][i,j,k]
# except:
# print(val)
new_Y.shape
(new_Y == 0).sum() / (600*101*3)
# save the files for comparing against SPARTan baseline
for i in range(3):
a = new_Y[:,:,i]
    np.savetxt("data1_dim%d.csv" % i, a, delimiter=",")
from sklearn.metrics import adjusted_rand_score
true_labels = train_data_dict['s_collect'][:,0]
guess_labels = np.ones(600)
adjusted_rand_score(true_labels, guess_labels)
# a.shape
data_format_num = 1
# C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
anneal, b_vae, C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
C
data = load_data_format(data_format_num, 0, cache=True)
train_data_loader, train_data_dict, _, _, test_data_loader, test_data_dict, valid_pid, test_pid, unique_pid = parse_data(data.values, max_visits=4, test_per=0.2, valid_per=0.2, shuffle=False)
model = Sublign(d_s, d_h, d_rnn, dim_biomarkers=3, sigmoid=True, reg_type='l1', auto_delta=False, max_delta=0, learn_time=False, beta=0.00)
model.fit(train_data_loader, test_data_loader, 800, lr, fname='runs/data%d_chf_experiment.pt' % (data_format_num), eval_freq=25)
z = model.get_mu(train_data_dict['obs_t_collect'], train_data_dict['Y_collect'])
# fname='runs/data%d_chf_experiment.pt' % (data_format_num)
# model.load_state_dict(torch.load(fname))
nolign_results = model.score(train_data_dict, test_data_dict)
print('ARI: %.3f' % nolign_results['ari'])
print(anneal, b_vae, C, d_s, d_h, d_rnn, reg_type, lr)
data_format_num = 1
# C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
anneal, b_vae, C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
model = Sublign(d_s, d_h, d_rnn, dim_biomarkers=3, sigmoid=True, reg_type='l1', auto_delta=True, max_delta=5, learn_time=True, beta=0.01)
model.fit(train_data_loader, test_data_loader, 800, lr, fname='runs/data%d.pt' % (data_format_num), eval_freq=25)
z = model.get_mu(train_data_dict['obs_t_collect'], train_data_dict['Y_collect'])
# fname='runs/data%d_chf_experiment.pt' % (data_format_num)
# model.load_state_dict(torch.load(fname))
results = model.score(train_data_dict, test_data_dict)
print('ARI: %.3f' % results['ari'])
# model = Sublign(d_s, d_h, d_rnn, dim_biomarkers=3, sigmoid=True, reg_type='l1', auto_delta=True, max_delta=5, learn_time=True, b_vae=0.)
# model.fit(train_data_loader, test_data_loader, 800, lr, fname='runs/data%d_chf_experiment.pt' % (data_format_num), eval_freq=25)
# z = model.get_mu(train_data_dict['obs_t_collect'], train_data_dict['Y_collect'])
# # fname='runs/data%d_chf_experiment.pt' % (data_format_num)
# # model.load_state_dict(torch.load(fname))
# results = model.score(train_data_dict, test_data_dict)
# print('ARI: %.3f' % results['ari'])
# Visualize latent space (change configs above)
X = test_data_dict['obs_t_collect']
Y = test_data_dict['Y_collect']
M = test_data_dict['mask_collect']
test_z, _ = model.get_mu(X,Y)
test_z = test_z.detach().numpy()
test_subtypes = test_data_dict['s_collect']
from sklearn.manifold import TSNE
z_tSNE = TSNE(n_components=2).fit_transform(test_z)
test_s0_idx = np.where(test_subtypes==0)[0]
test_s1_idx = np.where(test_subtypes==1)[0]
clean_plot()
plt.plot(z_tSNE[test_s0_idx,0],z_tSNE[test_s0_idx,1],'.')
plt.plot(z_tSNE[test_s1_idx,0],z_tSNE[test_s1_idx,1],'.')
# plt.title('\nNELBO (down): %.3f, ARI (up): %.3f\n Config: %s\nColors = true subtypes' %
# (nelbo, ari, configs))
plt.show()
def sigmoid_f(x, beta0, beta1):
result = 1. / (1+np.exp(-(beta0 + beta1*x)))
return result
true_betas = [[[-4, 1],
[-1,1.],
[-8,8]
],
[
[-1,1.],
[-8,8],
[-25, 3.5]
]]
# xs = np.linspace(0,10,100)
for dim_i in range(3):
xs = np.linspace(0,10,100)
plt.figure()
clean_plot()
plt.grid(True)
ys = [sigmoid_f(xs_i, true_betas[0][dim_i][0], true_betas[0][dim_i][1]) for xs_i in xs]
plt.plot(xs,ys, ':', color='gray', linewidth=5, label='True function')
ys = [sigmoid_f(xs_i, true_betas[1][dim_i][0], true_betas[1][dim_i][1]) for xs_i in xs]
plt.plot(xs,ys, ':', color='gray', linewidth=5)
for subtype_j in range(2):
xs = np.linspace(0,10,100)
ys = [sigmoid_f(xs_i, nolign_results['cent_lst'][subtype_j,dim_i,0],
nolign_results['cent_lst'][subtype_j,dim_i,1]) for xs_i in xs]
if subtype_j == 0:
plt.plot(xs,ys,linewidth=4, label='SubNoLign subtype', linestyle='-.', color='tab:green')
else:
plt.plot(xs,ys,linewidth=4, linestyle='--', color='tab:green')
ys = [sigmoid_f(xs_i, results['cent_lst'][subtype_j,dim_i,0],
results['cent_lst'][subtype_j,dim_i,1]) for xs_i in xs]
if subtype_j == 0:
plt.plot(xs,ys,linewidth=4, label='SubLign subtype', linestyle='-', color='tab:purple')
else:
plt.plot(xs,ys,linewidth=4, linestyle='-', color='tab:purple')
plt.xlabel('Disease stage')
plt.ylabel('Biomarker')
plt.legend()
plt.savefig('subnolign_data1_subtypes_dim%d.pdf' % dim_i, bbox_inches='tight')
# # number dimensions
# fig, axs = plt.subplots(1,3, figsize=(8,4))
# for dim_i in range(3):
# ax = axs[dim_i]
# # number subtypes
# for subtype_j in range(2):
# xs = np.linspace(0,10,100)
# ys = [sigmoid_f(xs_i, model1_results['cent_lst'][subtype_j,dim_i,0],
# model1_results['cent_lst'][subtype_j,dim_i,1]) for xs_i in xs]
# ax.plot(xs,ys)
# ys = [sigmoid_f(xs_i, true_betas[0][dim_i][0], true_betas[0][dim_i][1]) for xs_i in xs]
# ax.plot(xs,ys, color='gray')
# ys = [sigmoid_f(xs_i, true_betas[1][dim_i][0], true_betas[1][dim_i][1]) for xs_i in xs]
# ax.plot(xs,ys, color='gray')
# fig.suptitle('True data generating function (gray), learned models (orange, blue)')
# plt.savefig('learned_models.pdf',bbox_inches='tight')
```
## Plot CHF Delta distributions
```
data = pickle.load(open('../clinical_runs/chf_v3_1000.pk', 'rb'))
clean_plot()
plt.hist(data['deltas'], bins=20)
plt.xlabel('Inferred Alignment $\delta_i$ Value')
plt.ylabel('Number Heart Failure Patients')
plt.savefig('Delta_dist_chf.pdf', bbox_inches='tight')
```
## Make piecewise data to measure model misspecification
```
from scipy import interpolate
x = np.arange(0, 2*np.pi+np.pi/4, 2*np.pi/8)
y = np.sin(x)
tck = interpolate.splrep(x, y, s=0)
xnew = np.arange(0, 2*np.pi, np.pi/50)
ynew = interpolate.splev(xnew, tck, der=0)
xvals = np.array([9.3578453 , 4.9814664 , 7.86530539, 8.91318433, 2.00779188])[sort_idx]
yvals = np.array([0.35722491, 0.12512101, 0.20054626, 0.38183604, 0.58836923])[sort_idx]
tck = interpolate.splrep(xvals, yvals, s=0)
y
N_subtypes,D,N_pts,_ = subtype_points.shape
fig, axes = plt.subplots(ncols=3,nrows=1)
for d, ax in enumerate(axes.flat):
# ax.set_xlim(0,10)
# ax.set_ylim(0,1)
for k in range(N_subtypes):
xs = subtype_points[k,d,:,0]
ys = subtype_points[k,d,:,1]
sort_idx = np.argsort(xs)
ax.plot(xs[sort_idx],ys[sort_idx])
plt.show()
# for d in range(D):
%%time
N_epochs = 800
N_trials = 5
use_sigmoid = True
sublign_results = {
'ari':[],
'pear': [],
'swaps': []
}
subnolign_results = {'ari': []}
for trial in range(N_trials):
data_format_num = 1
# C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
anneal, b_vae, C, d_s, d_h, d_rnn, reg_type, lr = get_hyperparameters(data_format_num)
# C
# data = load_data_format(data_format_num, 0, cache=True)
use_sigmoid = False
data, subtype_points = load_piecewise_synthetic_data(subtypes=2, increasing=use_sigmoid,
D=3, N=2000,M=4, noise=0.25, N_pts=5)
train_data_loader, train_data_dict, _, _, test_data_loader, test_data_dict, valid_pid, test_pid, unique_pid = parse_data(data.values, max_visits=4, test_per=0.2, valid_per=0.2, shuffle=False)
model = Sublign(d_s, d_h, d_rnn, dim_biomarkers=3, sigmoid=use_sigmoid, reg_type='l1',
auto_delta=False, max_delta=5, learn_time=True, beta=1.)
model.fit(train_data_loader, test_data_loader, N_epochs, lr, fname='runs/data%d_spline.pt' % (data_format_num), eval_freq=25)
# z = model.get_mu(train_data_dict['obs_t_collect'], train_data_dict['Y_collect'])
# fname='runs/data%d_chf_experiment.pt' % (data_format_num)
# model.load_state_dict(torch.load(fname))
results = model.score(train_data_dict, test_data_dict)
print('Sublign results: ARI: %.3f; Pear: %.3f; Swaps: %.3f' % (results['ari'],results['pear'],results['swaps']))
sublign_results['ari'].append(results['ari'])
sublign_results['pear'].append(results['pear'])
sublign_results['swaps'].append(results['swaps'])
model = Sublign(d_s, d_h, d_rnn, dim_biomarkers=3, sigmoid=use_sigmoid, reg_type='l1',
auto_delta=False, max_delta=0, learn_time=False, beta=1.)
model.fit(train_data_loader, test_data_loader, N_epochs, lr, fname='runs/data%d_spline.pt' % (data_format_num), eval_freq=25)
nolign_results = model.score(train_data_dict, test_data_dict)
print('SubNoLign results: ARI: %.3f' % (nolign_results['ari']))
subnolign_results['ari'].append(nolign_results['ari'])
data_str = 'Increasing' if use_sigmoid else 'Any'
print('SubLign-%s & %.2f $\\pm$ %.2f & %.2f $\\pm$ %.2f & %.2f $\\pm$ %.2f \\\\' % (
data_str,
np.mean(sublign_results['ari']), np.std(sublign_results['ari']),
np.mean(sublign_results['pear']), np.std(sublign_results['pear']),
np.mean(sublign_results['swaps']), np.std(sublign_results['swaps'])
))
print('SubNoLign-%s & %.2f $\\pm$ %.2f & -- & -- \\\\' % (
data_str,
    np.mean(subnolign_results['ari']), np.std(subnolign_results['ari']),
))
results = model.score(train_data_dict, test_data_dict)
print('Sublign results: ARI: %.3f; Pear: %.3f; Swaps: %.3f' % (results['ari'],results['pear'],results['swaps']))
```
# Plots
One of the most amazing features of hist is its powerful plotting family. Here you can see how to plot a Hist.
```
from hist import Hist
import hist
h = Hist(
hist.axis.Regular(50, -5, 5, name="S", label="s [units]", flow=False),
hist.axis.Regular(50, -5, 5, name="W", label="w [units]", flow=False),
)
import numpy as np
s_data = np.random.normal(size=100_000) + np.ones(100_000)
w_data = np.random.normal(size=100_000)
# normal fill
h.fill(s_data, w_data)
```
## Via Matplotlib
hist allows you to plot via [Matplotlib](https://matplotlib.org/) like this:
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(8, 5))
w, x, y = h.to_numpy()
mesh = ax.pcolormesh(x, y, w.T, cmap="RdYlBu")
ax.set_xlabel("s")
ax.set_ylabel("w")
fig.colorbar(mesh)
plt.show()
```
## Via Mplhep
[mplhep](https://github.com/scikit-hep/mplhep) is an important visualization tool in the Scikit-Hep ecosystem. hist integrates with mplhep, so you can also plot using it. If you want more info about mplhep, please visit the official repo.
```
import mplhep
fig, axs = plt.subplots(1, 2, figsize=(9, 4))
mplhep.histplot(h.project("S"), ax=axs[0])
mplhep.hist2dplot(h, ax=axs[1])
plt.show()
```
## Via Plot
Hist has plotting methods for 1-D and 2-D histograms, `.plot1d()` and `.plot2d()` respectively. It also provides `.plot()` for plotting according to its dimension. Moreover, to show the projection of each axis, you can use `.plot2d_full()`. If you have a Hist with higher dimension, you can use `.project()` to extract two dimensions to see it with our plotting suite.
Our plotting methods are all based on Matplotlib, so you can pass Matplotlib's `ax` into it, and hist will draw on it. We will create it for you if you do not pass them in.
```
# plot1d
fig, ax = plt.subplots(figsize=(6, 4))
h.project("S").plot1d(ax=ax, ls="--", color="teal", lw=3)
plt.show()
# plot2d
fig, ax = plt.subplots(figsize=(6, 6))
h.plot2d(ax=ax, cmap="plasma")
plt.show()
# plot2d_full
plt.figure(figsize=(8, 8))
h.plot2d_full(
main_cmap="coolwarm",
top_ls="--",
top_color="orange",
top_lw=2,
side_ls=":",
side_lw=2,
side_color="steelblue",
)
plt.show()
# auto-plot
fig, axs = plt.subplots(1, 2, figsize=(9, 4), gridspec_kw={"width_ratios": [5, 4]})
h.project("W").plot(ax=axs[0], color="darkviolet", lw=2, ls="-.")
h.project("W", "S").plot(ax=axs[1], cmap="cividis")
plt.show()
```
## Via Plot Pull
Pull plots are commonly used in HEP studies, and we provide a method for them with `.plot_pull()`, which accepts a `Callable` object, like the `pdf` function defined below; the callable is fit to the histogram, and the fit and the pulls are shown on the plot. As Normal distributions are the most commonly fitted function, the `str` aliases `"normal"`, `"gauss"`, and `"gaus"` are supported as well.
```
def pdf(x, a=1 / np.sqrt(2 * np.pi), x0=0, sigma=1, offset=0):
return a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + offset
np.random.seed(0)
hist_1 = hist.Hist(
hist.axis.Regular(
50, -5, 5, name="X", label="x [units]", underflow=False, overflow=False
)
).fill(np.random.normal(size=1000))
fig = plt.figure(figsize=(10, 8))
main_ax_artists, sublot_ax_arists = hist_1.plot_pull(
"normal",
eb_ecolor="steelblue",
eb_mfc="steelblue",
eb_mec="steelblue",
eb_fmt="o",
eb_ms=6,
eb_capsize=1,
eb_capthick=2,
eb_alpha=0.8,
fp_c="hotpink",
fp_ls="-",
fp_lw=2,
fp_alpha=0.8,
bar_fc="royalblue",
pp_num=3,
pp_fc="royalblue",
pp_alpha=0.618,
pp_ec=None,
ub_alpha=0.2,
)
```
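The pulls drawn in the lower panel above are conventionally the residuals between the bin counts and the fitted curve, divided by the per-bin uncertainty. A rough by-hand sketch (assuming Poisson bin errors and simply reusing `pdf` with its default parameters instead of the fitted ones, both of which are our own simplifications):
```
counts, edges = hist_1.to_numpy()
centers = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)

expected = pdf(centers) * counts.sum() * widths   # scale the density to expected counts
sigma = np.sqrt(np.clip(counts, 1, None))         # Poisson errors, guarding empty bins
pulls = (counts - expected) / sigma
print(pulls[:5])
```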
## Via Plot Ratio
You can also make an arbitrary ratio plot using the `.plot_ratio` API:
```
hist_2 = hist.Hist(
hist.axis.Regular(
50, -5, 5, name="X", label="x [units]", underflow=False, overflow=False
)
).fill(np.random.normal(size=1700))
fig = plt.figure(figsize=(10, 8))
main_ax_artists, sublot_ax_arists = hist_1.plot_ratio(
hist_2,
rp_ylabel=r"Ratio",
rp_num_label="hist1",
rp_denom_label="hist2",
rp_uncert_draw_type="bar", # line or bar
)
```
Ratios between the histogram and a callable, or `str` alias, are supported as well
```
fig = plt.figure(figsize=(10, 8))
main_ax_artists, sublot_ax_arists = hist_1.plot_ratio(pdf)
```
Using the `.plot_ratio` API you can also make efficiency plots (where the numerator is a strict subset of the denominator)
```
hist_3 = hist_2.copy() * 0.7
hist_2.fill(np.random.uniform(-5, 5, 600))
hist_3.fill(np.random.uniform(-5, 5, 200))
fig = plt.figure(figsize=(10, 8))
main_ax_artists, sublot_ax_arists = hist_3.plot_ratio(
hist_2,
rp_num_label="hist3",
rp_denom_label="hist2",
rp_uncert_draw_type="line",
rp_uncertainty_type="efficiency",
)
```
# Hands-on Federated Learning: Image Classification
In their recent (and extremely thorough!) review of the federated learning literature [*Kairouz, et al (2019)*](https://arxiv.org/pdf/1912.04977.pdf) define federated learning as a machine learning setting where multiple entities (clients) collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.
In this tutorial we will use a federated version of the classic MNIST dataset to introduce the Federated Learning (FL) API layer of TensorFlow Federated (TFF), [`tff.learning`](https://www.tensorflow.org/federated/api_docs/python/tff/learning) - a set of high-level interfaces that can be used to perform common types of federated learning tasks, such as federated training, against user-supplied models implemented in TensorFlow or Keras.
# Preliminaries
```
import collections
import os
import typing
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow_federated as tff
# required to run TFF inside Jupyter notebooks
import nest_asyncio
nest_asyncio.apply()
tff.federated_computation(lambda: 'Hello, World!')()
```
# Preparing the data
In the IID setting the local data on each "client" is assumed to be a representative sample of the global data distribution. This is typically the case by construction when performing data parallel training of deep learning models across multiple CPU/GPU "clients".
The non-IID case is significantly more complicated as there are many ways in which data can be non-IID and different degrees of "non-IIDness". Consider a supervised task with features $X$ and labels $y$. A statistical model of federated learning involves two levels of sampling:
1. Sampling a client $i$ from the distribution over available clients $Q$
2. Sampling an example $(X,y)$ from that client’s local data distribution $P_i(X,y)$.
Non-IID data in federated learning typically refers to differences between $P_i$ and $P_j$ for different clients $i$ and $j$. However, it is worth remembering that both the distribution of available clients, $Q$, and the distribution of local data for client $i$, $P_i$, may change over time which introduces another dimension of “non-IIDness”. Finally, if the local data on a client's device is insufficiently randomized, perhaps ordered by time, then independence is violated locally as well.
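To make the two-level sampling model concrete, here is a minimal NumPy sketch (purely illustrative - the client count, shift distribution, and labelling rule are made up and are not part of TFF):
```
import numpy as np

rng = np.random.RandomState(0)
number_clients = 100

# Level 1: sample a client i from the distribution over available clients Q
client_id = rng.randint(number_clients)

# Level 2: sample examples (X, y) from that client's local distribution P_i(X, y).
# Each client has its own feature shift, so the marginals P_i(X) differ across
# clients (feature distribution skew) while the labelling rule P(y|X) is shared.
client_shift = rng.normal(loc=0.0, scale=2.0, size=number_clients)
X = rng.normal(loc=client_shift[client_id], scale=1.0, size=(20, 1))
y = (X.sum(axis=1) > 0).astype(int)
```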
In order to facilitate experimentation TFF includes federated versions of several popular datasets that exhibit different forms and degrees of non-IIDness.
```
# What datasets are available?
[name for name in dir(tff.simulation.datasets) if not name.startswith("_")]
```
This tutorial uses a federated version of MNIST derived from the original NIST dataset and re-processed using [LEAF](https://leaf.cmu.edu/) so that the data is keyed by the original writer of the digits.
The federated MNIST dataset displays a particular type of non-IIDness: feature distribution skew (covariate shift). With feature distribution skew the marginal distributions $P_i(X)$ vary across clients, even though $P(y|X)$ is shared. In the federated MNIST dataset users are writing the same numbers but each user has a different writing style characterized by different stroke width, slant, etc.
```
tff.simulation.datasets.emnist.load_data?
emnist_train, emnist_test = (tff.simulation
.datasets
.emnist
.load_data(only_digits=True, cache_dir="../data"))
NUMBER_CLIENTS = len(emnist_train.client_ids)
NUMBER_CLIENTS
def sample_client_ids(client_ids: typing.List[str],
sample_size: typing.Union[float, int],
random_state: np.random.RandomState) -> typing.List[str]:
"""Randomly selects a subset of clients ids."""
number_clients = len(client_ids)
    error_msg = "'client_ids' must be non-empty."
assert number_clients > 0, error_msg
if isinstance(sample_size, float):
error_msg = "Sample size must be between 0 and 1."
assert 0 <= sample_size <= 1, error_msg
size = int(sample_size * number_clients)
elif isinstance(sample_size, int):
error_msg = f"Sample size must be between 0 and {number_clients}."
assert 0 <= sample_size <= number_clients, error_msg
size = sample_size
else:
error_msg = "Type of 'sample_size' must be 'float' or 'int'."
raise TypeError(error_msg)
    random_idxs = random_state.choice(number_clients, size=size, replace=False)  # sample without replacement so clients are not duplicated
return [client_ids[i] for i in random_idxs]
# these are what the client ids look like
_random_state = np.random.RandomState(42)
sample_client_ids(emnist_train.client_ids, 10, _random_state)
def create_tf_datasets(source: tff.simulation.ClientData,
client_ids: typing.Union[None, typing.List[str]]) -> typing.Dict[str, tf.data.Dataset]:
"""Create tf.data.Dataset instances for clients using their client_id."""
if client_ids is None:
client_ids = source.client_ids
datasets = {client_id: source.create_tf_dataset_for_client(client_id) for client_id in client_ids}
return datasets
def sample_client_datasets(source: tff.simulation.ClientData,
sample_size: typing.Union[float, int],
random_state: np.random.RandomState) -> typing.Dict[str, tf.data.Dataset]:
"""Randomly selects a subset of client datasets."""
client_ids = sample_client_ids(source.client_ids, sample_size, random_state)
client_datasets = create_tf_datasets(source, client_ids)
return client_datasets
_random_state = np.random.RandomState()
client_datasets = sample_client_datasets(emnist_train, sample_size=1, random_state=_random_state)
(client_id, client_dataset), *_ = client_datasets.items()
fig, axes = plt.subplots(1, 5, figsize=(12,6), sharex=True, sharey=True)
for i, example in enumerate(client_dataset.take(5)):
axes[i].imshow(example["pixels"].numpy(), cmap="gray")
axes[i].set_title(example["label"].numpy())
_ = fig.suptitle(x= 0.5, y=0.75, t=f"Training examples for a client {client_id}", fontsize=15)
```
## Data preprocessing
Since each client dataset is already a [`tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), preprocessing can be accomplished using Dataset transformations. Another option would be to use preprocessing operations from [`sklearn.preprocessing`](https://scikit-learn.org/stable/modules/preprocessing.html).
Preprocessing consists of the following steps:
1. `map` a function that flattens the 28 x 28 images into 784-element tensors
2. `map` a function that renames the features from pixels and label to X and y for use with Keras
3. `shuffle` the individual examples
4. `batch` the individual examples into training batches
We also throw in a `repeat` over the data set to run several epochs on each client device before sending parameters to the server for averaging.
```
AUTOTUNE = (tf.data
.experimental
.AUTOTUNE)
SHUFFLE_BUFFER_SIZE = 1000
NUMBER_TRAINING_EPOCHS = 5 # number of local updates!
TRAINING_BATCH_SIZE = 32
TESTING_BATCH_SIZE = 32
NUMBER_FEATURES = 28 * 28
NUMBER_TARGETS = 10
def _reshape(training_batch):
"""Extracts and reshapes data from a training sample """
pixels = training_batch["pixels"]
label = training_batch["label"]
X = tf.reshape(pixels, shape=[-1]) # flattens 2D pixels to 1D
y = tf.reshape(label, shape=[1])
return X, y
def create_training_dataset(client_dataset: tf.data.Dataset) -> tf.data.Dataset:
"""Create a training dataset for a client from a raw client dataset."""
training_dataset = (client_dataset.map(_reshape, num_parallel_calls=AUTOTUNE)
.shuffle(SHUFFLE_BUFFER_SIZE, seed=None, reshuffle_each_iteration=True)
.repeat(NUMBER_TRAINING_EPOCHS)
.batch(TRAINING_BATCH_SIZE)
.prefetch(buffer_size=AUTOTUNE))
return training_dataset
def create_testing_dataset(client_dataset: tf.data.Dataset) -> tf.data.Dataset:
"""Create a testing dataset for a client from a raw client dataset."""
testing_dataset = (client_dataset.map(_reshape, num_parallel_calls=AUTOTUNE)
.batch(TESTING_BATCH_SIZE))
return testing_dataset
```
## How to choose the clients included in each training round
In a typical federated training scenario there will be a very large population of user devices; however, only a fraction of these devices are likely to be available for training at a given point in time. For example, if the client devices are mobile phones then they might only participate in training when plugged into a power source, off a metered network, and otherwise idle.
In a simulated environment, where all data is locally available, an approach is to simply sample a random subset of the clients to be involved in each round of training so that the subset of clients involved will vary from round to round.
### How many clients to include in each round?
Updating and averaging a larger number of client models per training round yields better convergence, so in a simulated training environment it probably makes sense to include as many clients as is computationally feasible. However, in a real-world training scenario, while averaging a larger number of clients improves convergence, it also makes training vulnerable to slowdown due to unpredictable tail delays in computation and communication at the clients.
```
def create_federated_data(training_source: tff.simulation.ClientData,
testing_source: tff.simulation.ClientData,
sample_size: typing.Union[float, int],
random_state: np.random.RandomState) -> typing.Dict[str, typing.Tuple[tf.data.Dataset, tf.data.Dataset]]:
# sample clients ids from the training dataset
client_ids = sample_client_ids(training_source.client_ids, sample_size, random_state)
federated_data = {}
for client_id in client_ids:
# create training dataset for the client
_tf_dataset = training_source.create_tf_dataset_for_client(client_id)
training_dataset = create_training_dataset(_tf_dataset)
# create the testing dataset for the client
_tf_dataset = testing_source.create_tf_dataset_for_client(client_id)
testing_dataset = create_testing_dataset(_tf_dataset)
federated_data[client_id] = (training_dataset, testing_dataset)
return federated_data
_random_state = np.random.RandomState(42)
federated_data = create_federated_data(emnist_train,
emnist_test,
sample_size=0.01,
random_state=_random_state)
# keys are client ids, values are (training_dataset, testing_dataset) pairs
len(federated_data)
```
# Creating a model with Keras
If you are using Keras, you likely already have code that constructs a Keras model. Since the model will need to be replicated on each of the client devices, we wrap the model in a no-argument Python function, a representation of which will eventually be invoked on each client to create the model on that client.
```
def create_keras_model_fn() -> keras.Model:
model_fn = keras.models.Sequential([
keras.layers.Input(shape=(NUMBER_FEATURES,)),
keras.layers.Dense(units=NUMBER_TARGETS),
keras.layers.Softmax(),
])
return model_fn
```
In order to use any model with TFF, it needs to be wrapped in an instance of the [`tff.learning.Model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model) interface, which exposes methods to stamp the model's forward pass, metadata properties, etc, and also introduces additional elements such as ways to control the process of computing federated metrics.
Once you have a Keras model like the one we've just defined above, you can have TFF wrap it for you by invoking [`tff.learning.from_keras_model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/from_keras_model), passing the model and a sample data batch as arguments, as shown below.
```
tff.learning.from_keras_model?
def create_tff_model_fn() -> tff.learning.Model:
keras_model = create_keras_model_fn()
dummy_batch = (tf.constant(0.0, shape=(TRAINING_BATCH_SIZE, NUMBER_FEATURES), dtype=tf.float32),
tf.constant(0, shape=(TRAINING_BATCH_SIZE, 1), dtype=tf.int32))
loss_fn = (keras.losses
.SparseCategoricalCrossentropy())
metrics = [
keras.metrics.SparseCategoricalAccuracy()
]
tff_model_fn = (tff.learning
.from_keras_model(keras_model, dummy_batch, loss_fn, None, metrics))
return tff_model_fn
```
Again, since our model will need to be replicated on each of the client devices we wrap the model in a no-argument Python function, a representation of which, will eventually be invoked on each client to create the model on that client.
# Training the model on federated data
Now that we have a model wrapped as `tff.learning.Model` for use with TFF, we can let TFF construct a Federated Averaging algorithm by invoking the helper function `tff.learning.build_federated_averaging_process` as follows.
Keep in mind that the argument needs to be a constructor (such as `create_tff_model_fn` above), not an already-constructed instance, so that the construction of your model can happen in a context controlled by TFF.
One critical note on the Federated Averaging algorithm below: there are two optimizers.
1. `client_optimizer_fn`, which is only used to compute local model updates on each client.
2. `server_optimizer_fn`, which applies the averaged update to the global model on the server.
N.B. the choice of optimizer and learning rate may need to be different than those you would use to train the model on a standard i.i.d. dataset. Start with stochastic gradient descent with a smaller (than normal) learning rate.
```
tff.learning.build_federated_averaging_process?
CLIENT_LEARNING_RATE = 1e-2
SERVER_LEARNING_RATE = 1e0
def create_client_optimizer(learning_rate: float = CLIENT_LEARNING_RATE,
momentum: float = 0.0,
nesterov: bool = False) -> keras.optimizers.Optimizer:
client_optimizer = (keras.optimizers
.SGD(learning_rate, momentum, nesterov))
return client_optimizer
def create_server_optimizer(learning_rate: float = SERVER_LEARNING_RATE,
momentum: float = 0.0,
nesterov: bool = False) -> keras.optimizers.Optimizer:
server_optimizer = (keras.optimizers
.SGD(learning_rate, momentum, nesterov))
return server_optimizer
federated_averaging_process = (tff.learning
.build_federated_averaging_process(create_tff_model_fn,
create_client_optimizer,
create_server_optimizer,
client_weight_fn=None,
stateful_delta_aggregate_fn=None,
stateful_model_broadcast_fn=None))
```
What just happened? TFF has constructed a pair of *federated computations* (i.e., programs in TFF's internal glue language) and packaged them into a [`tff.utils.IterativeProcess`](https://www.tensorflow.org/federated/api_docs/python/tff/utils/IterativeProcess) in which these computations are available as a pair of properties `initialize` and `next`.
It is a goal of TFF to define computations in a way that they could be executed in real federated learning settings, but currently only local execution simulation runtime is implemented. To execute a computation in a simulator, you simply invoke it like a Python function. This default interpreted environment is not designed for high performance, but it will suffice for this tutorial.
## `initialize`
A function that takes no arguments and returns the state of the federated averaging process on the server. This function is only called to initialize a federated averaging process after it has been created.
```
# () -> SERVER_STATE
print(federated_averaging_process.initialize.type_signature)
state = federated_averaging_process.initialize()
```
## `next`
A function that takes current server state and federated data as arguments and returns the updated server state as well as any training metrics. Calling `next` performs a single round of federated averaging consisting of the following steps.
1. pushing the server state (including the model parameters) to the clients
2. on-device training on their local data
3. collecting and averaging model updates
4. producing a new updated model at the server.
```
# extract the training datasets from the federated data
federated_training_data = [training_dataset for _, (training_dataset, _) in federated_data.items()]
# SERVER_STATE, FEDERATED_DATA -> SERVER_STATE, TRAINING_METRICS
state, metrics = federated_averaging_process.next(state, federated_training_data)
print(f"round: 0, metrics: {metrics}")
```
Let's run a few more rounds on the same training data (which will over-fit to a particular set of clients but will converge faster).
```
number_training_rounds = 15
for n in range(1, number_training_rounds):
state, metrics = federated_averaging_process.next(state, federated_training_data)
print(f"round:{n}, metrics:{metrics}")
```
# First attempt at simulating federated averaging
A proper federated averaging simulation would randomly sample new clients for each training round, allow for evaluation of training progress on training and testing data, and log training and testing metrics to TensorBoard for reference.
Here we define a function that randomly samples new clients prior to each training round and logs training metrics to TensorBoard. We defer handling testing data until we discuss federated evaluation towards the end of the tutorial.
```
def simulate_federated_averaging(federated_averaging_process: tff.utils.IterativeProcess,
training_source: tff.simulation.ClientData,
testing_source: tff.simulation.ClientData,
sample_size: typing.Union[float, int],
random_state: np.random.RandomState,
number_rounds: int,
initial_state: None = None,
tensorboard_logging_dir: str = None):
state = federated_averaging_process.initialize() if initial_state is None else initial_state
if tensorboard_logging_dir is not None:
if not os.path.isdir(tensorboard_logging_dir):
os.makedirs(tensorboard_logging_dir)
summary_writer = (tf.summary
.create_file_writer(tensorboard_logging_dir))
with summary_writer.as_default():
for n in range(number_rounds):
federated_data = create_federated_data(training_source,
testing_source,
sample_size,
random_state)
anonymized_training_data = [dataset for _, (dataset, _) in federated_data.items()]
state, metrics = federated_averaging_process.next(state, anonymized_training_data)
print(f"Round: {n}, Training metrics: {metrics}")
for name, value in metrics._asdict().items():
tf.summary.scalar(name, value, step=n)
else:
for n in range(number_rounds):
federated_data = create_federated_data(training_source,
testing_source,
sample_size,
random_state)
anonymized_training_data = [dataset for _, (dataset, _) in federated_data.items()]
state, metrics = federated_averaging_process.next(state, anonymized_training_data)
print(f"Round: {n}, Training metrics: {metrics}")
return state, metrics
federated_averaging_process = (tff.learning
.build_federated_averaging_process(create_tff_model_fn,
create_client_optimizer,
create_server_optimizer,
client_weight_fn=None,
stateful_delta_aggregate_fn=None,
stateful_model_broadcast_fn=None))
_random_state = np.random.RandomState(42)
_tensorboard_logging_dir = "../results/logs/tensorboard"
updated_state, current_metrics = simulate_federated_averaging(federated_averaging_process,
training_source=emnist_train,
testing_source=emnist_test,
sample_size=0.01,
random_state=_random_state,
number_rounds=5,
tensorboard_logging_dir=_tensorboard_logging_dir)
updated_state
current_metrics
```
# Customizing the model implementation
Keras is the recommended high-level model API for TensorFlow and you should be using Keras models and creating TFF models using [`tff.learning.from_keras_model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/from_keras_model) whenever possible.
However, [`tff.learning`](https://www.tensorflow.org/federated/api_docs/python/tff/learning) provides a lower-level model interface, [`tff.learning.Model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model), that exposes the minimal functionality necessary for using a model for federated learning. Directly implementing this interface (possibly still using building blocks from [`keras`](https://www.tensorflow.org/guide/keras)) allows for maximum customization without modifying the internals of the federated learning algorithms.
Now we are going to repeat the above from scratch!
## Defining model variables
We start by defining a new Python class that inherits from `tff.learning.Model`. In the class constructor (i.e., the `__init__` method) we will initialize all relevant variables using TF primitives as well as define our "input spec", which defines the shape and types of the tensors that will hold input data.
```
class MNISTModel(tff.learning.Model):
def __init__(self):
# initialize some trainable variables
self._weights = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_FEATURES, NUMBER_TARGETS)),
name="weights",
trainable=True
)
self._bias = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_TARGETS,)),
name="bias",
trainable=True
)
# initialize some variables used in computing metrics
self._number_examples = tf.Variable(0.0, name='number_examples', trainable=False)
self._total_loss = tf.Variable(0.0, name='total_loss', trainable=False)
self._number_true_positives = tf.Variable(0.0, name='number_true_positives', trainable=False)
# define the input spec
self._input_spec = collections.OrderedDict([
('X', tf.TensorSpec([None, NUMBER_FEATURES], tf.float32)),
('y', tf.TensorSpec([None, 1], tf.int32))
])
@property
def input_spec(self):
return self._input_spec
@property
def local_variables(self):
return [self._number_examples, self._total_loss, self._number_true_positives]
@property
def non_trainable_variables(self):
return []
@property
def trainable_variables(self):
return [self._weights, self._bias]
```
## Defining the forward pass
With the variables for model parameters and cumulative statistics in place we can now define the `forward_pass` method that computes loss, makes predictions, and updates the cumulative statistics for a single batch of input data.
```
class MNISTModel(tff.learning.Model):
def __init__(self):
# initialize some trainable variables
self._weights = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_FEATURES, NUMBER_TARGETS)),
name="weights",
trainable=True
)
self._bias = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_TARGETS,)),
name="bias",
trainable=True
)
# initialize some variables used in computing metrics
self._number_examples = tf.Variable(0.0, name='number_examples', trainable=False)
self._total_loss = tf.Variable(0.0, name='total_loss', trainable=False)
self._number_true_positives = tf.Variable(0.0, name='number_true_positives', trainable=False)
# define the input spec
self._input_spec = collections.OrderedDict([
('X', tf.TensorSpec([None, NUMBER_FEATURES], tf.float32)),
('y', tf.TensorSpec([None, 1], tf.int32))
])
@property
def input_spec(self):
return self._input_spec
@property
def local_variables(self):
return [self._number_examples, self._total_loss, self._number_true_positives]
@property
def non_trainable_variables(self):
return []
@property
def trainable_variables(self):
return [self._weights, self._bias]
@tf.function
def _count_true_positives(self, y_true, y_pred):
return tf.reduce_sum(tf.cast(tf.equal(y_true, y_pred), tf.float32))
@tf.function
def _linear_transformation(self, batch):
X = batch['X']
W, b = self.trainable_variables
Z = tf.matmul(X, W) + b
return Z
@tf.function
def _loss_fn(self, y_true, probabilities):
return -tf.reduce_mean(tf.reduce_sum(tf.one_hot(y_true, NUMBER_TARGETS) * tf.math.log(probabilities), axis=1))
@tf.function
def _model_fn(self, batch):
Z = self._linear_transformation(batch)
probabilities = tf.nn.softmax(Z)
return probabilities
@tf.function
def forward_pass(self, batch, training=True):
probabilities = self._model_fn(batch)
y_pred = tf.argmax(probabilities, axis=1, output_type=tf.int32)
y_true = tf.reshape(batch['y'], shape=[-1])
# compute local variables
loss = self._loss_fn(y_true, probabilities)
true_positives = self._count_true_positives(y_true, y_pred)
number_examples = tf.size(y_true, out_type=tf.float32)
# update local variables
self._total_loss.assign_add(loss)
self._number_true_positives.assign_add(true_positives)
self._number_examples.assign_add(number_examples)
batch_output = tff.learning.BatchOutput(
loss=loss,
predictions=y_pred,
num_examples=tf.cast(number_examples, tf.int32)
)
return batch_output
```
## Defining the local metrics
Next, we define a method `report_local_outputs` that returns a set of local metrics. These are the values, in addition to model updates (which are handled automatically), that are eligible to be aggregated to the server in a federated learning or evaluation process.
Finally, we need to determine how to aggregate the local metrics emitted by each device by defining `federated_output_computation`. This is the only part of the code that isn't written in TensorFlow - it's a federated computation expressed in TFF.
```
class MNISTModel(tff.learning.Model):
def __init__(self):
# initialize some trainable variables
self._weights = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_FEATURES, NUMBER_TARGETS)),
name="weights",
trainable=True
)
self._bias = tf.Variable(
initial_value=lambda: tf.zeros(dtype=tf.float32, shape=(NUMBER_TARGETS,)),
name="bias",
trainable=True
)
# initialize some variables used in computing metrics
self._number_examples = tf.Variable(0.0, name='number_examples', trainable=False)
self._total_loss = tf.Variable(0.0, name='total_loss', trainable=False)
self._number_true_positives = tf.Variable(0.0, name='number_true_positives', trainable=False)
# define the input spec
self._input_spec = collections.OrderedDict([
('X', tf.TensorSpec([None, NUMBER_FEATURES], tf.float32)),
('y', tf.TensorSpec([None, 1], tf.int32))
])
@property
def federated_output_computation(self):
return self._aggregate_metrics_across_clients
@property
def input_spec(self):
return self._input_spec
@property
def local_variables(self):
return [self._number_examples, self._total_loss, self._number_true_positives]
@property
def non_trainable_variables(self):
return []
@property
def trainable_variables(self):
return [self._weights, self._bias]
@tff.federated_computation
def _aggregate_metrics_across_clients(metrics):
aggregated_metrics = {
'number_examples': tff.federated_sum(metrics.number_examples),
'average_loss': tff.federated_mean(metrics.average_loss, metrics.number_examples),
'accuracy': tff.federated_mean(metrics.accuracy, metrics.number_examples)
}
return aggregated_metrics
@tf.function
def _count_true_positives(self, y_true, y_pred):
return tf.reduce_sum(tf.cast(tf.equal(y_true, y_pred), tf.float32))
@tf.function
def _linear_transformation(self, batch):
X = batch['X']
W, b = self.trainable_variables
Z = tf.matmul(X, W) + b
return Z
@tf.function
def _loss_fn(self, y_true, probabilities):
return -tf.reduce_mean(tf.reduce_sum(tf.one_hot(y_true, NUMBER_TARGETS) * tf.math.log(probabilities), axis=1))
@tf.function
def _model_fn(self, batch):
Z = self._linear_transformation(batch)
probabilities = tf.nn.softmax(Z)
return probabilities
@tf.function
def forward_pass(self, batch, training=True):
probabilities = self._model_fn(batch)
y_pred = tf.argmax(probabilities, axis=1, output_type=tf.int32)
y_true = tf.reshape(batch['y'], shape=[-1])
# compute local variables
loss = self._loss_fn(y_true, probabilities)
true_positives = self._count_true_positives(y_true, y_pred)
number_examples = tf.cast(tf.size(y_true), tf.float32)
# update local variables
self._total_loss.assign_add(loss)
self._number_true_positives.assign_add(true_positives)
self._number_examples.assign_add(number_examples)
batch_output = tff.learning.BatchOutput(
loss=loss,
predictions=y_pred,
num_examples=tf.cast(number_examples, tf.int32)
)
return batch_output
@tf.function
def report_local_outputs(self):
local_metrics = collections.OrderedDict([
('number_examples', self._number_examples),
('average_loss', self._total_loss / self._number_examples),
('accuracy', self._number_true_positives / self._number_examples)
])
return local_metrics
```
Here are a few points worth highlighting:
* All state that your model will use must be captured as TensorFlow variables, as TFF does not use Python at runtime (remember your code should be written such that it can be deployed to mobile devices).
* Your model should describe what form of data it accepts (input_spec), as in general, TFF is a strongly-typed environment and wants to determine type signatures for all components. Declaring the format of your model's input is an essential part of it.
* Although technically not required, we recommend wrapping all TensorFlow logic (forward pass, metric calculations, etc.) as tf.functions, as this helps ensure that the TensorFlow logic can be serialized, and removes the need for explicit control dependencies.
The above is sufficient for evaluation and algorithms like Federated SGD. However, for Federated Averaging, we need to specify how the model should train locally on each batch.
```
class MNISTrainableModel(MNISTModel, tff.learning.TrainableModel):
def __init__(self, optimizer):
super().__init__()
self._optimizer = optimizer
@tf.function
def train_on_batch(self, batch):
with tf.GradientTape() as tape:
output = self.forward_pass(batch)
gradients = tape.gradient(output.loss, self.trainable_variables)
self._optimizer.apply_gradients(zip(tf.nest.flatten(gradients), tf.nest.flatten(self.trainable_variables)))
return output
```
# Simulating federated training with the new model
With all the above in place, the remainder of the process looks like what we've seen already - just replace the model constructor with the constructor of our new model class, and use the two federated computations in the iterative process you created to cycle through training rounds.
```
def create_custom_tff_model_fn():
optimizer = keras.optimizers.SGD(learning_rate=0.02)
return MNISTrainableModel(optimizer)
federated_averaging_process = (tff.learning
.build_federated_averaging_process(create_custom_tff_model_fn))
_random_state = np.random.RandomState(42)
updated_state, current_metrics = simulate_federated_averaging(federated_averaging_process,
training_source=emnist_train,
testing_source=emnist_test,
sample_size=0.01,
random_state=_random_state,
number_rounds=10)
updated_state
current_metrics
```
# Evaluation
All of our experiments so far presented only federated training metrics - the average metrics over all batches of data trained across all clients in the round. Should we be concerned about overfitting? Yes! In federated averaging algorithms there are two different ways to over-fit.
1. Overfitting the shared model (especially if we use the same set of clients on each round).
2. Over-fitting local models on the clients.
## Federated evaluation
To perform evaluation on federated data, you can construct another federated computation designed for just this purpose, using the [`tff.learning.build_federated_evaluation`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_evaluation) function, and passing in your model constructor as an argument. Note that evaluation doesn't perform gradient descent and there's no need to construct optimizers.
```
tff.learning.build_federated_evaluation?
federated_evaluation = (tff.learning
.build_federated_evaluation(create_custom_tff_model_fn))
# function type signature: SERVER_MODEL, FEDERATED_DATA -> METRICS
print(federated_evaluation.type_signature)
```
The `federated_evaluation` function is similar to `tff.utils.IterativeProcess.next` but with two important differences.
1. Function does not return the server state; since evaluation doesn't modify the model or any other aspect of state - you can think of it as stateless.
2. Function only needs the model and doesn't require any other part of server state that might be associated with training, such as optimizer variables.
```
training_metrics = federated_evaluation(updated_state.model, federated_training_data)
training_metrics
```
Note the numbers may look marginally better than what was reported by the last round of training. By convention, the training metrics reported by the iterative training process generally reflect the performance of the model at the beginning of the training round, so the evaluation metrics will always be one step ahead.
## Evaluating on client data not used in training
Since we are training a shared model for digit classification we might also want to evaluate the performance of the model on client test datasets where the corresponding training dataset was not used in training.
```
_random_state = np.random.RandomState(42)
client_datasets = sample_client_datasets(emnist_test, sample_size=0.01, random_state=_random_state)
federated_testing_data = [create_testing_dataset(client_dataset) for _, client_dataset in client_datasets.items()]
testing_metrics = federated_evaluation(updated_state.model, federated_testing_data)
testing_metrics
```
# Adding evaluation to our federated averaging simulation
```
def simulate_federated_averaging(federated_averaging_process: tff.utils.IterativeProcess,
federated_evaluation,
training_source: tff.simulation.ClientData,
testing_source: tff.simulation.ClientData,
sample_size: typing.Union[float, int],
random_state: np.random.RandomState,
number_rounds: int,
tensorboard_logging_dir: str = None):
state = federated_averaging_process.initialize()
if tensorboard_logging_dir is not None:
if not os.path.isdir(tensorboard_logging_dir):
os.makedirs(tensorboard_logging_dir)
summary_writer = (tf.summary
.create_file_writer(tensorboard_logging_dir))
with summary_writer.as_default():
for n in range(number_rounds):
federated_data = create_federated_data(training_source,
testing_source,
sample_size,
random_state)
# extract the training and testing datasets
anonymized_training_data = []
anonymized_testing_data = []
for training_dataset, testing_dataset in federated_data.values():
anonymized_training_data.append(training_dataset)
anonymized_testing_data.append(testing_dataset)
state, _ = federated_averaging_process.next(state, anonymized_training_data)
training_metrics = federated_evaluation(state.model, anonymized_training_data)
testing_metrics = federated_evaluation(state.model, anonymized_testing_data)
print(f"Round: {n}, Training metrics: {training_metrics}, Testing metrics: {testing_metrics}")
# tensorboard logging
for name, value in training_metrics._asdict().items():
tf.summary.scalar(name, value, step=n)
for name, value in testing_metrics._asdict().items():
tf.summary.scalar(name, value, step=n)
else:
for n in range(number_rounds):
federated_data = create_federated_data(training_source,
testing_source,
sample_size,
random_state)
# extract the training and testing datasets
anonymized_training_data = []
anonymized_testing_data = []
for training_dataset, testing_dataset in federated_data.values():
anonymized_training_data.append(training_dataset)
anonymized_testing_data.append(testing_dataset)
state, _ = federated_averaging_process.next(state, anonymized_training_data)
training_metrics = federated_evaluation(state.model, anonymized_training_data)
testing_metrics = federated_evaluation(state.model, anonymized_testing_data)
print(f"Round: {n}, Training metrics: {training_metrics}, Testing metrics: {testing_metrics}")
return state, (training_metrics, testing_metrics)
federated_averaging_process = (tff.learning
.build_federated_averaging_process(create_tff_model_fn,
create_client_optimizer,
create_server_optimizer,
client_weight_fn=None,
stateful_delta_aggregate_fn=None,
stateful_model_broadcast_fn=None))
federated_evaluation = (tff.learning
.build_federated_evaluation(create_tff_model_fn))
_random_state = np.random.RandomState(42)
updated_state, current_metrics = simulate_federated_averaging(federated_averaging_process,
federated_evaluation,
training_source=emnist_train,
testing_source=emnist_test,
sample_size=0.01,
random_state=_random_state,
number_rounds=15)
```
# Wrapping up
## Interesting resources
[PySyft](https://github.com/OpenMined/PySyft) is a Python library for secure and private Deep Learning created by [OpenMined](https://www.openmined.org/). PySyft decouples private data from model training, using
[Federated Learning](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html),
[Differential Privacy](https://en.wikipedia.org/wiki/Differential_privacy),
and [Multi-Party Computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) within the main Deep Learning frameworks like PyTorch and TensorFlow.
## Least Squares Method
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq
Xi = np.array(
[157, 162, 169, 176, 188, 200, 211, 220, 230, 237, 247, 256, 268, 287, 285, 290, 301, 311, 326, 335, 337, 345, 348,
    358, 384, 396, 409, 415, 432, 440, 448, 449, 461, 467, 478, 493], dtype=float)
Yi = np.array(
[143, 146, 153, 160, 169, 180, 190, 196, 207, 215, 220, 228, 242, 253, 251, 257, 271, 283, 295, 302, 301, 305, 308,
    324, 341, 357, 371, 382, 397, 406, 413, 411, 422, 434, 447, 458], dtype=float)
def func(p, x):
k, b = p
return k * x + b
def error(p, x, y):
return func(p, x) - y
# Initial values of k and b can be set arbitrarily; a few trials show that p0 affects the cost value Para[1]
p0 = [1, 20]
# Pack the arguments of error other than p0 into args (required by leastsq)
Para = leastsq(error, p0, args=(Xi, Yi))
# Read the results
k, b = Para[0]
# Plot the sample points
plt.figure(figsize=(8, 6))  # figure size ratio 8:6
plt.scatter(Xi, Yi, color="green", linewidth=2, label="samples")
# Plot the fitted line
# x = np.linspace(0, 12, 100)  # 100 evenly spaced points between 0 and 12
# x = np.linspace(0, 500, int(500/12)*100)
# y = k * x + b  # the fitted function
plt.plot(Xi, k * Xi + b, color="red", linewidth=2, label="least-squares fit")
plt.legend(loc='lower right')  # draw the legend
plt.show()
```
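For comparison, the same line can be obtained in closed form from the normal equations; a short sketch (not part of the original code) reusing Xi and Yi from the cell above:
```
# Closed-form least squares via the design matrix [1, x]
X_design = np.vstack([np.ones_like(Xi), Xi]).T
b_hat, k_hat = np.linalg.lstsq(X_design, Yi, rcond=None)[0]
print("closed-form k =", k_hat, ", b =", b_hat)  # should be very close to leastsq's result
```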
## Gradient Descent
```
import numpy as np
import matplotlib.pyplot as plt
x = np.array(
[157, 162, 169, 176, 188, 200, 211, 220, 230, 237, 247, 256, 268, 287, 285, 290, 301, 311, 326, 335, 337, 345, 348,
    358, 384, 396, 409, 415, 432, 440, 448, 449, 461, 467, 478, 493], dtype=float)
y = np.array(
[143, 146, 153, 160, 169, 180, 190, 196, 207, 215, 220, 228, 242, 253, 251, 257, 271, 283, 295, 302, 301, 305, 308,
    324, 341, 357, 371, 382, 397, 406, 413, 411, 422, 434, 447, 458], dtype=float)
def GD(x, y, learning_rate, iteration_num=10000):
    theta = np.random.rand(2, 1)  # initialize the parameters
x = np.hstack((np.ones((len(x), 1)), x.reshape(len(x), 1)))
y = y.reshape(len(y), 1)
for i in range(iteration_num):
        # compute the gradient
grad = np.dot(x.T, (np.dot(x, theta) - y)) / x.shape[0]
        # update the parameters
theta -= learning_rate * grad
        # compute the MSE (optional)
# loss = np.linalg.norm(np.dot(x, theta) - y)
plt.figure()
plt.title('Learning rate: {}, iteration_num: {}'.format(learning_rate, iteration_num))
plt.scatter(x[:, 1], y.reshape(len(y)))
plt.plot(x[:, 1], np.dot(x, theta), color='red', linewidth=3)
GD(x, y, learning_rate=0.00001, iteration_num=1)
GD(x, y, learning_rate=0.00001, iteration_num=3)
GD(x, y, learning_rate=0.00001, iteration_num=10)
GD(x, y, learning_rate=0.00001, iteration_num=100)
GD(x, y, learning_rate=0.000001, iteration_num=1)
GD(x, y, learning_rate=0.000001, iteration_num=3)
GD(x, y, learning_rate=0.000001, iteration_num=10)
GD(x, y, learning_rate=0.000001, iteration_num=100)
```
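To check whether gradient descent is actually converging, it helps to track the mean squared error per iteration; a minimal sketch (a hypothetical variant of the GD function above, reusing x and y):
```
def GD_with_loss(x, y, learning_rate, iteration_num=10000):
    theta = np.random.rand(2, 1)
    X = np.hstack((np.ones((len(x), 1)), x.reshape(len(x), 1)))
    Y = y.reshape(len(y), 1)
    losses = []
    for _ in range(iteration_num):
        residual = np.dot(X, theta) - Y
        theta -= learning_rate * np.dot(X.T, residual) / X.shape[0]
        losses.append(float(np.mean(residual ** 2)))  # MSE before this update
    return theta, losses

theta, losses = GD_with_loss(x, y, learning_rate=0.00001, iteration_num=100)
plt.figure()
plt.plot(losses)
plt.xlabel('iteration')
plt.ylabel('MSE')
```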
```
import warnings
warnings.filterwarnings('ignore')  # hide warnings that do not affect execution (not generally recommended)
```
# Chapter 5: Machine Learning - Regression Problems
## 5-1. Let's solve a regression problem in Python
1. Prepare the dataset
2. Build the model
### 5-1-1. Preparing the dataset
This time we use the wine-quality dataset.
The wine-quality dataset consists of numerical data with 12 attributes, such as the alcohol content and quality of wines.
Both red and white wine are included; the red wine data contains about 1,600 samples.
First, download the dataset.
Behind a proxy, the following will not work unless Jupyter Notebook is configured for the proxy.
```
! wget https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv -O ./data/winequality-red.csv
```
For those who find configuring Jupyter Notebook tedious:
download the file from the following URI using a shell with the proxy configured, or with a web browser.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/
```
import pandas as pd
wine = pd.read_csv("./data/winequality-red.csv", sep=";")  # sep specifies the delimiter
display(wine.head(5))
```
First, let's perform regression with a single explanatory variable. Here we use alcohol as the target variable $t$ and density as the explanatory variable $x$.
```
X = wine[["density"]].values
T = wine["alcohol"].values
```
#### Preprocessing
Center the data so that it is easier to work with.
```
X = X - X.mean()
T = T - T.mean()
```
Split the data into train and test sets.
```
X_train = X[:1000, :]
T_train = T[:1000]
X_test = X[1000:, :]
T_test = T[1000:]
import matplotlib.pyplot as plt
%matplotlib inline
fig, axes = plt.subplots(ncols=2, figsize=(12, 4))
axes[0].scatter(X_train, T_train, marker=".")
axes[0].set_title("train")
axes[1].scatter(X_test, T_test, marker=".")
axes[1].set_title("test")
fig.show()
```
The train and test distributions are quite different.
It seems we need to shuffle the data before splitting it into train and test sets.
There are many ways to shuffle X and T without breaking their correspondence; one of them is shown below.
```
import numpy as np
np.random.seed(0)  # fix the random seed
p = np.random.permutation(len(X))  # a random permutation of the indices
X = X[p]
T = T[p]
X_train = X[:1000, :]
T_train = T[:1000]
X_test = X[1000:, :]
T_test = T[1000:]
fig, axes = plt.subplots(ncols=2, figsize=(12, 4))
axes[0].scatter(X_train, T_train, marker=".")
axes[0].set_title("train")
axes[1].scatter(X_test, T_test, marker=".")
axes[1].set_title("test")
fig.show()
```
### 5-1-2. Building the model
**This time**, we predict the target variable $t$ with the following regression function.
$$y=ax+b$$
We need to determine the parameters $a,b$ so that the loss is minimized. Here we use the squared loss function.
$$\mathrm{L}\left(a, b\right)
=\sum^{N}_{n=1}\left(t_n - y_n\right)^2
=\sum^{N}_{n=1}\left(t_n - ax_n-b\right)^2$$
<span style="color: gray; ">Note: this is equivalent to maximum likelihood estimation under the assumption that the target variable $t$ follows a Gaussian distribution centered on the regression function $y$ above.</span>
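For reference (this derivation is not in the original text), setting the partial derivatives of $\mathrm{L}$ with respect to $a$ and $b$ to zero gives the closed-form estimates
$$\hat{a}=\frac{\sum^{N}_{n=1}\left(x_n-\bar{x}\right)\left(t_n-\bar{t}\right)}{\sum^{N}_{n=1}\left(x_n-\bar{x}\right)^{2}},\qquad \hat{b}=\bar{t}-\hat{a}\,\bar{x}$$
where $\bar{x}$ and $\bar{t}$ are the sample means. These are the quantities the class below should compute in `fit`.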
```
class MyLinearRegression(object):
def __init__(self):
"""
Initialize a coefficient and an intercept.
"""
        self.a = None  # slope, to be estimated in fit
        self.b = None  # intercept, to be estimated in fit
def fit(self, X, y):
"""
X: data, array-like, shape (n_samples, n_features)
y: array, shape (n_samples,)
Estimate a coefficient and an intercept from data.
"""
return self
def predict(self, X):
"""
Calc y from X
"""
        y = None  # TODO: compute y = self.a * X + self.b
        return y
```
Complete the simple regression class above; running the following should then produce the regression line shown in the figure.
```
clf = MyLinearRegression()
clf.fit(X_train, T_train)
# regression coefficient
print("coefficient: ", clf.a)
# intercept
print("intercept: ", clf.b)
fig, axes = plt.subplots(ncols=2, figsize=(12, 4))
axes[0].scatter(X_train, T_train, marker=".")
axes[0].plot(X_train, clf.predict(X_train), color="red")
axes[0].set_title("train")
axes[1].scatter(X_test, T_test, marker=".")
axes[1].plot(X_test, clf.predict(X_test), color="red")
axes[1].set_title("test")
fig.show()
```
What would the resulting regression line look like if we ran the training above without shuffling the dataset?
Give it a try.
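For reference, one possible completion (a sketch using the closed-form single-variable estimates shown earlier; try your own version first):
```
class MyLinearRegressionSolution(object):
    def __init__(self):
        self.a = None
        self.b = None

    def fit(self, X, y):
        x = X[:, 0]
        x_mean, y_mean = x.mean(), y.mean()
        # least-squares estimates for a single explanatory variable
        self.a = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
        self.b = y_mean - self.a * x_mean
        return self

    def predict(self, X):
        return self.a * X[:, 0] + self.b
```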
## 5-2. About scikit-learn
### 5-2-1. Module overview
Detailed information is available on the [scikit-learn](http://scikit-learn.org/stable/) homepage.
In fact, scikit-learn already provides a linear regression module.
#### Features of scikit-learn
- scikit-learn (sklearn) contains many machine learning algorithms, all written with a unified interface, which makes them easy to use.
- Each method can be understood not only through its code but also through the papers it is based on, which are referenced.
- There are tutorial pages and pages summarizing how to use the library, with similar methods listed together.
```
import sklearn
print(sklearn.__version__)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
# build the prediction model
clf.fit(X_train, T_train)
# regression coefficient
print("coefficient: ", clf.coef_)
# intercept
print("intercept: ", clf.intercept_)
# coefficient of determination (R^2)
print("R^2: ", clf.score(X_train, T_train))
fig, axes = plt.subplots(ncols=2, figsize=(12, 4))
axes[0].scatter(X_train, T_train, marker=".")
axes[0].plot(X_train, clf.predict(X_train), color="red")
axes[0].set_title("train")
axes[1].scatter(X_test, T_test, marker=".")
axes[1].plot(X_test, clf.predict(X_test), color="red")
axes[1].set_title("test")
fig.show()
```
Did you get the same result as your own code?
Also, compare the score of the regression line obtained without shuffling the data with the score obtained after shuffling.
The scikit-learn linear regression code is publicly available on [github][1].
It should be a useful coding reference, so it is worth taking a look.
### 5-2-2. Evaluating regression models
Even when measuring performance, the metric has to be chosen according to the purpose.
For the question of which metric is commonly used for which kind of problem, we recommend checking prior work.
Knowing the characteristics (mathematical meaning) of each metric also helps.
[Reference][2]
Metrics commonly used to evaluate regression models include MAE, MSE, and the coefficient of determination.
1. MAE
2. MSE
3. Coefficient of determination (R^2)
scikit-learn also provides modules that compute these.
[1]:https://github.com/scikit-learn/scikit-learn/blob/1495f69242646d239d89a5713982946b8ffcf9d9/sklearn/linear_model/base.py#L367
[2]:https://scikit-learn.org/stable/modules/model_evaluation.html
```
from sklearn import metrics
T_pred = clf.predict(X_test)
print("MAE: ", metrics.mean_absolute_error(T_test, T_pred))
print("MSE: ", metrics.mean_squared_error(T_test, T_pred))
print("決定係数: ", metrics.r2_score(T_test, T_pred))
```
### 5-2-3. Trying other scikit-learn models
```
# 1. Prepare the dataset
from sklearn import datasets
iris = datasets.load_iris()  # load the Iris dataset here
print(iris.data[0], iris.target[0])  # data and label of the first sample
# 2. Split into training and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
# 3. Classify using a linear SVM
from sklearn.svm import SVC, LinearSVC
clf = LinearSVC()
clf.fit(X_train, y_train)  # train
# 4. Measure the classifier's performance
y_pred = clf.predict(X_test)  # predict
print(metrics.classification_report(y_true=y_test, y_pred=y_pred))  # evaluate the predictions
```
### 5-2-4. Evaluating classification models
Let's consider metrics for classification problems. Even among common metrics there are at least the following four.
1. Accuracy
2. Precision
3. Recall
4. F1-score
(Precision, recall, and F1-score come in macro, micro, weighted, and other variants.)
Let's look at each of these values for the experiment above.
```
print('accuracy: ', metrics.accuracy_score(y_test, y_pred))
print('precision:', metrics.precision_score(y_test, y_pred, average='macro'))
print('recall: ', metrics.recall_score(y_test, y_pred, average='macro'))
print('F1 score: ', metrics.f1_score(y_test, y_pred, average='macro'))
```
## 5-3. Coding tailored to the problem
### 5-3-1. Visualizing the Iris data
The Iris data is 4-dimensional, so it cannot be visualized directly.
We compress the 4-dimensional data to 2 dimensions with PCA and visualize it.
```
from sklearn.decomposition import PCA
from sklearn import datasets
iris = datasets.load_iris()
pca = PCA(n_components=2)
X, y = iris.data, iris.target
X_pca = pca.fit_transform(X)  # dimensionality reduction
print(X_pca.shape)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y);
# try classification using the dimensionality-reduced data
X_train, X_test, y_train, y_test = train_test_split(X_pca, iris.target)
clf = LinearSVC()
clf.fit(X_train, y_train)
y_pred2 = clf.predict(X_test)
from sklearn import metrics
print(metrics.classification_report(y_true=y_test, y_pred=y_pred2))  # evaluate the predictions
```
### 5-3-2. Processing text
#### Designing features from text
We build count vectors from the text and create feature vectors using TF-IDF.
Several feature designs are possible; we use this one as an example.
Here we use the 20newsgroups dataset.
```
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
news_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
count_vec = CountVectorizer()
X_train_counts = count_vec.fit_transform(news_train.data)
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
```
#### Training with Naive Bayes
```
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tf, news_train.target)
docs = ["God is love.", "I study about Computer Science."]
X_test_counts = count_vec.transform(docs)
X_test_tf = tf_transformer.transform(X_test_counts)
preds = clf.predict(X_test_tf)
for d, label_id in zip(docs, preds):
print("{} -> {}".format(d, news_train.target_names[label_id]))
```
In this way, we built a learner that outputs which of the categories a given sentence belongs to.
By applying this technique, we can solve classification problems on natural-language sentences, such as whether a sentence is positive or negative, or whether it is spam.
### 5-3-3. Combining steps with Pipeline
```
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('countvec', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
text_clf.fit(news_train.data, news_train.target)
for d, label_id in zip(docs, text_clf.predict(docs)):
print("{} -> {}".format(d, news_train.target_names[label_id]))
```
## 5.4 scikit-learn compliant coding
There are many advantages to writing scikit-learn compliant code.
1. You can use the grid search and cross validation that scikit-learn provides.
2. Swapping your model with other existing scikit-learn methods becomes easy.
3. Your code is easier for other people to read and to use.
4. <span style="color: gray; ">You might even become a committer to the project itself?</span>
The details are described in the [Developer’s Guide][1].
[1]:https://scikit-learn.org/stable/developers/#rolling-your-own-estimator
In scikit-learn, models are classified into the following four types.
- Classifier
    - Classification models such as the Naive Bayes classifier
- Clustering
    - Clustering models such as K-means
- Regressor
    - Regression models such as Lasso and Ridge
- Transformer
    - Variable transformation models such as PCA
***What you need to do for compliant coding:***
- Inherit from sklearn.base.BaseEstimator
- Also inherit (multiple inheritance) the Mixin corresponding to the type above
(for prediction models)
- Implement a fit method
    - Manipulating parameters inside __init__ will break grid search (see below)
- Implement a predict method
### 5-4-1. scikit-learn compliant coding of ridge regression
As a test, let's take the MyLinearRegression we coded earlier and rewrite it in scikit-learn compliant form.
While we are at it, let's also make ridge regression selectable as an option.
```
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.utils.validation import check_X_y, check_is_fitted, check_array
```
Since this is a regression model, we inherit from BaseEstimator and RegressorMixin.
To add the ridge regression option, we also add a hyperparameter to init.
To normalize the shape and dtype of the inputs, use ```check_X_y``` and ```check_array``` (recommended).
```
class MyLinearRegression(BaseEstimator, RegressorMixin):
def __init__(self, lam = 0):
"""
Initialize a coefficient and an intercept.
"""
        self.a = None
        self.b = None
self.lam = lam
def fit(self, X, y):
"""
X: array-like, shape (n_samples, n_features)
y: array, shape (n_samples,)
Estimate a coefficient and an intercept from data.
"""
X, y = check_X_y(X, y, y_numeric=True)
        if self.lam != 0:
            pass  # TODO: ridge regression estimate (regularization strength self.lam)
        else:
            pass  # TODO: ordinary least-squares estimate
        self.a_ = None  # TODO: estimated slope
        self.b_ = None  # TODO: estimated intercept
return self
def predict(self, X):
"""
Calc y from X
"""
        check_is_fitted(self, "a_", "b_")  # check that the model has been fitted (recommended)
X = check_array(X)
        y = None  # TODO: compute predictions from self.a_ and self.b_
        return y
```
***Constraints***
- Give every variable declared in init a default value
    - Also, the argument names must match the attribute names inside the class
- Do not pass data to init. Any data processing (if needed) should be done inside fit
- Values estimated from the data are distinguished by a trailing underscore. In this case, define a_ and b_ inside the fit method.
    - Do not declare variables ending with an underscore inside init.
- Do not validate or transform the arguments inside init. For example, doing ```self.lam=2*lam``` will break grid search. [Reference][1]
> As model_selection.GridSearchCV uses set_params to apply parameter setting to estimators, it is essential that calling set_params has the same effect as setting parameters using the __init__ method. The easiest and recommended way to accomplish this is to not do any parameter validation in __init__. All logic behind estimator parameters, like translating string arguments into functions, should be done in fit.
It is also a good idea to use the code on [github][2] as a model.
[1]:https://scikit-learn.org/stable/developers/contributing.html#coding-guidelines
[2]:https://github.com/scikit-learn/scikit-learn/blob/1495f69242646d239d89a5713982946b8ffcf9d9/sklearn/linear_model/base.py#L367
### 5-4-2. Checking scikit-learn compliance
To check whether your own code is really scikit-learn compliant, run the following.
```
from sklearn.utils.estimator_checks import check_estimator
check_estimator(MyLinearRegression)
```
If there is a problem, it should point it out. Note that you do not necessarily have to pass all of the above checks.
#### Grid Search
Now that you have built a compliant model, let's use scikit-learn to choose the hyperparameter.
```
import numpy as np
from sklearn.model_selection import GridSearchCV
np.random.seed(0)
# Grid search
parameters = {'lam':np.exp([i for i in range(-30,1)])}
reg = GridSearchCV(MyLinearRegression(),parameters,cv=5)
reg.fit(X_train,T_train)
best = reg.best_estimator_
# coefficient of determination (R^2)
print("R^2: ", best.score(X_train, T_train))  # available thanks to the inherited Mixin (RegressorMixin provides score)
# lambda
print("lam: ", best.lam)
fig, axes = plt.subplots(ncols=2, figsize=(12, 4))
axes[0].scatter(X_train, T_train, marker=".")
axes[0].plot(X_train, best.predict(X_train), color="red")
axes[0].set_title("train")
axes[1].scatter(X_test, T_test, marker=".")
axes[1].plot(X_test, best.predict(X_test), color="red")
axes[1].set_title("test")
fig.show()
```
## [Exercises](./../exercise/questions.md#chapter-5)
# Chapter 12 - Principal Components Analysis with scikit-learn
This notebook contains code accompanying Chapter 12 Principal Components Analysis with scikit-learn in *Practical Discrete Mathematics* by Ryan T. White and Archana Tikayat Ray.
## Eigenvalues and eigenvectors, orthogonal bases
### Example: Pizza nutrition
```
import pandas as pd
dataset = pd.read_csv('pizza.csv')
dataset.head()
```
### Example: Computing eigenvalues and eigenvectors
```
import numpy as np
A = np.array([[3,1], [1,3]])
l, v = np.linalg.eig(A)
print("The eigenvalues are:\n ",l)
print("The eigenvectors are:\n ", v)
```
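As a quick sanity check (not in the original text), each eigenpair returned above should satisfy A v = lambda v:
```
# Verify A @ v[:, i] equals l[i] * v[:, i] for each eigenpair (eigenvectors are the columns of v)
for i in range(len(l)):
    print(np.allclose(A @ v[:, i], l[i] * v[:, i]))
```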
## The scikit-learn implementation of PCA
We will start by importing the dataset and then dropping the brand column from it. This is done to make sure that all our feature variables are numbers and hence can be scaled/normalized. We will then create another variable called target which will contain the names of the brands of pizzas.
```
import pandas as pd
dataset = pd.read_csv('pizza.csv')
#Dropping the brand name column before standardizing the data
df_num = dataset.drop(["brand"], axis=1)
# Setting the brand name column as the target variable
target = dataset['brand']
```
Now that we have the dataset in order, we will then normalize the columns of the dataset to make sure that the mean for a variable is 0 and the variance is 1 and then we will run PCA on the dataset.
```
#Scaling the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df_num)
scaled_data = scaler.transform(df_num)
#Applying PCA to the scaled data
from sklearn.decomposition import PCA
#Reducing the dimesions to 2 components so that we can have a 2D visualization
pca = PCA(n_components = 2)
pca.fit(scaled_data)
#Applying to our scaled dataset
scaled_data_pca = pca.transform(scaled_data)
#Check the shape of the original dataset and the new dataset
print("The dimensions of the original dataset is: ", scaled_data.shape)
print("The dimensions of the dataset after performing PCA is: ", scaled_data_pca.shape)
```
Now we have reduced our 7-dimensional dataset to its 2 principal components as can be seen from the dimensions shown above. We will move forward with plotting the principal components to check whether 2 principal components were enough to capture the variability in the dataset – the different nutritional content of pizzas produced by different companies.
```
#Plotting the principal components
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x=scaled_data_pca[:,0], y=scaled_data_pca[:,1], hue=target)
plt.legend(loc="best")
plt.gca().set_aspect("equal")
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.show()
```
Now, we will move on to perform PCA in a way where we do not choose the number of desired principal components, rather we choose the number of principal components that add up to a certain desired variance. The Python implementation of this is very similar to the previous way with very slight changes to the code as shown below.
```
import pandas as pd
dataset = pd.read_csv('pizza.csv')
#Dropping the brand name column before standardizing the data
df_num = dataset.drop(["brand"], axis=1)
# Setting the brand name column as the target variable
target = dataset['brand']
#Scaling the data (Step 1)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df_num)
scaled_data = scaler.transform(df_num)
#Applying PCA to the scaled data
from sklearn.decomposition import PCA
#Setting the variance to 0.95
pca = PCA(n_components = 0.95)
pca.fit(scaled_data)
#Applying to our scaled dataset
scaled_data_pca = pca.transform(scaled_data)
#Check the shape of the original dataset and the new dataset
print("The dimensions of the original dataset are: ", scaled_data.shape)
print("The dimensions of the dataset after performing PCA is: ", scaled_data_pca.shape)
```
As we can see from the above output, 3 principal components are required to capture 95% of the variance in the dataset. This means that by choosing 2 principal directions previously, we were capturing < 95% of the variance in the dataset. Despite capturing < 95% of the variance, we were able to visualize the fact that the pizzas produced by different companies have different nutritional contents.
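To see exactly how much variance each retained component explains, we can inspect the fitted PCA object's `explained_variance_ratio_` attribute; a short check using the `pca` object fitted above:
```
import numpy as np
print(pca.explained_variance_ratio_)             # variance share of each principal component
print(np.cumsum(pca.explained_variance_ratio_))  # cumulative share; the last value should be at least 0.95
```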
## An application to real-world data
The first step is to import the data as shown below. It is going to take some time since it is a big dataset, hence hang tight. The dataset contains images of 70000 digits (0-9) where each image has 784 features.
```
#Importing the dataset
from sklearn.datasets import fetch_openml
mnist_data = fetch_openml('mnist_784', version = 1, as_frame=False)  # as_frame=False keeps the data as NumPy arrays for the indexing and reshaping below
# Choosing the independent (X) and dependent variables (y)
X,y = mnist_data["data"], mnist_data["target"]
```
Now that we have the dataset imported, we will move on to visualize the image of a digit to get familiar with the dataset. For visualization, we will use the `matplotlib` library. We will visualize the 50000th digit image. Feel free to check out other digit images of your choice – make sure to use an index between 0 and 69999. We will set colormap to "binary" to output a grayscale image.
```
#Plotting one of the digits
import matplotlib.pyplot as plt
plt.figure(1)
#Plotting the 50000th digit
digit = X[50000]
#Reshaping the 784 features into a 28x28 matrix
digit_image = digit.reshape(28,28)
plt.imshow(digit_image, cmap='binary')
plt.show()
```
Next, we will apply PCA to this dataset to reduce its dimension from $28*28=784$ to a lower number. We will plot the proportion of the variation that is reflected by PCA-reduced dimensional data of different dimensions.
```
#Scaling the data
from sklearn.preprocessing import StandardScaler
scaled_mnist_data = StandardScaler().fit_transform(X)
print(scaled_mnist_data.shape)
#Applying PCA to our dataset
from sklearn.decomposition import PCA
pca = PCA(n_components=784)
mnist_data_pca = pca.fit_transform(scaled_mnist_data)
#Calculating cumulative variance captured by PCs
import numpy as np
variance_percentage = pca.explained_variance_/np.sum(pca.explained_variance_)
#Calculating cumulative variance
cumulative_variance = np.cumsum(variance_percentage)
#Plotting cumulative variance
import matplotlib.pyplot as plt
plt.figure(2)
plt.plot(cumulative_variance)
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative variance explained by PCs')
plt.grid()
plt.show()
```
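If you would rather read the component count for a given variance target off the array than off the plot, a small sketch like the following works (it reuses `cumulative_variance` from above; the 0.95 target is just an example):
```
#Finding the smallest number of components that reaches a chosen variance target
target_variance = 0.95
n_needed = np.argmax(cumulative_variance >= target_variance) + 1
print("Components needed for {:.0%} of the variance: {}".format(target_variance, n_needed))
```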
# Global Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.pyplot import subplots
```
### External Package Imports
```
import os as os
import pickle as pickle
import pandas as pd
```
### Module Imports
Here I am using a few of my own packages; they are available on GitHub under [__theandygross__](https://github.com/theandygross) and should all be installable by <code>python setup.py</code>.
```
from Stats.Scipy import *
from Stats.Survival import *
from Helpers.Pandas import *
from Helpers.LinAlg import *
from Figures.FigureHelpers import *
from Figures.Pandas import *
from Figures.Boxplots import *
from Figures.Regression import *
#from Figures.Survival import draw_survival_curve, survival_and_stats
#from Figures.Survival import draw_survival_curves
#from Figures.Survival import survival_stat_plot
import Data.Firehose as FH
from Data.Containers import get_run
```
### Import Global Parameters
* These need to be changed before you will be able to successfully run this code
```
import NotebookImport
from Global_Parameters import *
```
### Tweaking Display Parameters
```
pd.set_option('precision', 3)
pd.set_option('display.width', 300)
plt.rcParams['font.size'] = 12
'''Color schemes for paper taken from http://colorbrewer2.org/'''
colors = plt.rcParams['axes.color_cycle']
colors_st = ['#CA0020', '#F4A582', '#92C5DE', '#0571B0']
colors_th = ['#E66101', '#FDB863', '#B2ABD2', '#5E3C99']
import seaborn as sns
sns.set_context('paper',font_scale=1.5)
sns.set_style('white')
```
### Read in All of the Expression Data
This reads in data that was pre-processed in the [./Preprocessing/init_RNA](../Notebooks/init_RNA.ipynb) notebook.
```
codes = pd.read_hdf(RNA_SUBREAD_STORE, 'codes')
matched_tn = pd.read_hdf(RNA_SUBREAD_STORE, 'matched_tn')
rna_df = pd.read_hdf(RNA_SUBREAD_STORE, 'all_rna')
data_portal = pd.read_hdf(RNA_STORE, 'matched_tn')
genes = data_portal.index.intersection(matched_tn.index)
pts = data_portal.columns.intersection(matched_tn.columns)
rna_df = rna_df.ix[genes]
matched_tn = matched_tn.ix[genes, pts]
```
### Read in Gene-Sets for GSEA
```
from Data.Annotations import unstack_geneset_csv
gene_sets = unstack_geneset_csv(GENE_SETS)
gene_sets = gene_sets.ix[rna_df.index].fillna(0)
```
Initialize function for calling model-based gene set enrichment
```
from rpy2 import robjects
from rpy2.robjects import pandas2ri
pandas2ri.activate()
mgsa = robjects.packages.importr('mgsa')
gs_r = robjects.ListVector({i: robjects.StrVector(list(ti(g>0))) for i,g in
gene_sets.iteritems()})
def run_mgsa(vec):
v = robjects.r.c(*ti(vec))
r = mgsa.mgsa(v, gs_r)
res = pandas2ri.ri2pandas(mgsa.setsResults(r))
return res
```
### Function Tweaks
Running the binomial test across 450k probes in the same test space, we rerun the same test a lot. Here I memoize the function to cache results and not recompute them. This eats up a couple GB of memory but should be reasonable.
```
from scipy.stats import binom_test
def memoize(f):
memo = {}
def helper(x,y,z):
if (x,y,z) not in memo:
memo[(x,y,z)] = f(x,y,z)
return memo[(x,y,z)]
return helper
binom_test_mem = memoize(binom_test)
def binomial_test_screen(df, fc=1.5, p=.5):
"""
Run a binomial test on a DataFrame.
df:
DataFrame of measurements. Should have a multi-index with
subjects on the first level and tissue type ('01' or '11')
on the second level.
fc:
        Fold-change cutoff to use
"""
a, b = df.xs('01', 1, 1), df.xs('11', 1, 1)
dx = a - b
dx = dx[dx.abs() > np.log2(fc)]
n = dx.count(1)
counts = (dx > 0).sum(1)
cn = pd.concat([counts, n], 1)
cn = cn[cn.sum(1) > 0]
b_test = cn.apply(lambda s: binom_test_mem(s[0], s[1], p), axis=1)
dist = (1.*cn[0] / cn[1])
tab = pd.concat([cn[0], cn[1], dist, b_test],
keys=['num_ox', 'num_dx', 'frac', 'p'],
axis=1)
return tab
```
Added linewidth and number of bins arguments. This should get pushed eventually.
```
def draw_dist(vec, split=None, ax=None, legend=True, colors=None, lw=2, bins=300):
"""
Draw a smooth distribution from data with an optional splitting factor.
"""
_, ax = init_ax(ax)
if split is None:
split = pd.Series('s', index=vec.index)
colors = {'s': colors} if colors is not None else None
for l,v in vec.groupby(split):
if colors is None:
smooth_dist(v, bins=bins).plot(label=l, lw=lw, ax=ax)
else:
smooth_dist(v, bins=bins).plot(label=l, lw=lw, ax=ax, color=colors[l])
if legend and len(split.unique()) > 1:
ax.legend(loc='upper left', frameon=False)
```
Some helper functions for fast calculation of odds ratios on matrices.
```
def odds_ratio_df(a,b):
a = a.astype(int)
b = b.astype(int)
flip = lambda v: (v == 0).astype(int)
a11 = (a.add(b) == 2).sum(axis=1)
a10 = (a.add(flip(b)) == 2).sum(axis=1)
a01 = (flip(a).add(b) == 2).sum(axis=1)
a00 = (flip(a).add(flip(b)) == 2).sum(axis=1)
odds_ratio = (1.*a11 * a00) / (1.*a10 * a01)
df = pd.concat([a00, a01, a10, a11], axis=1,
keys=['00','01','10','11'])
return odds_ratio, df
def fet(s):
odds, p = stats.fisher_exact([[s['00'],s['01']],
[s['10'],s['11']]])
return p
```
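As a usage sketch (the two binary DataFrames below are made-up placeholders, not data from this study), the helpers compose like this: `odds_ratio_df` produces the odds ratios and the 2x2 count table, and `fet` is applied row-wise to get Fisher's exact p-values.
```
# Hypothetical example: two boolean DataFrames with matching index (features)
# and columns (samples). Real inputs would come from the data loaded above.
import numpy as np
example_a = pd.DataFrame(np.random.rand(3, 50) > 0.5)
example_b = pd.DataFrame(np.random.rand(3, 50) > 0.5)
odds, counts = odds_ratio_df(example_a, example_b)
p_values = counts.apply(fet, axis=1)
pd.concat([odds, p_values], axis=1, keys=['odds_ratio', 'p'])
```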
#### filter_pathway_hits
```
def filter_pathway_hits(hits, gs, cutoff=.00001):
'''
Takes a vector of p-values and a DataFrame of binary defined gene-sets.
Uses the ordering defined by hits to do a greedy filtering on the gene sets.
'''
l = [hits.index[0]]
for gg in hits.index:
flag = 0
for g2 in l:
if gg in l:
flag = 1
break
elif (chi2_cont_test(gs[gg], gs[g2])['p'] < cutoff):
flag = 1
break
if flag == 0:
l.append(gg)
hits_filtered = hits.ix[l]
return hits_filtered
```
# Advanced usage
This notebook shows some more advanced features of `skorch`. More examples will be added with time.
<table align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/skorch-dev/skorch/blob/master/notebooks/Advanced_Usage.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/skorch-dev/skorch/blob/master/notebooks/Advanced_Usage.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
### Table of contents
* [Setup](#Setup)
* [Callbacks](#Callbacks)
* [Writing your own callback](#Writing-a-custom-callback)
* [Accessing callback parameters](#Accessing-callback-parameters)
* [Working with different data types](#Working-with-different-data-types)
* [Working with datasets](#Working-with-Datasets)
* [Working with dicts](#Working-with-dicts)
* [Multiple return values](#Multiple-return-values-from-forward)
* [Implementing a simple autoencoder](#Implementing-a-simple-autoencoder)
* [Training the autoencoder](#Training-the-autoencoder)
* [Extracting the decoder and the encoder output](#Extracting-the-decoder-and-the-encoder-output)
```
! [ ! -z "$COLAB_GPU" ] && pip install torch skorch
import torch
from torch import nn
import torch.nn.functional as F
torch.manual_seed(0)
torch.cuda.manual_seed(0)
```
## Setup
### A toy binary classification task
We load a toy classification task from `sklearn`.
```
import numpy as np
from sklearn.datasets import make_classification
np.random.seed(0)
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X, y = X.astype(np.float32), y.astype(np.int64)
X.shape, y.shape, y.mean()
```
### Definition of the `pytorch` classification `module`
We define a vanilla neural network with two hidden layers. The output layer should have 2 output units since there are two classes. In addition, it should have a softmax nonlinearity, because later, when calling `predict_proba`, the output from the `forward` call will be used.
```
from skorch import NeuralNetClassifier
class ClassifierModule(nn.Module):
def __init__(
self,
num_units=10,
nonlin=F.relu,
dropout=0.5,
):
super(ClassifierModule, self).__init__()
self.num_units = num_units
self.nonlin = nonlin
self.dropout = dropout
self.dense0 = nn.Linear(20, num_units)
self.nonlin = nonlin
self.dropout = nn.Dropout(dropout)
self.dense1 = nn.Linear(num_units, 10)
self.output = nn.Linear(10, 2)
def forward(self, X, **kwargs):
X = self.nonlin(self.dense0(X))
X = self.dropout(X)
X = F.relu(self.dense1(X))
X = F.softmax(self.output(X), dim=-1)
return X
```
## Callbacks
Callbacks are a powerful and flexible way to customize the behavior of your neural network. They are all called at specific points during the model training, e.g. when training starts, or after each batch. Have a look at the `skorch.callbacks` module to see the callbacks that are already implemented.
### Writing a custom callback
Although `skorch` comes with a handful of useful callbacks, you may find that you would like to write your own callbacks. Doing so is straightforward, just remember these rules:
* They should inherit from `skorch.callbacks.Callback`.
* They should implement at least one of the `on_`-methods provided by the parent class (e.g. `on_batch_begin` or `on_epoch_end`).
* As argument, the `on_`-methods first get the `NeuralNet` instance, and, where appropriate, the local data (e.g. the data from the current batch). The method should also have `**kwargs` in the signature for potentially unused arguments.
* *Optional*: If you have attributes that should be reset when the model is re-initialized, those attributes should be set in the `initialize` method.
Here is an example of a callback that remembers at which epoch the validation accuracy reached a certain value. Then, when training is finished, it calls a mock Twitter API and tweets that epoch. We proceed as follows:
* We set the desired minimum accuracy during `__init__`.
* We set the critical epoch during `initialize`.
* After each epoch, if the critical accuracy has not yet been reached, we check if it was reached.
* When training finishes, we send a tweet informing us whether our training was successful or not.
```
from skorch.callbacks import Callback
def tweet(msg):
print("~" * 60)
print("*tweet*", msg, "#skorch #pytorch")
print("~" * 60)
class AccuracyTweet(Callback):
def __init__(self, min_accuracy):
self.min_accuracy = min_accuracy
def initialize(self):
self.critical_epoch_ = -1
def on_epoch_end(self, net, **kwargs):
if self.critical_epoch_ > -1:
return
# look at the validation accuracy of the last epoch
if net.history[-1, 'valid_acc'] >= self.min_accuracy:
self.critical_epoch_ = len(net.history)
def on_train_end(self, net, **kwargs):
if self.critical_epoch_ < 0:
msg = "Accuracy never reached {} :(".format(self.min_accuracy)
else:
msg = "Accuracy reached {} at epoch {}!!!".format(
self.min_accuracy, self.critical_epoch_)
tweet(msg)
```
Now we initialize a `NeuralNetClassifier` and pass your new callback in a list to the `callbacks` argument. After that, we train the model and see what happens.
```
net = NeuralNetClassifier(
ClassifierModule,
max_epochs=15,
lr=0.02,
warm_start=True,
callbacks=[AccuracyTweet(min_accuracy=0.7)],
)
net.fit(X, y)
```
Oh no, our model never reached a validation accuracy of 0.7. Let's train some more (this is possible because we set `warm_start=True`):
```
net.fit(X, y)
assert net.history[-1, 'valid_acc'] >= 0.7
```
Finally, the validation score exceeded 0.7. Hooray!
### Accessing callback parameters
Say you would like to use a learning rate schedule with your neural net, but you don't know what parameters are best for that schedule. Wouldn't it be nice if you could find those parameters with a grid search? With `skorch`, this is possible. Below, we show how to access the parameters of your callbacks.
To simplify the access to your callback parameters, it is best if you give your callback a name. This is achieved by passing the `callbacks` parameter a list of *name*, *callback* tuples, such as:
callbacks=[
        ('scheduler', LearningRateScheduler),
...
],
This way, you can access your callbacks using the double underscore semantics (as, for instance, in an `sklearn` `Pipeline`):
callbacks__scheduler__epoch=50,
So if you would like to perform a grid search on, say, the number of units in the hidden layer and the learning rate schedule, it could look something like this:
param_grid = {
'module__num_units': [50, 100, 150],
'callbacks__scheduler__epoch': [10, 50, 100],
}
*Note*: If you would like to refresh your knowledge on grid search, look [here](http://scikit-learn.org/stable/modules/grid_search.html#grid-search), [here](http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_text_feature_extraction.html), or in the *Basic_Usage* notebook.
Below, we show how accessing the callback parameters works our `AccuracyTweet` callback:
```
net = NeuralNetClassifier(
ClassifierModule,
max_epochs=10,
lr=0.1,
warm_start=True,
callbacks=[
('tweet', AccuracyTweet(min_accuracy=0.7)),
],
callbacks__tweet__min_accuracy=0.6,
)
net.fit(X, y)
```
As you can see, by passing `callbacks__tweet__min_accuracy=0.6`, we changed that parameter. The same can be achieved by calling the `set_params` method with the corresponding arguments:
```
net.set_params(callbacks__tweet__min_accuracy=0.75)
net.fit(X, y)
```
## Working with different data types
### Working with `Dataset`s
We encourage you to not pass `Dataset`s to `net.fit` but to let skorch handle `Dataset`s internally. Nonetheless, there are situations where passing `Dataset`s to `net.fit` is hard to avoid (e.g. if you want to load the data lazily during the training). This is supported by skorch but may have some unwanted side-effects relating to sklearn. For instance, `Dataset`s cannot be split into train and validation in a stratified fashion without explicit knowledge of the classification targets.
Below we show what happens when you try to fit with `Dataset` and the stratified split fails:
```
class MyDataset(torch.utils.data.Dataset):
def __init__(self, X, y):
self.X = X
self.y = y
assert len(X) == len(y)
def __len__(self):
return len(self.X)
def __getitem__(self, i):
return self.X[i], self.y[i]
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X, y = X.astype(np.float32), y.astype(np.int64)
dataset = MyDataset(X, y)
net = NeuralNetClassifier(ClassifierModule)
try:
net.fit(dataset, y=None)
except ValueError as e:
print("Error:", e)
net.train_split.stratified
```
As you can see, the stratified split fails since `y` is not known. There are two solutions to this:
* turn off stratified splitting ( `net.train_split.stratified=False`)
* pass `y` explicitly (if possible), even if it is implicitly contained in the `Dataset`
The second solution is shown below:
```
net.fit(dataset, y=y)
```
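For completeness, the first option would look roughly like the sketch below. It relies on the `train_split.stratified` attribute used a few cells above; depending on your skorch version you may instead prefer to pass a non-stratified split object when constructing the net.
```
# Sketch of option 1: disable stratified splitting so y can stay unknown.
net = NeuralNetClassifier(ClassifierModule)
net.train_split.stratified = False
net.fit(dataset, y=None)
```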
### Working with dicts
#### The standard case
skorch has built-in support for dictionaries as data containers. Here we show a somewhat contrived example of how to use dicts, but it should get the point across. First we create data and put it into a dictionary `X_dict` with two keys `X0` and `X1`:
```
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X, y = X.astype(np.float32), y.astype(np.int64)
X0, X1 = X[:, :10], X[:, 10:]
X_dict = {'X0': X0, 'X1': X1}
```
When skorch passes the dict to the pytorch module, it will pass the data as keyword arguments to the forward call. That means that we should accept the two keys `X0` and `X1` in the forward method, as shown below:
```
class ClassifierWithDict(nn.Module):
def __init__(
self,
num_units0=50,
num_units1=50,
nonlin=F.relu,
dropout=0.5,
):
super(ClassifierWithDict, self).__init__()
self.num_units0 = num_units0
self.num_units1 = num_units1
self.nonlin = nonlin
self.dropout = dropout
self.dense0 = nn.Linear(10, num_units0)
self.dense1 = nn.Linear(10, num_units1)
self.nonlin = nonlin
self.dropout = nn.Dropout(dropout)
self.output = nn.Linear(num_units0 + num_units1, 2)
# NOTE: We accept X0 and X1, the keys from the dict, as arguments
def forward(self, X0, X1, **kwargs):
X0 = self.nonlin(self.dense0(X0))
X0 = self.dropout(X0)
X1 = self.nonlin(self.dense1(X1))
X1 = self.dropout(X1)
X = torch.cat((X0, X1), dim=1)
X = F.relu(X)
X = F.softmax(self.output(X), dim=-1)
return X
```
As long as we keep this in mind, we are good to go.
```
net = NeuralNetClassifier(ClassifierWithDict, verbose=0)
net.fit(X_dict, y)
```
#### Working with sklearn `Pipeline` and `GridSearchCV`
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import GridSearchCV
```
sklearn makes the assumption that incoming data should be numpy/sparse arrays or something similar. This clashes with the use of dictionaries. Unfortunately, it is sometimes impossible to work around that for now (for instance using skorch with `BaggingClassifier`). Other times, there are possibilities.
When we have a preprocessing pipeline that involves `FunctionTransformer`, we have to pass the parameter `validate=False` (which is the default value now) so that sklearn allows the dictionary to pass through. Everything else works:
```
pipe = Pipeline([
('do-nothing', FunctionTransformer(validate=False)),
('net', net),
])
pipe.fit(X_dict, y)
```
When trying a grid or randomized search, it is not that easy to pass a dict. If we try, we will get an error:
```
param_grid = {
'net__module__num_units0': [10, 25, 50],
'net__module__num_units1': [10, 25, 50],
'net__lr': [0.01, 0.1],
}
grid_search = GridSearchCV(pipe, param_grid, scoring='accuracy', verbose=1, cv=3)
try:
grid_search.fit(X_dict, y)
except Exception as e:
print(e)
```
The error above occurs because sklearn gets the length of the input data, which is 2 for the dict, and believes that is inconsistent with the length of the target (1000).
To get around that, skorch provides a helper class called `SliceDict`. It allows us to wrap our dictionaries so that they also behave like a numpy array:
```
from skorch.helper import SliceDict
X_slice_dict = SliceDict(X0=X0, X1=X1) # X_slice_dict = SliceDict(**X_dict) would also work
```
The SliceDict shows the correct length, shape, and is sliceable across values:
```
print("Length of dict: {}, length of SliceDict: {}".format(len(X_dict), len(X_slice_dict)))
print("Shape of SliceDict: {}".format(X_slice_dict.shape))
print("Slicing the SliceDict slices across values: {}".format(X_slice_dict[:2]))
```
With this, we can call `GridSearchCV` just as expected:
```
grid_search.fit(X_slice_dict, y)
grid_search.best_score_, grid_search.best_params_
```
## Multiple return values from `forward`
Often, we want our `Module.forward` method to return more than just one value. There can be several reasons for this. Maybe, the criterion requires not one but several outputs. Or perhaps we want to inspect intermediate values to learn more about our model (say inspecting attention in a sequence-to-sequence model). Fortunately, `skorch` makes it easy to achieve this. In the following, we demonstrate how to handle multiple outputs from the `Module`.
To demonstrate this, we implement a very simple autoencoder. It consists of an encoder that reduces our input of 20 units to 5 units using two linear layers, and a decoder that tries to reconstruct the original input, again using two linear layers.
### Implementing a simple autoencoder
```
from skorch import NeuralNetRegressor
class Encoder(nn.Module):
def __init__(self, num_units=5):
super().__init__()
self.num_units = num_units
self.encode = nn.Sequential(
nn.Linear(20, 10),
nn.ReLU(),
nn.Linear(10, self.num_units),
nn.ReLU(),
)
def forward(self, X):
encoded = self.encode(X)
return encoded
class Decoder(nn.Module):
def __init__(self, num_units):
super().__init__()
self.num_units = num_units
self.decode = nn.Sequential(
nn.Linear(self.num_units, 10),
nn.ReLU(),
nn.Linear(10, 20),
)
def forward(self, X):
decoded = self.decode(X)
return decoded
```
The autoencoder module below actually returns a tuple of two values, the decoded input and the encoded input. This way, we cannot only use the decoded input to calculate the normal loss but also have access to the encoded state.
```
class AutoEncoder(nn.Module):
def __init__(self, num_units):
super().__init__()
self.num_units = num_units
self.encoder = Encoder(num_units=self.num_units)
self.decoder = Decoder(num_units=self.num_units)
def forward(self, X):
encoded = self.encoder(X)
decoded = self.decoder(encoded)
return decoded, encoded # <- return a tuple of two values
```
Since the module's `forward` method returns two values, we have to adjust our objective to do the right thing with those values. If we don't do this, the criterion wouldn't know what to do with the two values and would raise an error.
One strategy would be to only use the decoded state for the loss and discard the encoded state. For this demonstration, we have a different plan: We would like the encoded state to be sparse. Therefore, we add an L1 loss of the encoded state to the reconstruction loss. This way, the net will try to reconstruct the input as accurately as possible while keeping the encoded state as sparse as possible.
To implement this, the right method to override is called `get_loss`, which is where `skorch` computes and returns the loss. It gets the prediction (our tuple) and the target as input, as well as other arguments and keywords that we pass through. We create a subclass of `NeuralNetRegressor` that overrides said method and implements our idea for the loss.
```
class AutoEncoderNet(NeuralNetRegressor):
def get_loss(self, y_pred, y_true, *args, **kwargs):
decoded, encoded = y_pred # <- unpack the tuple that was returned by `forward`
loss_reconstruction = super().get_loss(decoded, y_true, *args, **kwargs)
loss_l1 = 1e-3 * torch.abs(encoded).sum()
return loss_reconstruction + loss_l1
```
*Note*: Alternatively, we could have used an unaltered `NeuralNetRegressor` but implement a custom criterion that is responsible for unpacking the tuple and computing the loss.
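A rough sketch of that alternative is shown below; the class name and `l1_weight` argument are made up here and are not part of the original example. The criterion receives the full tuple returned by `forward` as its first argument, so it can do the unpacking itself:
```
# Sketch only: a criterion that unpacks (decoded, encoded) itself and adds
# the same L1 penalty, so a plain NeuralNetRegressor can be used.
class ReconstructionWithL1(nn.Module):
    def __init__(self, l1_weight=1e-3):
        super().__init__()
        self.l1_weight = l1_weight
        self.mse = nn.MSELoss()

    def forward(self, y_pred, y_true):
        decoded, encoded = y_pred  # tuple returned by AutoEncoder.forward
        return self.mse(decoded, y_true) + self.l1_weight * torch.abs(encoded).sum()

# net = NeuralNetRegressor(AutoEncoder, module__num_units=5,
#                          criterion=ReconstructionWithL1, lr=0.3)
```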
### Training the autoencoder
Now that everything is ready, we train the model as usual. We initialize our net subclass with the `AutoEncoder` module and call the `fit` method with `X` both as input and as target (since we want to reconstruct the original data):
```
net = AutoEncoderNet(
AutoEncoder,
module__num_units=5,
lr=0.3,
)
net.fit(X, X)
```
Voilà, the model was trained using our custom loss function that makes use of both predicted values.
### Extracting the decoder and the encoder output
Sometimes, we may wish to inspect all the values returned by the `forward` method of the module. There are several ways to achieve this. In theory, we can always access the module directly by using the `net.module_` attribute. However, this is unwieldy, since this completely shortcuts the prediction loop, which takes care of important steps like casting `numpy` arrays to `pytorch` tensors and batching.
Also, we cannot use the `predict` method on the net. This method will only return the first output from the forward method, in this case the decoded state. The reason for this is that `predict` is part of the `sklearn` API, which requires there to be only one output. This is shown below:
```
y_pred = net.predict(X)
y_pred.shape # only the decoded state is returned
```
However, the net itself provides two methods to retrieve all outputs. The first one is the `net.forward` method, which retrieves *all* the predicted batches from the `Module.forward` and concatenates them. Use this to retrieve the complete decoded and encoded state:
```
decoded_pred, encoded_pred = net.forward(X)
decoded_pred.shape, encoded_pred.shape
```
The other method is called `net.forward_iter`. It is similar to `net.forward` but instead of collecting all the batches, this method is lazy and only yields one batch at a time. This can be especially useful if the output doesn't fit into memory:
```
for decoded_pred, encoded_pred in net.forward_iter(X):
# do something with each batch
break
decoded_pred.shape, encoded_pred.shape
```
Finally, let's make sure that our initial goal of having a sparse encoded state was met. We check how many activities are close to zero:
```
torch.isclose(encoded_pred, torch.zeros_like(encoded_pred)).float().mean()
```
As we had hoped, the encoded state is quite sparse, with the majority of outputs being 0.
# RDD basics
This notebook will introduce **three basic but essential Spark operations**. Two of them are the transformations map and filter. The other is the action collect. At the same time we will introduce the concept of persistence in Spark.
## Getting the data and creating the RDD
We will use the reduced dataset (10 percent) provided for the KDD Cup 1999, containing nearly half million network interactions. The file is provided as a Gzip file that we will download locally.
```
import urllib
f = urllib.urlretrieve ("http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz", "kddcup.data_10_percent.gz")
```
Now we can use this file to create our RDD.
```
data_file = "./kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)
```
## The filter transformation
This transformation can be applied to RDDs in order to keep just elements that satisfy a certain condition. More concretely, a function is evaluated on every element in the original RDD. The new resulting RDD will contain just those elements that make the function return True.
For example, imagine we want to count how many `normal.` interactions we have in our dataset. We can filter our `raw_data` RDD as follows.
```
normal_raw_data = raw_data.filter(lambda x: 'normal.' in x)
```
Now we can count how many elements we have in the new RDD.
```
from time import time
t0 = time()
normal_count = normal_raw_data.count()
tt = time() - t0
print "There are {} 'normal' interactions".format(normal_count)
print "Count completed in {} seconds".format(round(tt,3))
```
The **real calculations** (distributed) in Spark **occur when we execute actions and not transformations.** In this case counting is the action that we execute in the RDD. We can apply as many transformations as we would like in a RDD and no computation will take place until we call the first action which, in this case, takes a few seconds to complete.
## The map transformation
By using the map transformation in Spark, we can apply a function to every element in our RDD. **Python's lambdas are especially expressive for this particular task.**
In this case we want to read our data file as a CSV formatted one. We can do this by applying a lambda function to each element in the RDD as follows.
```
from pprint import pprint
csv_data = raw_data.map(lambda x: x.split(","))
t0 = time()
head_rows = csv_data.take(5)
tt = time() - t0
print "Parse completed in {} seconds".format(round(tt,3))
pprint(head_rows[0])
```
Again, **all action happens once we call the first Spark action** (i.e. take in this case). What if we take a lot of elements instead of just the first few?
```
t0 = time()
head_rows = csv_data.take(100000)
tt = time() - t0
print "Parse completed in {} seconds".format(round(tt,3))
```
We can see that it takes longer. The map function is applied now in a distributed way to a lot of elements on the RDD, hence the longer execution time.
## Using map and predefined functions
Of course we can use predefined functions with map. Imagine we want to have each element in the RDD as a key-value pair where the key is the tag (e.g. normal) and the value is the whole list of elements that represents the row in the CSV formatted file. We could proceed as follows.
```
def parse_interaction(line):
elems = line.split(",")
tag = elems[41]
return (tag, elems)
key_csv_data = raw_data.map(parse_interaction)
head_rows = key_csv_data.take(5)
pprint(head_rows[0])
```
## The collect action
**Basically it will get all the elements in the RDD into memory for us to work with them.** For this reason it has to be used with care, specially when working with large RDDs.
An example using our raw data.
```
t0 = time()
all_raw_data = raw_data.collect()
tt = time() - t0
print "Data collected in {} seconds".format(round(tt,3))
```
Every Spark worker node that has a fragment of the RDD has to be coordinated in order to retrieve its part, and then reduce everything together.
As a last example combining all the previous, we want to collect all the normal interactions as key-value pairs.
```
# get data from file
data_file = "./kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)
# parse into key-value pairs
key_csv_data = raw_data.map(parse_interaction)
# filter normal key interactions
normal_key_interactions = key_csv_data.filter(lambda x: x[0] == "normal.")
# collect all
t0 = time()
all_normal = normal_key_interactions.collect()
tt = time() - t0
normal_count = len(all_normal)
print "Data collected in {} seconds".format(round(tt,3))
print "There are {} 'normal' interactions".format(normal_count)
```
This count matches with the previous count for normal interactions. The new procedure is more time consuming. This is because we retrieve all the data with collect and then use Python's len on the resulting list. Before we were just counting the total number of elements in the RDD by using count.
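If all we need is the count, we can keep the counting on the cluster instead. The sketch below (reusing `raw_data` and `parse_interaction` from above) filters and counts without ever collecting the records to the driver:
```
# count 'normal.' interactions without collecting them
normal_count_distributed = raw_data.map(parse_interaction) \
                                   .filter(lambda x: x[0] == "normal.") \
                                   .count()
print "There are {} 'normal' interactions".format(normal_count_distributed)
```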
# Taylor problem 3.23
last revised: 04-Jan-2020 by Dick Furnstahl [[email protected]]
**This notebook is almost ready to go, except that the initial conditions and $\Delta v$ are different from the problem statement and there is no statement to print the figure. Fix these and you're done!**
This is a conservation of momentum problem, which in the end lets us determine the trajectories of the two masses before and after the explosion. How should we visualize that the center-of-mass of the pieces continues to follow the original parabolic path?
Plan:
1. Plot the original trajectory, also continued past the explosion time.
2. Plot the two trajectories after the explosion.
3. For some specified times of the latter two trajectories, connect the points and indicate the center of mass.
The implementation here could certainly be improved! Please make suggestions (and develop improved versions).
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
First define some functions we think we will need. The formulas are based on our paper-and-pencil work.
The trajectory starting from $t=0$ is:
$
\begin{align}
x(t) &= x_0 + v_{x0} t \\
y(t) &= y_0 + v_{y0} t - \frac{1}{2} g t^2
\end{align}
$
```
def trajectory(x0, y0, vx0, vy0, t_pts, g=9.8):
"""Calculate the x(t) and y(t) trajectories for an array of times,
which must start with t=0.
"""
return x0 + vx0*t_pts, y0 + vy0*t_pts - g*t_pts**2/2.
```
The velocity at the final time $t_f$ is:
$
\begin{align}
v_{x}(t) &= v_{x0} \\
v_{y}(t) &= v_{y0} - g t_f
\end{align}
$
```
def final_velocity(vx0, vy0, t_pts, g=9.8):
"""Calculate the vx(t) and vy(t) at the end of an array of times t_pts"""
return vx0, vy0 - g*t_pts[-1] # -1 gives the last element
```
The center of mass of two particles at $(x_1, y_1)$ and $(x_2, y_2)$ is:
$
\begin{align}
x_{cm} &= \frac{1}{2}(x_1 + x_2) \\
y_{cm} &= \frac{1}{2}(y_1 + y_2)
\end{align}
$
```
def com_position(x1, y1, x2, y2):
"""Find the center-of-mass (com) position given two positions (x,y)."""
return (x1 + x2)/2., (y1 + y2)/2.
```
**1. Calculate and plot the original trajectory up to the explosion.**
```
# initial conditions
x0_before, y0_before = [0., 0.] # put the origin at the starting point
vx0_before, vy0_before = [6., 3.] # given in the problem statement
g = 1. # as recommended
# Array of times to calculate the trajectory up to the explosion at t=4
t_pts_before = np.array([0., 1., 2., 3., 4.])
x_before, y_before = trajectory(x0_before, y0_before,
vx0_before, vy0_before,
t_pts_before, g)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x_before, y_before, 'ro-')
ax.set_xlabel('x')
ax.set_ylabel('y')
```
Does it make sense so far? Note that we could use more intermediate points to make a more correct curve (rather than the piecewise straight lines) but this is fine at least for a first pass.
**2. Calculate and plot the two trajectories after the explosion.**
For the second part of the trajectory, we reset our clock to $t=0$ because that is how our trajectory function is constructed. We'll need initial positions and velocities of the pieces just after the explosion. These are the final position of the combined piece before the explosion and the final velocity plus and minus $\Delta \mathbf{v}$. We are told $\Delta \mathbf{v}$. We have to figure out the final velocity before the explosion.
```
delta_v = np.array([2., 1.]) # change in velocity of one piece
# reset time to 0 for calculating trajectories
t_pts_after = np.array([0., 1., 2., 3., 4., 5.])
# Also could have used np.arange(0.,6.,1.)
x0_after = x_before[-1] # -1 here means the last element of the array
y0_after = y_before[-1]
vxcm0_after, vycm0_after = final_velocity(vx0_before, vy0_before,
t_pts_before, g)
# The _1 and _2 refer to the two pieces after the explosion
vx0_after_1 = vxcm0_after + delta_v[0]
vy0_after_1 = vycm0_after + delta_v[1]
vx0_after_2 = vxcm0_after - delta_v[0]
vy0_after_2 = vycm0_after - delta_v[1]
# Given the initial conditions after the explosion, we calculate trajectories
x_after_1, y_after_1 = trajectory(x0_after, y0_after,
vx0_after_1, vy0_after_1,
t_pts_after, g)
x_after_2, y_after_2 = trajectory(x0_after, y0_after,
vx0_after_2, vy0_after_2,
t_pts_after, g)
# This is the center-of-mass trajectory
xcm_after, ycm_after = trajectory(x0_after, y0_after,
vxcm0_after, vycm0_after,
t_pts_after, g)
# These are calculated points of the center-of-mass
xcm_pts, ycm_pts = com_position(x_after_1, y_after_1, x_after_2, y_after_2)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x_before, y_before, 'ro-', label='before explosion')
ax.plot(x_after_1, y_after_1, 'go-', label='piece 1 after')
ax.plot(x_after_2, y_after_2, 'bo-', label='piece 2 after')
ax.plot(xcm_after, ycm_after, 'r--', label='original trajectory')
ax.plot(xcm_pts, ycm_pts, 'o', color='black', label='center-of-mass of 1 and 2')
for i in range(len(t_pts_after)):
ax.plot([x_after_1[i], x_after_2[i]],
[y_after_1[i], y_after_2[i]],
'k--'
)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend();
```
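One minimal way to add the figure-print statement that the note at the top asks for (the filename here is arbitrary) is to save the last figure to disk:
```
fig.savefig('Taylor_problem_3.23.png', bbox_inches='tight')
```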
# Flux.pl
The `Flux.pl` Perl script takes four input parameters:
`Flux.pl [input file] [output file] [bin width (s)] [geometry base directory]`
or, as invoked from the command line,
`$ perl ./perl/Flux.pl [input file] [output file] [bin width (s)] [geometry directory]`
## Input Parameters
* `[input file]`
`Flux.pl` expects the first non-comment line of the input file to begin with a string of the form `<DAQ ID>.<channel>`. This is satisfied by threshold and wire delay files, as well as the outputs of data transformation scripts like `Sort.pl` and `Combine.pl` if their inputs are of the appropriate form.
If the input file doesn't meet this condition, `Flux.pl` -- specifically, the `all_geo_info{}` subroutine of `CommonSubs.pl` -- won't be able to load the appropriate geometry files and execution will fail (see the `[geometry directory]` parameter below).
* `[output file]`
This is what the output file will be named.
* `[binWidth]`
In physical terms, cosmic ray _flux_ is the number of incident rays per unit area per unit time. The `[binWidth]` parameter determines the "per unit time" portion of this quantity. `Flux.pl` will sort the events in its input data into bins of the given time interval, returning the number of events per unit area recorded within each bin.
* `[geometry directory]`
With `[binWidth]` handling the "per unit time" portion of the flux calculation, the geometry file associated with each detector handles the "per unit area".
`Flux.pl` expects geometry files to be stored in a directory structure of the form
```
geo/
├── 6119/
│ └── 6119.geo
├── 6148/
│ └── 6148.geo
└── 6203/
└── 6203.geo
```
where each DAQ has its own subdirectory whose name is the DAQ ID, and each such subdirectory has a geometry file whose name is given by the DAQ ID with the `.geo` extension. The command-line argument in this case is `geo/`, the parent directory. With this as the base directory, `Flux.pl` determines what geometry file to load by looking for the DAQ ID in the first line of data. This is why, as noted above, the first non-comment line of `[input file]` must begin with `<DAQ ID>.<channel>`.
## Flux Input Files
As we mentioned above, threshold files have the appropriate first-line structure to allow `Flux.pl` to access geometry data for them. So what does `Flux.pl` do when acting on a threshold file?
We'll test it using the threshold files `files/6148.2016.0109.0.thresh` and `files/6119.2016.0104.1.thresh` as input. First, take a look at the files themselves so we know what the input looks like:
```
!head -10 files/6148.2016.0109.0.thresh
!wc -l files/6148.2016.0109.0.thresh
!head -10 files/6119.2016.0104.1.thresh
!wc -l files/6119.2016.0104.1.thresh
```
(remember, `wc -l` returns a count of the number of lines in the file). These look like fairly standard threshold files. Now we'll see what `Flux.pl` does with them.
## The Parsl Flux App
For convenience, we'll wrap the UNIX command-line invocation of the `Flux.pl` script in a Parsl App, which will make it easier to work with from within the Jupyter Notebook environment.
```
# The prep work:
import parsl
from parsl.config import Config
from parsl.executors.threads import ThreadPoolExecutor
from parsl.app.app import bash_app,python_app
from parsl import File
config = Config(
executors=[ThreadPoolExecutor()],
lazy_errors=True
)
parsl.load(config)
# The App:
@bash_app
def Flux(inputs=[], outputs=[], binWidth='600', geoDir='geo/', stdout='stdout.txt', stderr='stderr.txt'):
return 'perl ./perl/Flux.pl %s %s %s %s' % (inputs[0], outputs[0], binWidth, geoDir)
```
_Edit stuff below to use the App_
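A sketch of what that might look like with the App defined above, using the same file names and parameters as the shell commands shown below (exact `File` handling can vary between Parsl versions):
```
flux_run = Flux(inputs=[File('files/6148.2016.0109.0.thresh')],
                outputs=[File('outputs/ThreshFluxOut6148_01')],
                binWidth='600', geoDir='geo/')
flux_run.result()  # block until the Perl script finishes
```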
## Flux Output
Below is the output generated by `Flux.pl` using the threshold files `6148.2016.0109.0.thresh` and `6119.2016.0104.1.thresh` (separately) as input:
```
$ perl ./perl/Flux.pl files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_01 600 geo/
$ head -15 outputs/ThreshFluxOut6148_01
#cf12d07ed2dfe4e4c0d52eb663dd9956
#md5_hex(1536259294 1530469616 files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_01 600 geo/)
01/09/2016 00:06:00 59.416172 8.760437
01/09/2016 00:16:00 63.291139 9.041591
01/09/2016 00:26:00 71.041075 9.579177
01/09/2016 00:36:00 50.374580 8.066389
01/09/2016 00:46:00 55.541204 8.469954
01/09/2016 00:56:00 73.624386 9.751788
01/09/2016 01:06:00 42.624645 7.419998
01/09/2016 01:16:00 54.249548 8.370887
01/09/2016 01:26:00 45.207957 7.641539
01/09/2016 01:36:00 42.624645 7.419998
01/09/2016 01:46:00 65.874451 9.224268
01/09/2016 01:56:00 59.416172 8.760437
01/09/2016 02:06:00 94.290881 11.035913
```
```
$ perl ./perl/Flux.pl files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_01 600 geo/
$ head -15 outputs/ThreshFluxOut6119_01
#84d0f02f26edb8f59da2d4011a27389d
#md5_hex(1536259294 1528996902 files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_01 600 geo/)
01/04/2016 21:00:56 12496.770860 127.049313
01/04/2016 21:10:56 12580.728494 127.475379
01/04/2016 21:20:56 12929.475588 129.230157
01/04/2016 21:30:56 12620.769827 127.678079
01/04/2016 21:40:56 12893.309222 129.049289
01/04/2016 21:50:56 12859.726169 128.881113
01/04/2016 22:00:56 12782.226815 128.492174
01/04/2016 22:10:56 12520.020666 127.167443
01/04/2016 22:20:56 12779.643503 128.479189
01/04/2016 22:30:56 12746.060449 128.310265
01/04/2016 22:40:56 12609.144924 127.619264
01/04/2016 22:50:56 12372.771894 126.417419
01/04/2016 23:00:56 12698.269181 128.069490
```
`Flux.pl` seems to give reasonable output with a threshold file as input, provided the DAQ has a geometry file that's up to standards. Can we interpret the output? Despite the lack of a header line, some reasonable inferences will make it clear.
The first column is clearly the date that the data was taken, and in both cases it agrees with the date indicated by the threshold file's filename.
The second column is clearly time-of-day values, but what do they mean? We might be tempted to think of them as the full-second portion of cosmic ray event times, but we note in both cases that they occur in a regular pattern of exactly every ten minutes. Of course, that happens to be exactly what we selected as the `binWidth` parameter, 600s = 10min. These are the time bins into which the cosmic ray event data is organized.
Since we're calculating flux -- muon strikes per unit area per unit time -- we expect the flux count itself to be included in the data, and in fact this is what the third column is, in units of $events/m^2/min$. Note that the "$/min$" part is *always* a part of the units of the third column, no matter what the size of the time bins we selected.
Finally, when doing science, having a measurement means having uncertainty. The fourth column is the obligatory statistical uncertainty in the flux.
## An exercise in statistical uncertainty
The general formula for flux $\Phi$ is
$$\Phi = \frac{N}{AT}$$
where $N$ is the number of incident events, $A$ is the cross-sectional area over which the flux is measured, and $T$ is the time interval over which the flux is measured.
By the rule of quadrature for propagating uncertainties,
$$\frac{\delta \Phi}{\Phi} \approx \frac{\delta N}{N} + \frac{\delta A}{A} + \frac{\delta T}{T}$$
Here, $N$ is the raw count of muon hits in the detector, an integer with a standard statistical uncertainty of $\sqrt{N}$.
In our present analysis, errors in the bin width and detector area are negligible compared to the statistical fluctuation of cosmic ray muons. Thus, we'll take $\delta A \approx \delta T \approx 0$ to leave
$$\delta \Phi \approx \frac{\delta N}{N} \Phi = \frac{\Phi}{\sqrt{N}}$$
Rearranging this a bit, we find that we should be able to calculate the exact number of muon strikes for each time bin as
$$N \approx \left(\frac{\Phi}{\delta\Phi}\right)^2.$$
Let's see what happens when we apply this to the data output from `Flux.pl`. For the 6148 data with `binWidth=600`, we find
```
Date Time Phi dPhi (Phi/dPhi)^2
01/09/16 12:06:00 AM 59.416172 8.760437 45.999996082
01/09/16 12:16:00 AM 63.291139 9.041591 49.0000030968
01/09/16 12:26:00 AM 71.041075 9.579177 54.9999953935
01/09/16 12:36:00 AM 50.37458 8.066389 38.9999951081
01/09/16 12:46:00 AM 55.541204 8.469954 43.0000020769
01/09/16 12:56:00 AM 73.624386 9.751788 57.000001784
01/09/16 01:06:00 AM 42.624645 7.419998 33.0000025577
01/09/16 01:16:00 AM 54.249548 8.370887 41.999999903
01/09/16 01:26:00 AM 45.207957 7.641539 35.0000040418
01/09/16 01:36:00 AM 42.624645 7.419998 33.0000025577
01/09/16 01:46:00 AM 65.874451 9.224268 51.00000197
01/09/16 01:56:00 AM 59.416172 8.760437 45.999996082
01/09/16 02:06:00 AM 94.290881 11.035913 72.9999984439
```
The numbers we come up with are in fact integers to an excellent approximation!
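This check is easy to reproduce programmatically. The sketch below re-reads the `Flux.pl` output generated earlier (column layout assumed from the listing above) and computes $(\Phi/\delta\Phi)^2$ for every bin:
```
import pandas as pd
# the two leading comment lines are skipped via comment='#'
cols = ['date', 'time', 'flux', 'dflux']
flux_df = pd.read_csv('outputs/ThreshFluxOut6148_01', comment='#',
                      delim_whitespace=True, names=cols)
flux_df['N'] = (flux_df['flux'] / flux_df['dflux'])**2
flux_df.head(13)
```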
---
### Exercise 1
**A)** Using the data table above, round the `(Phi/dPhi)^2` column to the nearest integer, calling it `N`. With $\delta N = \sqrt{N}$, calculate $\frac{\delta N}{N}$ for each row in the data.
**B)** Using your knowledge of the cosmic ray muon detector, estimate the uncertainty $\delta A$ in the detector area $A$ and the uncertainty $\delta T$ in the time bin $T$ given as the input `binWidth` parameter. Calculate $\frac{\delta A}{A}$ and $\frac{\delta T}{T}$ for this analysis.
**C)** Considering the results of **A)** and **B)**, do you think our previous assumption that $\frac{\delta A}{A} \approx 0$ and $\frac{\delta T}{T} \approx 0$ compared to $\frac{\delta N}{N}$ is justified?
---
### Additional Exercises
* Do the number of counts $N$ in one `binWidth=600s` bin match the sum of counts in the ten corresponding `binWidth=60s` bins?
* Considering raw counts, do you think the "zero" bins in the above analyses are natural fluctuations in cosmic ray muon strikes?
* Do the flux values shown above reasonably agree with the known average CR muon flux at sea level? If "no," what effects do you think might account for the difference?
---
We can dig more information out of the `Flux.pl` output by returning to the definition of flux
$$\Phi = \frac{N}{AT}.$$
Now that we know $N$ for each data point, and given that we know the bin width $T$ because we set it for the entire analysis, we should be able to calculate the area of the detector as
$$A = \frac{N}{\Phi T}$$
One important comment: `Flux.pl` gives flux values in units of `events/m^2/min` - note the use of minutes instead of seconds. When substituting a numerical value for $T$, we must convert the command line parameter `binWidth=600` from $600s$ to $10min$.
When we perform this calculation, we find consistent values for $A$:
```
Date Time Phi dPhi N=(Phi/dPhi)^2 A=N/Phi T
01/09/16 12:06:00 AM 59.416172 8.760437 45.999996082 0.0774199928
01/09/16 12:16:00 AM 63.291139 9.041591 49.0000030968 0.0774200052
01/09/16 12:26:00 AM 71.041075 9.579177 54.9999953935 0.0774199931
01/09/16 12:36:00 AM 50.37458 8.066389 38.9999951081 0.0774199906
01/09/16 12:46:00 AM 55.541204 8.469954 43.0000020769 0.0774200035
01/09/16 12:56:00 AM 73.624386 9.751788 57.000001784 0.0774200029
01/09/16 01:06:00 AM 42.624645 7.419998 33.0000025577 0.0774200056
01/09/16 01:16:00 AM 54.249548 8.370887 41.999999903 0.0774199997
01/09/16 01:26:00 AM 45.207957 7.641539 35.0000040418 0.0774200083
01/09/16 01:36:00 AM 42.624645 7.419998 33.0000025577 0.0774200056
01/09/16 01:46:00 AM 65.874451 9.224268 51.00000197 0.077420003
01/09/16 01:56:00 AM 59.416172 8.760437 45.999996082 0.0774199928
01/09/16 02:06:00 AM 94.290881 11.035913 72.9999984439 0.0774199983
```
In fact, the area of one standard 6000-series QuarkNet CRMD detector panel is $0.07742m^2$.
It's important to note that we're reversing only the calculations, not the physics! That is, we find $A=0.07742m^2$ because that's the value stored in the `6148.geo` file, not because we're able to determine the actual area of the detector panel from the `Flux.pl` output data using physical principles.
## Testing binWidth
To verify that the third-column flux values behave as expected, we can run a quick check by manipulating the `binWidth` parameter. We'll run `Flux.pl` on the above two threshold files again, but this time we'll reduce `binWidth` by a factor of 10:
```
$ perl ./perl/Flux.pl files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_02 60 geo/
```
```
!head -15 outputs/ThreshFluxOut6148_02
```
```
$ perl ./perl/Flux.pl files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_02 60 geo/
```
```
!head -15 outputs/ThreshFluxOut6119_02
```
In the case of the 6148 data, our new fine-grained binning reveals some sparsity in the first several minutes of the data, as all of the bins between the `2:30` bin and the `13:30` bin are empty of muon events (and therefore not reported). What happened here? It's difficult to say -- under normal statistical variations, it's possible that there were simply no recorded events during these bins. It's also possible that the experimenter adjusted the level of physical shielding around the detector during these times, or had a cable unplugged while troubleshooting.
# Character-level recurrent sequence-to-sequence model
**Author:** [fchollet](https://twitter.com/fchollet)<br>
**Date created:** 2017/09/29<br>
**Last modified:** 2020/04/26<br>
**Description:** Character-level recurrent sequence-to-sequence model.
## Introduction
This example demonstrates how to implement a basic character-level
recurrent sequence-to-sequence model. We apply it to translating
short English sentences into short French sentences,
character-by-character. Note that it is fairly unusual to
do character-level machine translation, as word-level
models are more common in this domain.
**Summary of the algorithm**
- We start with input sequences from a domain (e.g. English sentences)
and corresponding target sequences from another domain
(e.g. French sentences).
- An encoder LSTM turns input sequences to 2 state vectors
(we keep the last LSTM state and discard the outputs).
- A decoder LSTM is trained to turn the target sequences into
the same sequence but offset by one timestep in the future,
a training process called "teacher forcing" in this context.
It uses as initial state the state vectors from the encoder.
Effectively, the decoder learns to generate `targets[t+1...]`
given `targets[...t]`, conditioned on the input sequence.
- In inference mode, when we want to decode unknown input sequences, we:
- Encode the input sequence into state vectors
- Start with a target sequence of size 1
(just the start-of-sequence character)
- Feed the state vectors and 1-char target sequence
to the decoder to produce predictions for the next character
- Sample the next character using these predictions
(we simply use argmax).
- Append the sampled character to the target sequence
- Repeat until we generate the end-of-sequence character or we
hit the character limit.
## Setup
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
```
## Download the data
```
!!curl -O http://www.manythings.org/anki/fra-eng.zip
!!unzip fra-eng.zip
```
## Configuration
```
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = "fra.txt"
```
## Prepare the data
```
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, "r", encoding="utf-8") as f:
lines = f.read().split("\n")
for line in lines[: min(num_samples, len(lines) - 1)]:
input_text, target_text, _ = line.split("\t")
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = "\t" + target_text + "\n"
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print("Number of samples:", len(input_texts))
print("Number of unique input tokens:", num_encoder_tokens)
print("Number of unique output tokens:", num_decoder_tokens)
print("Max sequence length for inputs:", max_encoder_seq_length)
print("Max sequence length for outputs:", max_decoder_seq_length)
input_token_index = dict([(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict([(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32"
)
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.0
encoder_input_data[i, t + 1 :, input_token_index[" "]] = 1.0
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.0
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.0
decoder_input_data[i, t + 1 :, target_token_index[" "]] = 1.0
decoder_target_data[i, t:, target_token_index[" "]] = 1.0
```
## Build the model
```
# Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
```
## Train the model
```
model.compile(
optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.fit(
[encoder_input_data, decoder_input_data],
decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2,
)
# Save model
model.save("s2s")
```
## Run inference (sampling)
1. encode input and retrieve initial decoder state
2. run one step of decoder with this initial state
and a "start of sequence" token as target.
Output will be the next target token.
3. Repeat with the current target token and current states
```
# Define sampling models
# Restore the model and construct the encoder and decoder.
model = keras.models.load_model("s2s")
encoder_inputs = model.input[0] # input_1
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output # lstm_1
encoder_states = [state_h_enc, state_c_enc]
encoder_model = keras.Model(encoder_inputs, encoder_states)
decoder_inputs = model.input[1] # input_2
decoder_state_input_h = keras.Input(shape=(latent_dim,))
decoder_state_input_c = keras.Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_lstm = model.layers[3]
decoder_outputs, state_h_dec, state_c_dec = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs
)
decoder_states = [state_h_dec, state_c_dec]
decoder_dense = model.layers[4]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = keras.Model(
[decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states
)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict((i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict((i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index["\t"]] = 1.0
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ""
while not stop_condition:
output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if sampled_char == "\n" or len(decoded_sentence) > max_decoder_seq_length:
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.0
# Update states
states_value = [h, c]
return decoded_sentence
```
You can now generate decoded sentences as follows:
```
for seq_index in range(20):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index : seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print("-")
print("Input sentence:", input_texts[seq_index])
print("Decoded sentence:", decoded_sentence)
```
**Note**: There are multiple ways to solve these problems in SQL. Your solution may be quite different from mine and still be correct.
**1**. Connect to the SQLite3 database at `data/faculty.db` in the `notebooks` folder using the `sqlite` package or `ipython-sql` magic functions. Inspect the `sql` creation statement for each table so you know their structure.
```
%load_ext sql
%sql sqlite:///../notebooks/data/faculty.db
%%sql
SELECT sql FROM sqlite_master WHERE type='table';
```
2. Find the youngest and oldest faculty member(s) of each gender.
```
%%sql
SELECT min(age), max(age) FROM person
%%sql
SELECT first, last, age, gender
FROM person
INNER JOIN gender
ON person.gender_id = gender.gender_id
WHERE age IN (SELECT min(age) FROM person) AND gender = 'Male'
UNION
SELECT first, last, age, gender
FROM person
INNER JOIN gender
ON person.gender_id = gender.gender_id
WHERE age IN (SELECT min(age) FROM person) AND gender = 'Female'
UNION
SELECT first, last, age, gender
FROM person
INNER JOIN gender
ON person.gender_id = gender.gender_id
WHERE age IN (SELECT max(age) FROM person) AND gender = 'Male'
UNION
SELECT first, last, age, gender
FROM person
INNER JOIN gender
ON person.gender_id = gender.gender_id
WHERE age IN (SELECT max(age) FROM person) AND gender = 'Female'
LIMIT 10
```
3. Find the median age of the faculty members who know Python.
As SQLite3 does not provide a median function, you can create a User Defined Function (UDF) to do this. See [documentation](https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.create_function).
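Note that `create_function` registers a *scalar* UDF (one value in, one value out); computing a median requires seeing every row, which is why the solution below registers an aggregate class with `create_aggregate` instead. For reference, registering a scalar UDF looks like this (a toy example, not needed for the median):
```
import sqlite3

con = sqlite3.connect(':memory:')
# A scalar UDF is applied row by row, so it cannot compute a median on its own
con.create_function("square", 1, lambda x: x * x)
print(con.execute("SELECT square(7)").fetchone())  # (49,)
```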
```
import statistics
class Median:
def __init__(self):
self.acc = []
def step(self, value):
self.acc.append(value)
def finalize(self):
return statistics.median(self.acc)
import sqlite3
con = sqlite3.connect('../notebooks/data/faculty.db')
con.create_aggregate("Median", 1, Median)
cr = con.cursor()
cr.execute('''SELECT median(age) FROM person
              INNER JOIN person_language ON person.person_id = person_language.person_id
              INNER JOIN language ON person_language.language_id = language.language_id
              WHERE language_name = 'Python' ''')
cr.fetchall()
```
4. Arrange countries by the average age of faculty in descending order. Countries are only included in the table if there are at least 3 faculty members from that country.
```
%%sql
SELECT country, count(country), avg(age)
FROM person
INNER JOIN country
ON person.country_id = country.country_id
GROUP BY country
HAVING count(*) >= 3
ORDER BY avg(age) DESC
LIMIT 3
```
5. Which country has the highest average body mass index (BMI) among the faculty? Recall that BMI is weight (kg) / (height (m))^2.
```
%%sql
SELECT country, avg(weight / (height*height)) as avg_bmi
FROM person
INNER JOIN country
ON person.country_id = country.country_id
GROUP BY country
ORDER BY avg_bmi DESC
LIMIT 3
```
6. Do obese faculty (BMI > 30) know more languages on average than non-obese faculty?
```
%%sql
SELECT is_obese, avg(language)
FROM (
SELECT
weight / (height*height) > 30 AS is_obese,
count(language_name) AS language
FROM person
INNER JOIN person_language
ON person.person_id = person_language.person_id
INNER JOIN language
ON person_language.language_id = language.language_id
GROUP BY person.person_id
)
GROUP BY is_obese
```
## Probabilistic Confirmed COVID-19 Cases - Denmark
### Table of contents
[Initialization](#Initialization)
[Data Importing and Processing](#Data-Importing-and-Processing)
1. [Kalman Filter Modeling: Case of Denmark Data](#1.-Kalman-Filter-Modeling:-Case-of-Denmark-Data)
1.1. [Model with the vector c fixed as [0, 1]](#1.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
1.2. [Model with the vector c as a random variable with prior](#1.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
1.3. [Model without input (2 hidden variables)](#1.3.-Kalman-Filter-without-Input)
2. [Kalman Filter Modeling: Case of Norway Data](#2.-Kalman-Filter-Modeling:-Case-of-Norway-Data)
2.1. [Model with the vector c fixed as [0, 1]](#2.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
2.2. [Model with the vector c as a random variable with prior](#2.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
2.3. [Model without input (2 hidden variables)](#2.3.-Kalman-Filter-without-Input)
3. [Kalman Filter Modeling: Case of Sweden Data](#3.-Kalman-Filter-Modeling:-Case-of-Sweden-Data)
3.1. [Model with the vector c fixed as [0, 1]](#3.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
3.2. [Model with the vector c as a random variable with prior](#3.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
3.3. [Model without input (2 hidden variables)](#3.3.-Kalman-Filter-without-Input)
## Initialization
```
from os.path import join, pardir
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import numpyro.distributions as dist
import pandas as pd
import seaborn as sns
from jax import lax, random, vmap
from jax.scipy.special import logsumexp
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
from sklearn.preprocessing import StandardScaler
np.random.seed(2103)
ROOT = pardir
DATA = join(ROOT, "data", "raw")
# random seed
np.random.seed(42)
#plot style
plt.style.use('ggplot')
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 10)
```
## Data Importing and Processing
The data in this case are the confirmed COVID-19 cases and the mobility data (from Google) for three specific countries: Denmark, Sweden and Norway.
```
adress = join(ROOT, "data", "processed")
data = pd.read_csv(join(adress, 'data_three_mob_cov.csv'),parse_dates=['Date'])
data.info()
data.head(5)
```
Handy functions to split the data, train the models and plot the results.
```
def split_forecast(df, n_train=65):
"""Split dataframe `df` as training, test and input mobility data."""
# just take the first 4 mobility features
X = df.iloc[:, 3:7].values.astype(np.float_)
# confirmed cases
y = df.iloc[:,2].values.astype(np.float_)
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, len(y))]
y_train = y[:n_train]
y_test = y[n_train:]
return X, y_train, y_test
def train_kf(model, data, n_train, n_test, num_samples=9000, num_warmup=3000, **kwargs):
"""Train a Kalman Filter model."""
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
nuts_kernel = NUTS(model=model)
# burn-in is still too much in comparison with the samples
mcmc = MCMC(
nuts_kernel, num_samples=num_samples, num_warmup=num_warmup, num_chains=1
)
mcmc.run(rng_key_, T=n_train, T_forecast=n_test, obs=data, **kwargs)
return mcmc
def get_samples(mcmc):
"""Get samples from variables in MCMC."""
return {k: v for k, v in mcmc.get_samples().items()}
def plot_samples(hmc_samples, nodes, dist=True):
"""Plot samples from the variables in `nodes`."""
for node in nodes:
if len(hmc_samples[node].shape) > 1:
n_vars = hmc_samples[node].shape[1]
for i in range(n_vars):
plt.figure(figsize=(4, 3))
if dist:
sns.distplot(hmc_samples[node][:, i], label=node + "%d" % i)
else:
plt.plot(hmc_samples[node][:, i], label=node + "%d" % i)
plt.legend()
plt.show()
else:
plt.figure(figsize=(4, 3))
if dist:
sns.distplot(hmc_samples[node], label=node)
else:
plt.plot(hmc_samples[node], label=node)
plt.legend()
plt.show()
def plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test):
"""Plot the results of forecasting (dimension are different)."""
y_hat = hmc_samples["y_pred"].mean(axis=0)
y_std = hmc_samples["y_pred"].std(axis=0)
y_pred_025 = y_hat - 1.96 * y_std
y_pred_975 = y_hat + 1.96 * y_std
plt.plot(idx_train, y_train, "b-")
plt.plot(idx_test, y_test, "bx")
plt.plot(idx_test[:-1], y_hat, "r-")
plt.plot(idx_test[:-1], y_pred_025, "r--")
plt.plot(idx_test[:-1], y_pred_975, "r--")
plt.fill_between(idx_test[:-1], y_pred_025, y_pred_975, alpha=0.3)
plt.legend(
[
"true (train)",
"true (test)",
"forecast",
"forecast + stddev",
"forecast - stddev",
]
)
plt.show()
n_train = 65 # number of points to train
n_test = 20 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
```
## 1. Kalman Filter Modeling: Case of Denmark Data
```
data_dk=data[data['Country'] == "Denmark"]
data_dk.head(5)
print("The length of the full dataset for Denmark is:" + " " )
print(len(data_dk))
```
Prepare input of the models (we are using numpyro so the inputs are numpy arrays).
```
X, y_train, y_test = split_forecast(data_dk)
```
### 1.1. Kalman Filter Model vector c fixed as [0, 1]
First model: the observation vector $c$ is not sampled but fixed, so the observed series simply reads off the second hidden state (equivalent to $c = [0, 1]$).
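Written out, the state-space model implied by the code below (my reading of `f` and `model_wo_c`, not notation taken from the original notebook) is roughly:

$$
z_t = \beta \odot z_{t-1} + W x_t + \epsilon_t, \qquad
\epsilon_t \sim \mathcal{N}(0, \Sigma), \qquad
\Sigma = \operatorname{diag}(\sqrt{\tau})\, L_\Omega L_\Omega^\top \operatorname{diag}(\sqrt{\tau}),
$$

$$
y_t \sim \mathcal{N}(c^\top z_t,\ \sigma), \qquad c = [0, 1]^\top \ \text{(fixed)},
$$

where $z_t \in \mathbb{R}^2$ are the hidden states, $x_t \in \mathbb{R}^4$ are the mobility inputs, $\odot$ is elementwise multiplication, and $L_\Omega$ is the LKJ-distributed Cholesky factor of the correlation matrix.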
```
def f(carry, input_t):
x_t, noise_t = input_t
W, beta, z_prev, tau = carry
z_t = beta * z_prev + W @ x_t + noise_t
z_prev = z_t
return (W, beta, z_prev, tau), z_t
def model_wo_c(T, T_forecast, x, obs=None):
"""Define KF with inputs and fixed sampling dist."""
# Define priors over beta, tau, sigma, z_1
W = numpyro.sample(
name="W", fn=dist.Normal(loc=jnp.zeros((2, 4)), scale=jnp.ones((2, 4)))
)
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.ones(2)))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=0.1))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (W, beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f, carry, (x, noises), T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
obs_mean = z_collection[:T, 1]
pred_mean = z_collection[T:, 1]
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc = train_kf(model_wo_c, y_train, n_train, n_test, x=X[2:])
```
Plots of the distribution of the samples for each variable.
```
hmc_samples = get_samples(mcmc)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
```
Forecast results: all the data points in the test set fall within the confidence interval.
```
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 1.2. Kalman Filter with the vector c as a random variable with prior
Second model: the observation vector $c$ is itself a random variable with a Normal prior, sampled jointly with the other parameters.
```
def model_w_c(T, T_forecast, x, obs=None):
# Define priors over beta, tau, sigma, z_1 (keep the shapes in mind)
W = numpyro.sample(
name="W", fn=dist.Normal(loc=jnp.zeros((2, 4)), scale=jnp.ones((2, 4)))
)
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.array([0.0, 0.0]), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.array([2,2])))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=1))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (W, beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f, carry, (x, noises), T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
c = numpyro.sample(
name="c", fn=dist.Normal(loc=jnp.array([[0.0], [0.0]]), scale=jnp.ones((2, 1)))
)
obs_mean = jnp.dot(z_collection[:T, :], c).squeeze()
pred_mean = jnp.dot(z_collection[T:, :], c).squeeze()
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc2 = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 1.3. Kalman Filter without Input
Third model: no input mobility data, **two** hidden states.
```
def f_s(carry, noise_t):
"""Propagate forward the time series."""
beta, z_prev, tau = carry
z_t = beta * z_prev + noise_t
z_prev = z_t
return (beta, z_prev, tau), z_t
def twoh_c_kf(T, T_forecast, obs=None):
"""Define Kalman Filter with two hidden variates."""
# Define priors over beta, tau, sigma, z_1
# W = numpyro.sample(name="W", fn=dist.Normal(loc=jnp.zeros((2,4)), scale=jnp.ones((2,4))))
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.array([0.0, 0.0]), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.array([10,10])))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=5))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f_s, carry, noises, T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
c = numpyro.sample(
name="c", fn=dist.Normal(loc=jnp.array([[0.0], [0.0]]), scale=jnp.ones((2, 1)))
)
obs_mean = jnp.dot(z_collection[:T, :], c).squeeze()
pred_mean = jnp.dot(z_collection[T:, :], c).squeeze()
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc3 = train_kf(twoh_c_kf, y_train, n_train, n_test, num_samples=12000, num_warmup=5000)
hmc_samples = get_samples(mcmc3)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
## 2. Kalman Filter Modeling: Case of Norway Data
```
data_no=data[data['Country'] == "Norway"]
data_no.head(5)
print("The length of the full dataset for Norway is:" + " " )
print(len(data_no))
n_train = 66 # number of points to train
n_test = 20 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
X, y_train, y_test = split_forecast(data_no, n_train)
```
### 2.1. Kalman Filter Model vector c fixed as [0, 1]
```
mcmc_no = train_kf(model_wo_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 2.2. Kalman Filter with the vector c as a random variable with prior
```
mcmc2_no = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 2.3. Kalman Filter without Input
```
mcmc3_no = train_kf(twoh_c_kf, y_train, n_train, n_test)
hmc_samples = get_samples(mcmc3_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
## 3. Kalman Filter Modeling: Case of Sweden Data
```
data_sw=data[data['Country'] == "Sweden"]
data_sw.head(5)
print("The length of the full dataset for Sweden is:" + " " )
print(len(data_sw))
n_train = 75 # number of points to train
n_test = 22 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
X, y_train, y_test = split_forecast(data_sw, n_train)
```
### 3.1. Kalman Filter Model vector c fixed as [0, 1]
```
mcmc_sw = train_kf(model_wo_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 3.2. Kalman Filter with the vector c as a random variable with prior
```
mcmc2_sw = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 3.3. Kalman Filter without Input
```
mcmc3_sw = train_kf(twoh_c_kf, y_train, n_train, n_test)
hmc_samples = get_samples(mcmc3_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
Save results to rerun the plotting functions.
```
import pickle
MODELS = join(ROOT, "models")
for i, mc in enumerate([mcmc3_no, mcmc_sw, mcmc2_sw, mcmc3_sw]):
with open(join(MODELS, f"hmc_ok_{i}.pickle"), "wb") as f:
pickle.dump(get_samples(mc),f)
```
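To reload one of the saved sample dictionaries later (for example, to re-run `plot_forecast` without refitting), a matching load, reusing `MODELS`, `join` and `pickle` from the cell above, might look like:
```
with open(join(MODELS, "hmc_ok_0.pickle"), "rb") as f:
    hmc_samples = pickle.load(f)
```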
## Gaussian Process
# Advanced RNNs
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
In this notebook we're going to cover some advanced topics related to RNNs.
1. Conditioned hidden state
2. Char-level embeddings
3. Encoder and decoder
4. Attentional mechanisms
5. Implementation
# Set up
```
# Load PyTorch library
!pip3 install torch
import os
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
# Set Numpy and PyTorch seeds
def set_seeds(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
# Creating directories
def create_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
# Arguments
args = Namespace(
seed=1234,
cuda=True,
batch_size=4,
condition_vocab_size=3, # vocabulary for condition possibilities
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
```
# Conditioned RNNs
Conditioning an RNN means adding extra information that is helpful for the prediction. We can encode (embed) this information and feed it, along with the sequential input, into our model. For example, suppose that in our document classification example from the previous notebook we knew the publisher of each news article (NYTimes, ESPN, etc.). We could have encoded that information to help with the prediction. There are several different ways of creating a conditioned RNN.
**Note**: If the conditioning information is different for each time step of the sequence, just concatenate it with that time step's input, as in the sketch below.
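A minimal sketch of that per-time-step concatenation (made-up shapes; `x_t` is the input at time t and `cond_t` its encoded condition):
```
import torch

x_t = torch.randn(4, 10)     # (batch_size, input_dim)
cond_t = torch.randn(4, 3)   # (batch_size, condition_dim)
# Concatenate along the feature dimension before feeding the RNN cell
x_t = torch.cat([x_t, cond_t], dim=1)  # (batch_size, input_dim + condition_dim)
```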
1. Make the initial hidden state the encoded information instead of using the usual zeroed hidden state. Make sure that the size of the encoded information is the same as the RNN's hidden state.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn1.png" width=400>
```
import torch.nn as nn
import torch.nn.functional as F
# Condition
condition = torch.LongTensor([0, 2, 1, 2]) # batch size of 4 with a vocab size of 3
condition_embeddings = nn.Embedding(
embedding_dim=args.embedding_dim, # should be same as RNN hidden dim
num_embeddings=args.condition_vocab_size) # of unique conditions
# Initialize hidden state
num_directions = 1
if args.bidirectional:
num_directions = 2
# If using multiple layers and directions, the hidden state needs to match that size
hidden_t = condition_embeddings(condition).unsqueeze(0).repeat(
args.num_layers * num_directions, 1, 1).to(args.device) # initial state to RNN
print (hidden_t.size())
# Feed into RNN
# y_out, _ = self.rnn(x_embedded, hidden_t)
```
2. Concatenate the encoded information with the hidden state at each time step. Do not replace the hidden state because the RNN needs that to learn.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn2.png" width=400>
```
# Initialize hidden state
hidden_t = torch.zeros((args.num_layers * num_directions, args.batch_size, args.rnn_hidden_dim))
print (hidden_t.size())
def concat_condition(condition_embeddings, condition, hidden_t, num_layers, num_directions):
condition_t = condition_embeddings(condition).unsqueeze(0).repeat(
num_layers * num_directions, 1, 1)
hidden_t = torch.cat([hidden_t, condition_t], 2)
return hidden_t
# Loop through the inputs time steps
hiddens = []
seq_size = 1
for t in range(seq_size):
hidden_t = concat_condition(condition_embeddings, condition, hidden_t,
args.num_layers, num_directions).to(args.device)
print (hidden_t.size())
# Feed into RNN
# hidden_t = rnn_cell(x_in[t], hidden_t)
...
```
# Char-level embeddings
Our conv operations take as input words in a sentence represented at the character level, $\in \mathbb{R}^{N \times S \times W \times E}$, and output an embedding for each word (based on convolutions applied at the character level).
**Word embeddings**: capture the temporal correlations among
adjacent tokens so that similar words have similar representations. Ex. "New Jersey" is close to "NJ" is close to "Garden State", etc.
**Char embeddings**: create representations that map words at a character level. Ex. "toy" and "toys" will be close to each other.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/char_embeddings.png" width=450>
```
# Arguments
args = Namespace(
seed=1234,
cuda=False,
shuffle=True,
batch_size=64,
vocab_size=20, # vocabulary
seq_size=10, # max length of each sentence
word_size=15, # max length of each word
embedding_dim=100,
num_filters=100, # filters per size
)
class Model(nn.Module):
def __init__(self, embedding_dim, num_embeddings, num_input_channels,
num_output_channels, padding_idx):
super(Model, self).__init__()
# Char-level embedding
self.embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
padding_idx=padding_idx)
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels,
kernel_size=f) for f in [2,3,4]])
def forward(self, x, channel_first=False, apply_softmax=False):
# x: (N, seq_len, word_len)
input_shape = x.size()
batch_size, seq_len, word_len = input_shape
x = x.view(-1, word_len) # (N*seq_len, word_len)
# Embedding
x = self.embeddings(x) # (N*seq_len, word_len, embedding_dim)
# Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
if not channel_first:
x = x.transpose(1, 2)
# Convolution
z = [F.relu(conv(x)) for conv in self.conv]
# Pooling
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)
# Concat to get char-level embeddings
z = torch.cat(z, 2) # join conv outputs
return z
# Input
input_size = (args.batch_size, args.seq_size, args.word_size)
x_in = torch.randint(low=0, high=args.vocab_size, size=input_size).long()
print (x_in.size())
# Initial char-level embedding model
model = Model(embedding_dim=args.embedding_dim,
num_embeddings=args.vocab_size,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
padding_idx=0)
print (model.named_modules)
# Forward pass to get char-level embeddings
z = model(x_in)
print (z.size())
```
There are several different ways you can use these char-level embeddings:
1. Concat the char-level embeddings with word-level embeddings, since we now have an embedding for each word (built at the char level), and feed the result into an RNN (see the sketch after this list).
2. You can feed the char-level embeddings into an RNN to processes them.
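A minimal sketch of option 1, reusing `z` (the char-level embeddings) and `args` from the cell above and assuming a hypothetical word vocabulary of 1000 tokens:
```
# Hypothetical word-level embeddings for the same (N, seq_len) batch
word_vocab_size = 1000  # assumption, not taken from the notebook
word_embeddings = nn.Embedding(num_embeddings=word_vocab_size,
                               embedding_dim=args.embedding_dim)
x_words = torch.randint(low=0, high=word_vocab_size,
                        size=(args.batch_size, args.seq_size)).long()
z_word = word_embeddings(x_words)      # (N, seq_len, embedding_dim)

# Concatenate word-level and char-level embeddings along the feature dimension
z_cat = torch.cat([z_word, z], dim=2)  # (N, seq_len, embedding_dim + 3*num_filters)
print (z_cat.size())
# z_cat could now be fed into an RNN, e.g. nn.GRU(input_size=z_cat.size(2), ...)
```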
# Encoder and decoder
So far we've used RNNs to `encode` a sequential input and generate hidden states, and we use these hidden states to `decode` the predictions. Up to now, the encoder was an RNN and the decoder was just a few fully connected layers followed by a softmax layer (for classification). But the encoder and decoder can take other architectures as well. For example, the decoder could be an RNN that processes the hidden state outputs of the encoder RNN.
```
# Arguments
args = Namespace(
batch_size=64,
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
dropout=0.1,
)
class Encoder(nn.Module):
def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
num_layers, bidirectional, padding_idx=0):
super(Encoder, self).__init__()
# Embeddings
self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
padding_idx=padding_idx)
# GRU weights
self.gru = nn.GRU(input_size=embedding_dim, hidden_size=rnn_hidden_dim,
num_layers=num_layers, batch_first=True,
bidirectional=bidirectional)
def forward(self, x_in, x_lengths):
# Word level embeddings
z_word = self.word_embeddings(x_in)
# Feed into RNN
        out, h_n = self.gru(z_word)
# Gather the last relevant hidden state
out = gather_last_relevant_hidden(out, x_lengths)
return out
class Decoder(nn.Module):
def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
super(Decoder, self).__init__()
# FC weights
self.dropout = nn.Dropout(dropout_p)
self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, encoder_output, apply_softmax=False):
# FC layers
z = self.dropout(encoder_output)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
class Model(nn.Module):
def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
hidden_dim, num_layers, bidirectional, output_dim, dropout_p,
padding_idx=0):
super(Model, self).__init__()
self.encoder = Encoder(embedding_dim, num_embeddings, rnn_hidden_dim,
num_layers, bidirectional, padding_idx=0)
self.decoder = Decoder(rnn_hidden_dim, hidden_dim, output_dim, dropout_p)
def forward(self, x_in, x_lengths, apply_softmax=False):
encoder_outputs = self.encoder(x_in, x_lengths)
y_pred = self.decoder(encoder_outputs, apply_softmax)
return y_pred
model = Model(embedding_dim=args.embedding_dim, num_embeddings=1000,
rnn_hidden_dim=args.rnn_hidden_dim, hidden_dim=args.hidden_dim,
num_layers=args.num_layers, bidirectional=args.bidirectional,
output_dim=4, dropout_p=args.dropout, padding_idx=0)
print (model.named_parameters)
```
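The `Encoder` above calls a `gather_last_relevant_hidden` helper that is not defined anywhere in this notebook; a minimal sketch, assuming batch-first GRU outputs of shape `(N, seq_len, rnn_hidden_dim)` and `x_lengths` holding the true (unpadded) sequence lengths:
```
def gather_last_relevant_hidden(hiddens, x_lengths):
    """Pick, for every sequence in the batch, the hidden state at its last real time step."""
    x_lengths = (x_lengths - 1).long()  # convert lengths to 0-based indices
    out = []
    for batch_index, column_index in enumerate(x_lengths):
        out.append(hiddens[batch_index, column_index])
    return torch.stack(out)  # (N, rnn_hidden_dim)
```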
# Attentional mechanisms
When processing an input sequence with an RNN, recall that at each time step we process the input and the hidden state at that time step. For many use cases, it's advantageous to have access to the inputs at all time steps and pay selective attention to them at each time step. For example, in machine translation, it helps to have access to all the words when translating to another language because translations aren't necessarily word for word.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention1.jpg" width=650>
Attention can sound a bit confusing, so let's see what happens at each time step. At time step j, the model has processed inputs $x_0, x_1, x_2, ..., x_j$ and has generated hidden states $h_0, h_1, h_2, ..., h_j$. The idea is to use all the processed hidden states to make the prediction and not just the most recent one. There are several approaches to how we can do this.
With **soft attention**, we learn a vector of floating-point weights (probabilities) that we multiply with the hidden states to create the context vector.
Ex. [0.1, 0.3, 0.1, 0.4, 0.1]
With **hard attention**, we can learn a binary vector to multiply with the hidden states to create the context vector.
Ex. [0, 0, 0, 1, 0]
We're going to focus on soft attention because it's more widely used and we can visualize how much each hidden state helps with the prediction, which is great for interpretability.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention2.jpg" width=650>
We're going to implement attention in the document classification task below.
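Concretely, a minimal sketch of the soft-attention weighted sum described above (toy tensors, not the model built later in this notebook):
```
import torch
import torch.nn.functional as F

hiddens = torch.randn(4, 5, 8)          # (N, T, hidden_dim): all hidden states h_0 ... h_j
scores = torch.randn(4, 5)              # unnormalized attention scores, one per time step
attn_scores = F.softmax(scores, dim=1)  # soft attention: probabilities that sum to 1
context = torch.bmm(attn_scores.unsqueeze(1), hiddens).squeeze(1)  # (N, hidden_dim) weighted sum
```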
# Document classification with RNNs
We're going to implement the same document classification task as in the previous notebook but we're going to use an attentional interface for interpretability.
**Why not machine translation?** Normally, machine translation is the go-to example for demonstrating attention, but it's not really practical. How many situations can you think of that require a sequence to generate another sequence? Instead, we're going to apply attention to our document classification example to see which input tokens are more influential towards predicting the genre.
## Set up
```
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
def set_seeds(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
# Creating directories
def create_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
args = Namespace(
seed=1234,
cuda=True,
shuffle=True,
data_file="news.csv",
split_data_file="split_news.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="news",
train_size=0.7,
val_size=0.15,
test_size=0.15,
pretrained_embeddings=None,
cutoff=25,
num_epochs=5,
early_stopping_criteria=5,
learning_rate=1e-3,
batch_size=128,
embedding_dim=100,
kernels=[3,5],
num_filters=100,
rnn_hidden_dim=128,
hidden_dim=200,
num_layers=1,
bidirectional=False,
dropout_p=0.25,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)
# Create save dir
create_dirs(args.save_dir)
# Expand filepaths
args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir, args.model_state_file)
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
```
## Data
```
import urllib
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/news.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as fp:
fp.write(html)
df = pd.read_csv(args.data_file, header=0)
df.head()
by_category = collections.defaultdict(list)
for _, row in df.iterrows():
by_category[row.category].append(row.to_dict())
for category in by_category:
print ("{0}: {1}".format(category, len(by_category[category])))
final_list = []
for _, item_list in sorted(by_category.items()):
if args.shuffle:
np.random.shuffle(item_list)
n = len(item_list)
n_train = int(args.train_size*n)
n_val = int(args.val_size*n)
n_test = int(args.test_size*n)
# Give data point a split attribute
for item in item_list[:n_train]:
item['split'] = 'train'
for item in item_list[n_train:n_train+n_val]:
item['split'] = 'val'
for item in item_list[n_train+n_val:]:
item['split'] = 'test'
# Add to final list
final_list.extend(item_list)
split_df = pd.DataFrame(final_list)
split_df["split"].value_counts()
def preprocess_text(text):
text = ' '.join(word.lower() for word in text.split(" "))
text = re.sub(r"([.,!?])", r" \1 ", text)
text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
text = text.strip()
return text
split_df.title = split_df.title.apply(preprocess_text)
split_df.to_csv(args.split_data_file, index=False)
split_df.head()
```
## Vocabulary
```
class Vocabulary(object):
def __init__(self, token_to_idx=None):
# Token to index
if token_to_idx is None:
token_to_idx = {}
self.token_to_idx = token_to_idx
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
def to_serializable(self):
return {'token_to_idx': self.token_to_idx}
@classmethod
def from_serializable(cls, contents):
return cls(**contents)
def add_token(self, token):
if token in self.token_to_idx:
index = self.token_to_idx[token]
else:
index = len(self.token_to_idx)
self.token_to_idx[token] = index
self.idx_to_token[index] = token
return index
def add_tokens(self, tokens):
return [self.add_token[token] for token in tokens]
def lookup_token(self, token):
return self.token_to_idx[token]
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self.token_to_idx)
# Vocabulary instance
category_vocab = Vocabulary()
for index, row in df.iterrows():
category_vocab.add_token(row.category)
print (category_vocab) # __str__
print (len(category_vocab)) # __len__
index = category_vocab.lookup_token("Business")
print (index)
print (category_vocab.lookup_index(index))
```
## Sequence vocabulary
Next, we're going to create our Vocabulary classes for the article's title, which is a sequence of words.
```
from collections import Counter
import string
class SequenceVocabulary(Vocabulary):
def __init__(self, token_to_idx=None, unk_token="<UNK>",
mask_token="<MASK>", begin_seq_token="<BEGIN>",
end_seq_token="<END>"):
super(SequenceVocabulary, self).__init__(token_to_idx)
self.mask_token = mask_token
self.unk_token = unk_token
self.begin_seq_token = begin_seq_token
self.end_seq_token = end_seq_token
self.mask_index = self.add_token(self.mask_token)
self.unk_index = self.add_token(self.unk_token)
self.begin_seq_index = self.add_token(self.begin_seq_token)
self.end_seq_index = self.add_token(self.end_seq_token)
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
def to_serializable(self):
contents = super(SequenceVocabulary, self).to_serializable()
contents.update({'unk_token': self.unk_token,
'mask_token': self.mask_token,
'begin_seq_token': self.begin_seq_token,
'end_seq_token': self.end_seq_token})
return contents
def lookup_token(self, token):
return self.token_to_idx.get(token, self.unk_index)
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the SequenceVocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<SequenceVocabulary(size=%d)>" % len(self.token_to_idx)
def __len__(self):
return len(self.token_to_idx)
# Get word counts
word_counts = Counter()
for title in split_df.title:
for token in title.split(" "):
if token not in string.punctuation:
word_counts[token] += 1
# Create SequenceVocabulary instance
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
if word_count >= args.cutoff:
title_word_vocab.add_token(word)
print (title_word_vocab) # __str__
print (len(title_word_vocab)) # __len__
index = title_word_vocab.lookup_token("general")
print (index)
print (title_word_vocab.lookup_index(index))
```
We're also going to create an instance of SequenceVocabulary that processes the input on a character level.
```
# Create SequenceVocabulary instance
title_char_vocab = SequenceVocabulary()
for title in split_df.title:
for token in title:
title_char_vocab.add_token(token)
print (title_char_vocab) # __str__
print (len(title_char_vocab)) # __len__
index = title_char_vocab.lookup_token("g")
print (index)
print (title_char_vocab.lookup_index(index))
```
## Vectorizer
Something new that we introduce in this Vectorizer is calculating the length of our input sequence. We will use this later on to extract the last relevant hidden state for each input sequence.
```
class NewsVectorizer(object):
def __init__(self, title_word_vocab, title_char_vocab, category_vocab):
self.title_word_vocab = title_word_vocab
self.title_char_vocab = title_char_vocab
self.category_vocab = category_vocab
def vectorize(self, title):
# Word-level vectorization
word_indices = [self.title_word_vocab.lookup_token(token) for token in title.split(" ")]
word_indices = [self.title_word_vocab.begin_seq_index] + word_indices + \
[self.title_word_vocab.end_seq_index]
title_length = len(word_indices)
word_vector = np.zeros(title_length, dtype=np.int64)
word_vector[:len(word_indices)] = word_indices
# Char-level vectorization
word_length = max([len(word) for word in title.split(" ")])
char_vector = np.zeros((len(word_vector), word_length), dtype=np.int64)
char_vector[0, :] = self.title_word_vocab.mask_index # <BEGIN>
char_vector[-1, :] = self.title_word_vocab.mask_index # <END>
for i, word in enumerate(title.split(" ")):
            char_vector[i+1, :len(word)] = [self.title_char_vocab.lookup_token(char) \
                                            for char in word] # i+1 b/c of <BEGIN> token
return word_vector, char_vector, len(word_indices)
def unvectorize_word_vector(self, word_vector):
tokens = [self.title_word_vocab.lookup_index(index) for index in word_vector]
title = " ".join(token for token in tokens)
return title
def unvectorize_char_vector(self, char_vector):
title = ""
for word_vector in char_vector:
for index in word_vector:
if index == self.title_char_vocab.mask_index:
break
title += self.title_char_vocab.lookup_index(index)
title += " "
return title
@classmethod
def from_dataframe(cls, df, cutoff):
# Create class vocab
category_vocab = Vocabulary()
for category in sorted(set(df.category)):
category_vocab.add_token(category)
# Get word counts
word_counts = Counter()
for title in df.title:
for token in title.split(" "):
word_counts[token] += 1
# Create title vocab (word level)
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
if word_count >= cutoff:
title_word_vocab.add_token(word)
# Create title vocab (char level)
title_char_vocab = SequenceVocabulary()
for title in df.title:
for token in title:
title_char_vocab.add_token(token)
return cls(title_word_vocab, title_char_vocab, category_vocab)
@classmethod
def from_serializable(cls, contents):
title_word_vocab = SequenceVocabulary.from_serializable(contents['title_word_vocab'])
title_char_vocab = SequenceVocabulary.from_serializable(contents['title_char_vocab'])
category_vocab = Vocabulary.from_serializable(contents['category_vocab'])
return cls(title_word_vocab=title_word_vocab,
title_char_vocab=title_char_vocab,
category_vocab=category_vocab)
def to_serializable(self):
return {'title_word_vocab': self.title_word_vocab.to_serializable(),
'title_char_vocab': self.title_char_vocab.to_serializable(),
'category_vocab': self.category_vocab.to_serializable()}
# Vectorizer instance
vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff)
print (vectorizer.title_word_vocab)
print (vectorizer.title_char_vocab)
print (vectorizer.category_vocab)
word_vector, char_vector, title_length = vectorizer.vectorize(preprocess_text(
"Roger Federer wins the Wimbledon tennis tournament."))
print ("word_vector:", np.shape(word_vector))
print ("char_vector:", np.shape(char_vector))
print ("title_length:", title_length)
print (word_vector)
print (char_vector)
print (vectorizer.unvectorize_word_vector(word_vector))
print (vectorizer.unvectorize_char_vector(char_vector))
```
## Dataset
```
from torch.utils.data import Dataset, DataLoader
class NewsDataset(Dataset):
def __init__(self, df, vectorizer):
self.df = df
self.vectorizer = vectorizer
# Data splits
self.train_df = self.df[self.df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.df[self.df.split=='val']
self.val_size = len(self.val_df)
self.test_df = self.df[self.df.split=='test']
self.test_size = len(self.test_df)
self.lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.val_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
# Class weights (for imbalances)
class_counts = df.category.value_counts().to_dict()
def sort_key(item):
return self.vectorizer.category_vocab.lookup_token(item[0])
sorted_counts = sorted(class_counts.items(), key=sort_key)
frequencies = [count for _, count in sorted_counts]
self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
@classmethod
def load_dataset_and_make_vectorizer(cls, split_data_file, cutoff):
df = pd.read_csv(split_data_file, header=0)
train_df = df[df.split=='train']
return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff))
@classmethod
def load_dataset_and_load_vectorizer(cls, split_data_file, vectorizer_filepath):
df = pd.read_csv(split_data_file, header=0)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(df, vectorizer)
def load_vectorizer_only(vectorizer_filepath):
with open(vectorizer_filepath) as fp:
return NewsVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
with open(vectorizer_filepath, "w") as fp:
json.dump(self.vectorizer.to_serializable(), fp)
def set_split(self, split="train"):
self.target_split = split
self.target_df, self.target_size = self.lookup_dict[split]
def __str__(self):
return "<Dataset(split={0}, size={1})".format(
self.target_split, self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.target_df.iloc[index]
title_word_vector, title_char_vector, title_length = \
self.vectorizer.vectorize(row.title)
category_index = self.vectorizer.category_vocab.lookup_token(row.category)
return {'title_word_vector': title_word_vector,
'title_char_vector': title_char_vector,
'title_length': title_length,
'category': category_index}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, collate_fn, shuffle=True,
drop_last=False, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
collate_fn=collate_fn, shuffle=shuffle,
drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# Dataset instance
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
print (dataset) # __str__
input_ = dataset[10] # __getitem__
print (input_['title_word_vector'])
print (input_['title_char_vector'])
print (input_['title_length'])
print (input_['category'])
print (dataset.vectorizer.unvectorize_word_vector(input_['title_word_vector']))
print (dataset.vectorizer.unvectorize_char_vector(input_['title_char_vector']))
print (dataset.class_weights)
```
## Model
embed → encoder → attend → predict
```
import torch.nn as nn
import torch.nn.functional as F
class NewsEncoder(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx=0, char_padding_idx=0):
super(NewsEncoder, self).__init__()
self.num_layers = num_layers
self.bidirectional = bidirectional
# Embeddings
self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_word_embeddings,
padding_idx=word_padding_idx)
self.char_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_char_embeddings,
padding_idx=char_padding_idx)
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels,
num_output_channels,
kernel_size=f) for f in kernels])
# GRU weights
self.gru = nn.GRU(input_size=embedding_dim*(len(kernels)+1),
hidden_size=rnn_hidden_dim, num_layers=num_layers,
batch_first=True, bidirectional=bidirectional)
def initialize_hidden_state(self, batch_size, rnn_hidden_dim, device):
"""Modify this to condition the RNN."""
num_directions = 1
if self.bidirectional:
num_directions = 2
        hidden_t = torch.zeros(self.num_layers * num_directions,
                               batch_size, rnn_hidden_dim).to(device)
        return hidden_t
def get_char_level_embeddings(self, x):
# x: (N, seq_len, word_len)
input_shape = x.size()
batch_size, seq_len, word_len = input_shape
x = x.view(-1, word_len) # (N*seq_len, word_len)
# Embedding
x = self.char_embeddings(x) # (N*seq_len, word_len, embedding_dim)
# Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
x = x.transpose(1, 2)
# Convolution
z = [F.relu(conv(x)) for conv in self.conv]
# Pooling
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)
# Concat to get char-level embeddings
z = torch.cat(z, 2) # join conv outputs
return z
def forward(self, x_word, x_char, x_lengths, device):
"""
x_word: word level representation (N, seq_size)
x_char: char level representation (N, seq_size, word_len)
"""
# Word level embeddings
z_word = self.word_embeddings(x_word)
# Char level embeddings
z_char = self.get_char_level_embeddings(x=x_char)
# Concatenate
z = torch.cat([z_word, z_char], 2)
# Feed into RNN
initial_h = self.initialize_hidden_state(
batch_size=z.size(0), rnn_hidden_dim=self.gru.hidden_size,
device=device)
out, h_n = self.gru(z, initial_h)
return out
class NewsDecoder(nn.Module):
def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
super(NewsDecoder, self).__init__()
# Attention FC layer
self.fc_attn = nn.Linear(rnn_hidden_dim, rnn_hidden_dim)
self.v = nn.Parameter(torch.rand(rnn_hidden_dim))
# FC weights
self.dropout = nn.Dropout(dropout_p)
self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, encoder_outputs, apply_softmax=False):
# Attention
z = torch.tanh(self.fc_attn(encoder_outputs))
z = z.transpose(2,1) # [B*H*T]
v = self.v.repeat(encoder_outputs.size(0),1).unsqueeze(1) #[B*1*H]
z = torch.bmm(v,z).squeeze(1) # [B*T]
attn_scores = F.softmax(z, dim=1)
context = torch.matmul(encoder_outputs.transpose(-2, -1),
attn_scores.unsqueeze(dim=2)).squeeze()
if len(context.size()) == 1:
context = context.unsqueeze(0)
# FC layers
z = self.dropout(context)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return attn_scores, y_pred
class NewsModel(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, hidden_dim, output_dim, num_layers,
bidirectional, dropout_p, word_padding_idx, char_padding_idx):
super(NewsModel, self).__init__()
self.encoder = NewsEncoder(embedding_dim, num_word_embeddings,
num_char_embeddings, kernels,
num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx, char_padding_idx)
self.decoder = NewsDecoder(rnn_hidden_dim, hidden_dim, output_dim,
dropout_p)
def forward(self, x_word, x_char, x_lengths, device, apply_softmax=False):
encoder_outputs = self.encoder(x_word, x_char, x_lengths, device)
y_pred = self.decoder(encoder_outputs, apply_softmax)
return y_pred
```
## Training
```
import torch.optim as optim
class Trainer(object):
def __init__(self, dataset, model, model_state_file, save_dir, device,
shuffle, num_epochs, batch_size, learning_rate,
early_stopping_criteria):
self.dataset = dataset
self.class_weights = dataset.class_weights.to(device)
self.device = device
self.model = model.to(device)
self.save_dir = save_dir
self.device = device
self.shuffle = shuffle
self.num_epochs = num_epochs
self.batch_size = batch_size
self.loss_func = nn.CrossEntropyLoss(self.class_weights)
self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer=self.optimizer, mode='min', factor=0.5, patience=1)
self.train_state = {
'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'early_stopping_criteria': early_stopping_criteria,
'learning_rate': learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': model_state_file}
def update_train_state(self):
# Verbose
print ("[EPOCH]: {0:02d} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format(
self.train_state['epoch_index'], self.train_state['learning_rate'],
self.train_state['train_loss'][-1], self.train_state['train_acc'][-1],
self.train_state['val_loss'][-1], self.train_state['val_acc'][-1]))
# Save one model at least
if self.train_state['epoch_index'] == 0:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
self.train_state['stop_early'] = False
# Save model if performance improved
elif self.train_state['epoch_index'] >= 1:
loss_tm1, loss_t = self.train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= self.train_state['early_stopping_best_val']:
# Update step
self.train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < self.train_state['early_stopping_best_val']:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
# Reset early stopping step
self.train_state['early_stopping_step'] = 0
# Stop early ?
self.train_state['stop_early'] = self.train_state['early_stopping_step'] \
>= self.train_state['early_stopping_criteria']
return self.train_state
def compute_accuracy(self, y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
def pad_word_seq(self, seq, length):
vector = np.zeros(length, dtype=np.int64)
vector[:len(seq)] = seq
vector[len(seq):] = self.dataset.vectorizer.title_word_vocab.mask_index
return vector
def pad_char_seq(self, seq, seq_length, word_length):
vector = np.zeros((seq_length, word_length), dtype=np.int64)
vector.fill(self.dataset.vectorizer.title_char_vocab.mask_index)
for i in range(len(seq)):
char_padding = np.zeros(word_length-len(seq[i]), dtype=np.int64)
vector[i] = np.concatenate((seq[i], char_padding), axis=None)
return vector
def collate_fn(self, batch):
# Make a deep copy
batch_copy = copy.deepcopy(batch)
processed_batch = {"title_word_vector": [], "title_char_vector": [],
"title_length": [], "category": []}
# Max lengths
get_seq_length = lambda sample: len(sample["title_word_vector"])
get_word_length = lambda sample: len(sample["title_char_vector"][0])
max_seq_length = max(map(get_seq_length, batch))
max_word_length = max(map(get_word_length, batch))
# Pad
for i, sample in enumerate(batch_copy):
padded_word_seq = self.pad_word_seq(
sample["title_word_vector"], max_seq_length)
padded_char_seq = self.pad_char_seq(
sample["title_char_vector"], max_seq_length, max_word_length)
processed_batch["title_word_vector"].append(padded_word_seq)
processed_batch["title_char_vector"].append(padded_char_seq)
processed_batch["title_length"].append(sample["title_length"])
processed_batch["category"].append(sample["category"])
# Convert to appropriate tensor types
processed_batch["title_word_vector"] = torch.LongTensor(
processed_batch["title_word_vector"])
processed_batch["title_char_vector"] = torch.LongTensor(
processed_batch["title_char_vector"])
processed_batch["title_length"] = torch.LongTensor(
processed_batch["title_length"])
processed_batch["category"] = torch.LongTensor(
processed_batch["category"])
return processed_batch
def run_train_loop(self):
for epoch_index in range(self.num_epochs):
self.train_state['epoch_index'] = epoch_index
# Iterate over train dataset
# initialize batch generator, set loss and acc to 0, set train mode on
self.dataset.set_split('train')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# zero the gradients
self.optimizer.zero_grad()
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute gradients using loss
loss.backward()
# use optimizer to take a gradient step
self.optimizer.step()
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['train_loss'].append(running_loss)
self.train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('val')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.
running_acc = 0.
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.to("cpu").item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['val_loss'].append(running_loss)
self.train_state['val_acc'].append(running_acc)
self.train_state = self.update_train_state()
self.scheduler.step(self.train_state['val_loss'][-1])
if self.train_state['stop_early']:
break
def run_test_loop(self):
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('test')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['test_loss'] = running_loss
self.train_state['test_acc'] = running_acc
def plot_performance(self):
# Figure size
plt.figure(figsize=(15,5))
# Plot Loss
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(trainer.train_state["train_loss"], label="train")
plt.plot(trainer.train_state["val_loss"], label="val")
plt.legend(loc='upper right')
# Plot Accuracy
plt.subplot(1, 2, 2)
plt.title("Accuracy")
plt.plot(trainer.train_state["train_acc"], label="train")
plt.plot(trainer.train_state["val_acc"], label="val")
plt.legend(loc='lower right')
# Save figure
plt.savefig(os.path.join(self.save_dir, "performance.png"))
# Show plots
plt.show()
def save_train_state(self):
with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp:
json.dump(self.train_state, fp)
# Initialization
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
print (model.named_modules)
# Train
trainer = Trainer(dataset=dataset, model=model,
model_state_file=args.model_state_file,
save_dir=args.save_dir, device=args.device,
shuffle=args.shuffle, num_epochs=args.num_epochs,
batch_size=args.batch_size, learning_rate=args.learning_rate,
early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()
# Plot performance
trainer.plot_performance()
# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))
# Save all results
trainer.save_train_state()
```
## Inference
```
class Inference(object):
def __init__(self, model, vectorizer):
self.model = model
self.vectorizer = vectorizer
def predict_category(self, title):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
attn_scores, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top category
y_prob, indices = y_pred.max(dim=1)
index = indices.item()
# Predicted category
category = self.vectorizer.category_vocab.lookup_index(index)
probability = y_prob.item()
return {'category': category, 'probability': probability,
'attn_scores': attn_scores}
def predict_top_k(self, title, k):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
_, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top k categories
y_prob, indices = torch.topk(y_pred, k=k)
probabilities = y_prob.detach().numpy()[0]
indices = indices.detach().numpy()[0]
# Results
results = []
for probability, index in zip(probabilities, indices):
category = self.vectorizer.category_vocab.lookup_index(index)
results.append({'category': category, 'probability': probability})
return results
# Load the model
dataset = NewsDataset.load_dataset_and_load_vectorizer(
args.split_data_file, args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
model.load_state_dict(torch.load(args.model_state_file))
model = model.to("cpu")
print (model.named_modules)
# Inference
inference = Inference(model=model, vectorizer=vectorizer)
title = input("Enter a title to classify: ")
prediction = inference.predict_category(preprocess_text(title))
print("{} → {} (p={:0.2f})".format(title, prediction['category'],
prediction['probability']))
# Top-k inference
top_k = inference.predict_top_k(preprocess_text(title), k=len(vectorizer.category_vocab))
print ("{}: ".format(title))
for result in top_k:
print ("{} (p={:0.2f})".format(result['category'],
result['probability']))
```
# Interpretability
We can inspect the probability vector that is generated at each time step to visualize the importance of each of the previous hidden states towards a particular time step's prediction.
```
import seaborn as sns
import matplotlib.pyplot as plt
attn_matrix = prediction['attn_scores'].detach().numpy()
ax = sns.heatmap(attn_matrix, linewidths=2, square=True)
tokens = ["<BEGIN>"]+preprocess_text(title).split(" ")+["<END>"]
ax.set_xticklabels(tokens, rotation=45)
ax.set_xlabel("Token")
ax.set_ylabel("Importance\n")
plt.show()
```
# TODO
- attn visualization isn't always great
- bleu score
- ngram-overlap
- perplexity
- beamsearch
- hierarchical softmax
- hierarchical attention
- Transformer networks
- attention interpretability is hit/miss
# Entities Recognition
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/entities](https://github.com/huseinzol05/Malaya/tree/master/example/entities).
</div>
<div class="alert alert-warning">
This module is only trained on standard language structure, so it is not safe to use it for local language structure.
</div>
```
%%time
import malaya
```
### Models accuracy
We use `sklearn.metrics.classification_report` for accuracy reporting, check at https://malaya.readthedocs.io/en/latest/models-accuracy.html#entities-recognition and https://malaya.readthedocs.io/en/latest/models-accuracy.html#entities-recognition-ontonotes5
### Describe supported entities
```
import pandas as pd
pd.set_option('display.max_colwidth', -1)
malaya.entity.describe()
```
### Describe supported Ontonotes 5 entities
```
malaya.entity.describe_ontonotes5()
```
### List available Transformer NER models
```
malaya.entity.available_transformer()
```
### List available Transformer NER Ontonotes 5 models
```
malaya.entity.available_transformer_ontonotes5()
string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'
string1 = 'memperkenalkan Husein, dia sangat comel, berumur 25 tahun, bangsa melayu, agama islam, tinggal di cyberjaya malaysia, bercakap bahasa melayu, semua membaca buku undang-undang kewangan, dengar laju Siti Nurhaliza - Seluruh Cinta sambil makan ayam goreng KFC'
```
### Load Transformer model
```python
def transformer(model: str = 'xlnet', quantized: bool = False, **kwargs):
"""
Load Transformer Entity Tagging model trained on Malaya Entity, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
model = malaya.entity.transformer(model = 'alxlnet')
```
#### Load Quantized model
To load an 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.
```
quantized_model = malaya.entity.transformer(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
model.predict(string)
model.predict(string1)
quantized_model.predict(string)
quantized_model.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
model.analyze(string)
model.analyze(string1)
```
#### Vectorize
Say you want to visualize the word-level vectors in a lower dimension; you can use `model.vectorize`:
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string,
'Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais',
'contact Husein at [email protected]',
'tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek']
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good, the model is able to cluster similar entities.
### Load Transformer Ontonotes 5 model
```python
def transformer_ontonotes5(
model: str = 'xlnet', quantized: bool = False, **kwargs
):
"""
Load Transformer Entity Tagging model trained on Ontonotes 5 Bahasa, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
albert = malaya.entity.transformer_ontonotes5(model = 'albert')
alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet')
```
#### Load Quantized model
To load an 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.
```
quantized_albert = malaya.entity.transformer_ontonotes5(model = 'albert', quantized = True)
quantized_alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
albert.predict(string)
alxlnet.predict(string)
albert.predict(string1)
alxlnet.predict(string1)
quantized_albert.predict(string)
quantized_alxlnet.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
alxlnet.analyze(string1)
```
#### Vectorize
Say you want to visualize the word-level vectors in a lower dimension; you can use `model.vectorize`:
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string, string1]
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good, the model is able to cluster similar entities.
### Load general Malaya entity model
This model is able to classify:
1. date
2. money
3. temperature
4. distance
5. volume
6. duration
7. phone
8. email
9. url
10. time
11. datetime
12. local and generic foods, you can check the available rules in `malaya.texts._food`
13. local and generic drinks, you can check the available rules in `malaya.texts._food`
We can insert BERT or any deep learning model by passing `malaya.entity.general_entity(model = model)`, as long as the model has a `predict` method and returns `[(string, label), (string, label)]`. This is optional.
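As a hedged sketch of that interface, the `MyTagger` class below is hypothetical (it is not part of Malaya); it only illustrates the `predict` contract that `general_entity` expects:
```python
class MyTagger:
    # hypothetical tagger: `predict` must return [(token, label), ...]
    def predict(self, string):
        return [(token, 'OTHER') for token in string.split()]

entity_custom = malaya.entity.general_entity(model = MyTagger())
```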
```
entity = malaya.entity.general_entity(model = model)
entity.predict('Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais')
entity.predict('contact Husein at [email protected]')
entity.predict('tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek')
```
### Voting stack model
```
malaya.stack.voting_stack([albert, alxlnet, alxlnet], string1)
```
# Shallow regression for vector data
This script reads the zip code data produced by **vectorDataPreparations** and creates different machine learning models for predicting the average zip code income from population and spatial variables.
It assesses the model accuracy with a test dataset, but it also predicts the income for all zip codes and writes the result to a GeoPackage for closer inspection.
# 1. Read the data
```
import time
import geopandas as gpd
import pandas as pd
from math import sqrt
import os
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, BaggingRegressor,ExtraTreesRegressor, AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error,r2_score
```
### 1.1 Input and output file paths
```
paavo_data = "../data/paavo"
### Relative path to the zip code geopackage file that was prepared by vectorDataPreparations.py
input_geopackage_path = os.path.join(paavo_data,"zip_code_data_after_preparation.gpkg")
### Output file. You can change the name to identify different regression models
output_geopackage_path = os.path.join(paavo_data,"median_income_per_zipcode_shallow_model.gpkg")
```
### 1.2 Read the input data to a Geopandas dataframe
```
original_gdf = gpd.read_file(input_geopackage_path)
original_gdf.head()
```
# 2. Train the model
Here we try training different models. We encourage you to dive into the documentation of different models a bit and try different parameters.
Which one is the best model? Can you figure out how to improve it even more?
### 2.1 Split the dataset to train and test datasets
```
### Split the gdf to x (the predictor attributes) and y (the attribute to be predicted)
y = original_gdf['hr_mtu'] # Average income
### Remove geometry and textual fields
x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1)
### Split the both datasets to train (80%) and test (20%) datasets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.2, random_state=42)
```
### 2.2 These are the functions used for training, estimating and predicting.
```
def trainModel(x_train, y_train, model):
start_time = time.time()
print(model)
model.fit(x_train,y_train)
print('Model training took: ', round((time.time() - start_time), 2), ' seconds')
return model
def estimateModel(x_test,y_test, model):
### Predict the income for the test dataset
prediction = model.predict(x_test)
### Assess the accuracy of the model with root mean squared error, mean absolute error and coefficient of determination r2
rmse = sqrt(mean_squared_error(y_test, prediction))
mae = mean_absolute_error(y_test, prediction)
r2 = r2_score(y_test, prediction)
print(f"\nMODEL ACCURACY METRICS WITH TEST DATASET: \n" +
f"\t Root mean squared error: {round(rmse)} \n" +
f"\t Mean absolute error: {round(mae)} \n" +
f"\t Coefficient of determination: {round(r2,4)} \n")
```
### 2.3 Run different models
### Gradient Boosting Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#regression
```
model = GradientBoostingRegressor(n_estimators=30, learning_rate=0.1,verbose=1)
model_name = "Gradient Boosting Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Random Forest Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#forest
```
model = RandomForestRegressor(n_estimators=30,verbose=1)
model_name = "Random Forest Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Extra Trees Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html
```
model = ExtraTreesRegressor(n_estimators=30,verbose=1)
model_name = "Extra Trees Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Bagging Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#bagging
```
model = BaggingRegressor(n_estimators=30,verbose=1)
model_name = "Bagging Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### AdaBoost Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#adaboost
```
model = AdaBoostRegressor(n_estimators=30)
model_name = "AdaBoost Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
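As encouraged above, you can also experiment with each model's hyperparameters; one illustrative (untuned) variation could look like this:
```
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.05, max_depth=4, verbose=1)
model_name = "Gradient Boosting Regressor (more estimators)"
trainModel(x_train, y_train, model)
estimateModel(x_test, y_test, model)
```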
# 3. Predict average income to all zip codes
Here we predict the average income to the whole dataset. Prediction is done with the model you have stored in the model variable - the one you ran last
```
### Print chosen model (the one you ran last)
print(model)
### Drop the not-used columns from original_gdf as done before model training.
x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1)
### Predict the median income with already trained model
prediction = model.predict(x)
### Join the predictions to the original geodataframe and pick only interesting columns for results
original_gdf['predicted_hr_mtu'] = prediction.round(0)
original_gdf['difference'] = original_gdf['predicted_hr_mtu'] - original_gdf['hr_mtu']
resulting_gdf = original_gdf[['postinumer','nimi','hr_mtu','predicted_hr_mtu','difference','geometry']]
fig, ax = plt.subplots(figsize=(20, 10))
ax.set_title("Predicted average income by zip code " + model_name, fontsize=25)
ax.set_axis_off()
resulting_gdf.plot(column='predicted_hr_mtu', ax=ax, legend=True, cmap="magma")
```
# 4. EXERCISE: Calculate the difference between real and predicted incomes
Calculate the difference of real and predicted income amounts by zip code level and plot a map of it
* **original_gdf** is the original dataframe
* **resulting_gdf** is the predicted one
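One possible sketch is shown below; it reuses `resulting_gdf` and its `difference` column from section 3 (the colour map choice is arbitrary):
```
fig, ax = plt.subplots(figsize=(20, 10))
ax.set_title("Difference between real and predicted average income by zip code", fontsize=25)
ax.set_axis_off()
resulting_gdf.plot(column='difference', ax=ax, legend=True, cmap="RdBu")
```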
# RNN Sentiment Classifier
In this notebook, we use an RNN to classify IMDB movie reviews by their sentiment.
[](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/rnn_sentiment_classifier.ipynb)
#### Load dependencies
```
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, SpatialDropout1D
from keras.layers import SimpleRNN # new!
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Set hyperparameters
```
# output directory name:
output_dir = 'model_output/rnn'
# training:
epochs = 16 # way more!
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 10000
max_review_length = 100 # lowered due to vanishing gradient over time
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# RNN layer architecture:
n_rnn = 256
drop_rnn = 0.2
# dense layer architecture:
# n_dense = 256
# dropout = 0.2
```
#### Load data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words) # removed n_words_to_skip
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
```
#### Design neural network architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
model.add(SimpleRNN(n_rnn, dropout=drop_rnn))
# model.add(Dense(n_dense, activation='relu')) # typically don't see top dense layer in NLP like in
# model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
#### Configure model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
```
#### Train!
```
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.07.hdf5")
y_hat = model.predict_proba(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
```
## Computer Vision Learner
[`vision.learner`](/vision.learner.html#vision.learner) is the module that defines the [`cnn_learner`](/vision.learner.html#cnn_learner) method, to easily get a model suitable for transfer learning.
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
## Transfer learning
Transfer learning is a technique where you use a model trained on a very large dataset (usually [ImageNet](http://image-net.org/) in computer vision) and then adapt it to your own dataset. The idea is that it has learned to recognize many features on all of this data, and that you will benefit from this knowledge, especially if your dataset is small, compared to starting from a randomly initialized model. It has been shown in [this article](https://arxiv.org/abs/1805.08974) on a wide range of tasks that transfer learning nearly always gives better results.
In practice, you need to change the last part of your model to be adapted to your own number of classes. Most convolutional models end with a few linear layers (a part we will call the head). The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those into predictions for each of our classes. In transfer learning we will keep all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet but will define a new head initialized randomly.
Then we will train the model we obtain in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data), then we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possibly using differential learning rates).
The [`cnn_learner`](/vision.learner.html#cnn_learner) factory method helps you to automatically get a pretrained model from a given architecture with a custom head that is suitable for your data.
```
show_doc(cnn_learner)
```
This method creates a [`Learner`](/basic_train.html#Learner) object from the [`data`](/vision.data.html#vision.data) object and model inferred from it with the backbone given in `arch`. Specifically, it will cut the model defined by `arch` (randomly initialized if `pretrained` is False) at the last convolutional layer by default (or as defined in `cut`, see below) and add:
- an [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) layer,
- a [`Flatten`](/layers.html#Flatten) layer,
- blocks of \[[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)\] layers.
The blocks are defined by the `lin_ftrs` and `ps` arguments. Specifically, the first block will have a number of inputs inferred from the backbone `arch`, and the last one will have a number of outputs equal to `data.c` (which contains the number of classes of the data); the intermediate blocks have a number of inputs/outputs determined by `lin_ftrs` (of course a block has a number of inputs equal to the number of outputs of the previous block). The default is to have an intermediate hidden size of 512 (which makes two blocks `model_activation` -> 512 -> `n_classes`). If you pass a float then the final dropout layer will have the value `ps`, and the remaining will be `ps/2`. If you pass a list then the values are used for dropout probabilities directly.
Note that the very last block doesn't have a [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU) activation, to allow you to use any final activation you want (generally included in the loss function in pytorch). Also, the backbone will be frozen if you choose `pretrained=True` (so only the head will train if you call [`fit`](/basic_train.html#fit)) so that you can immediately start phase one of training as described above.
Alternatively, you can define your own `custom_head` to put on top of the backbone. If you want to specify where to split `arch`, you should do so in the argument `cut`, which can either be the index of a specific layer (the result will not include that layer) or a function that, when passed the model, will return the backbone you want.
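For illustration only, a hedged sketch of these options (it assumes a `data` object such as the `ImageDataBunch` created below; the sizes and dropout values are arbitrary):
```
# custom head sizes and dropout through lin_ftrs / ps
learn = cnn_learner(data, models.resnet18, lin_ftrs=[256], ps=0.5, metrics=[accuracy])
# or a fully custom head (resnet18's body outputs 512 features; average pooling keeps them at 512 here)
custom_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, data.c))
learn = cnn_learner(data, models.resnet18, custom_head=custom_head)
```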
The final model obtained by stacking the backbone and the head (custom or defined as we saw) is then separated in groups for gradual unfreezing or differential learning rates. You can specify how to split the backbone in groups with the optional argument `split_on` (should be a function that returns those groups when given the backbone).
The `kwargs` will be passed on to [`Learner`](/basic_train.html#Learner), so you can put here anything that [`Learner`](/basic_train.html#Learner) will accept ([`metrics`](/metrics.html#metrics), `loss_func`, `opt_func`...)
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learner = cnn_learner(data, models.resnet18, metrics=[accuracy])
learner.fit_one_cycle(1,1e-3)
learner.save('one_epoch')
show_doc(unet_learner)
```
This time the model will be a [`DynamicUnet`](/vision.models.unet.html#DynamicUnet) with an encoder based on `arch` (maybe `pretrained`) that is cut depending on `split_on`. `blur_final`, `norm_type`, `blur`, `self_attention`, `y_range`, `last_cross` and `bottle` are passed to unet constructor, the `kwargs` are passed to the initialization of the [`Learner`](/basic_train.html#Learner).
```
jekyll_warn("The models created with this function won't work with pytorch `nn.DataParallel`, you have to use distributed training instead!")
```
### Get predictions
Once you've actually trained your model, you may want to use it on a single image. This is done by using the following method.
```
show_doc(Learner.predict)
img = learner.data.train_ds[0][0]
learner.predict(img)
```
Here the predicted class for our image is '3', which corresponds to a label of 0. The probabilities the model found for each class are 99.65% and 0.35% respectively, so its confidence is pretty high.
Note that if you want to load your trained model and use it on inference mode with the previous function, you should export your [`Learner`](/basic_train.html#Learner).
```
learner.export()
```
And then you can load it with an empty data object that has the same internal state like this:
```
learn = load_learner(path)
```
### Customize your model
You can customize [`cnn_learner`](/vision.learner.html#cnn_learner) for your own model's default `cut` and `split_on` functions by adding them to the dictionary `model_meta`. The key should be your model and the value should be a dictionary with the keys `cut` and `split_on` (see the source code for examples). The constructor will call [`create_body`](/vision.learner.html#create_body) and [`create_head`](/vision.learner.html#create_head) for you based on `cut`; you can also call them yourself, which is particularly useful for testing.
```
show_doc(create_body)
show_doc(create_head, doc_string=False)
```
Model head that takes `nf` features, runs through `lin_ftrs`, and ends with `nc` classes. `ps` is the probability of the dropouts, as documented above in [`cnn_learner`](/vision.learner.html#cnn_learner).
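If you want to call these helpers yourself (for instance for testing), here is a small hedged sketch; the 512 comes from resnet18's body and is doubled because the default head starts with [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d):
```
body = create_body(models.resnet18, pretrained=False)
head = create_head(nf=512*2, nc=data.c)
model = nn.Sequential(body, head)
```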
```
show_doc(ClassificationInterpretation, title_level=3)
```
This provides a confusion matrix and visualization of the most incorrect images. Pass in your [`data`](/vision.data.html#vision.data), calculated `preds`, actual `y`, and your `losses`, and then use the methods below to view the model interpretation results. For instance:
```
learn = cnn_learner(data, models.resnet18)
learn.fit(1)
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
```
The following factory method gives a more convenient way to create an instance of this class:
```
show_doc(ClassificationInterpretation.from_learner, full_name='from_learner')
```
You can also use a shortcut `learn.interpret()` to do the same.
```
show_doc(Learner.interpret, full_name='interpret')
```
Note that this shortcut is a [`Learner`](/basic_train.html#Learner) object/class method that can be called as: `learn.interpret()`.
```
show_doc(ClassificationInterpretation.plot_top_losses, full_name='plot_top_losses')
```
The `k` items are arranged as a square, so it will look best if `k` is a square number (4, 9, 16, etc). The title of each image shows: prediction, actual, loss, probability of actual class. When `heatmap` is True (by default it's True), Grad-CAM heatmaps (http://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf) are overlaid on each image. `plot_top_losses` should be used with single-labeled datasets. See `plot_multi_top_losses` below for a version capable of handling multi-labeled datasets.
```
interp.plot_top_losses(9, figsize=(7,7))
show_doc(ClassificationInterpretation.top_losses)
```
Returns tuple of *(losses,indices)*.
```
interp.top_losses(9)
show_doc(ClassificationInterpretation.plot_multi_top_losses, full_name='plot_multi_top_losses')
```
Similar to `plot_top_losses()` but aimed at multi-labeled datasets. It plots misclassified samples sorted by their respective loss.
Since you can have multiple labels for a single sample, they can easily overlap in a grid plot. So it plots just one sample per row.
Note that you can pass `save_misclassified=True` (by default it's `False`). In such case, the method will return a list containing the misclassified images which you can use to debug your model and/or tune its hyperparameters.
```
show_doc(ClassificationInterpretation.plot_confusion_matrix)
```
If [`normalize`](/vision.data.html#normalize), plots the percentages with `norm_dec` digits. `slice_size` can be used to avoid out of memory error if your set is too big. `kwargs` are passed to `plt.figure`.
```
interp.plot_confusion_matrix()
show_doc(ClassificationInterpretation.confusion_matrix)
interp.confusion_matrix()
show_doc(ClassificationInterpretation.most_confused)
```
#### Working with large datasets
When working with large datasets, memory problems can arise when computing the confusion matrix. For example, an error can look like this:
RuntimeError: $ Torch: not enough memory: you tried to allocate 64GB. Buy new RAM!
In this case it is possible to force [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) to compute the confusion matrix for data slices and then aggregate the result by specifying the `slice_size` parameter.
```
interp.confusion_matrix(slice_size=10)
interp.plot_confusion_matrix(slice_size=10)
interp.most_confused(slice_size=10)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
# Trax : Ungraded Lecture Notebook
In this notebook you'll get to know about the Trax framework and learn about some of its basic building blocks.
## Background
### Why Trax and not TensorFlow or PyTorch?
TensorFlow and PyTorch are both extensive frameworks that can do almost anything in deep learning. They offer a lot of flexibility, but that often means verbosity of syntax and extra time to code.
Trax is much more concise. It runs on a TensorFlow backend but allows you to train models with one-line commands. Trax also runs end to end, allowing you to get data, build a model and train it, all with single terse statements. This means you can focus on learning, instead of spending hours on the idiosyncrasies of a big framework implementation.
### Why not Keras then?
Keras is now part of TensorFlow itself from 2.0 onwards. Also, Trax is good for implementing new state-of-the-art algorithms like Transformers, Reformers and BERT, because it is actively maintained by the Google Brain team for advanced deep learning tasks. It runs smoothly on CPUs, GPUs and TPUs as well, with comparatively few code modifications.
### How to Code in Trax
Building models in Trax relies on 2 key concepts: **layers** and **combinators**.
Trax layers are simple objects that process data and perform computations. They can be chained together into composite layers using Trax combinators, allowing you to build layers and models of any complexity.
### Trax, JAX, TensorFlow and Tensor2Tensor
You already know that Trax uses Tensorflow as a backend, but it also uses the JAX library to speed up computation too. You can view JAX as an enhanced and optimized version of numpy.
**Watch out for assignments which import `import trax.fastmath.numpy as np`. If you see this line, remember that when calling `np` you are really calling Trax’s version of numpy that is compatible with JAX.**
As a result of this, where you used to encounter the type `numpy.ndarray` now you will find the type `jax.interpreters.xla.DeviceArray`.
Tensor2Tensor is another name you might have heard. It started as an end to end solution much like how Trax is designed, but it grew unwieldy and complicated. So you can view Trax as the new improved version that operates much faster and simpler.
### Resources
- Trax source code can be found on Github: [Trax](https://github.com/google/trax)
- JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html)
## Installing Trax
Trax depends on JAX and some related libraries which are yet to be supported on [Windows](https://github.com/google/jax/blob/1bc5896ee4eab5d7bb4ec6f161d8b2abb30557be/README.md#installation) but work well on Ubuntu and MacOS. We would suggest that if you are working on Windows, you try to install Trax on WSL2.
Official maintained documentation - [trax-ml](https://trax-ml.readthedocs.io/en/latest/) not to be confused with this [TraX](https://trax.readthedocs.io/en/latest/index.html)
```
#!pip install trax==1.3.1 Use this version for this notebook
```
## Imports
```
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
# Trax version 1.3.1 or better
!pip list | grep trax
```
## Layers
Layers are the core building blocks in Trax or as mentioned in the lectures, they are the base classes.
They take inputs, compute functions/custom calculations and return outputs.
You can also inspect layer properties. Let me show you some examples.
### Relu Layer
First I'll show you how to build a relu activation function as a layer. A layer like this is one of the simplest types. Notice there is no object initialization so it works just like a math function.
**Note: Activation functions are also layers in Trax, which might look odd if you have been using other frameworks for a longer time.**
```
# Layers
# Create a relu trax layer
relu = tl.Relu()
# Inspect properties
print("-- Properties --")
print("name :", relu.name)
print("expected inputs :", relu.n_in)
print("promised outputs :", relu.n_out, "\n")
# Inputs
x = np.array([-2, -1, 0, 1, 2])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = relu(x)
print("-- Outputs --")
print("y :", y)
```
### Concatenate Layer
Now I'll show you how to build a layer that takes 2 inputs. Notice the change in the expected inputs property from 1 to 2.
```
# Create a concatenate trax layer
concat = tl.Concatenate()
print("-- Properties --")
print("name :", concat.name)
print("expected inputs :", concat.n_in)
print("promised outputs :", concat.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2, "\n")
# Outputs
y = concat([x1, x2])
print("-- Outputs --")
print("y :", y)
```
## Layers are Configurable
You can change the default settings of layers. For example, you can change the expected inputs for a concatenate layer from 2 to 3 using the optional parameter `n_items`.
```
# Configure a concatenate layer
concat_3 = tl.Concatenate(n_items=3) # configure the layer's expected inputs
print("-- Properties --")
print("name :", concat_3.name)
print("expected inputs :", concat_3.n_in)
print("promised outputs :", concat_3.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
x3 = x2 * 0.99
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2)
print("x3 :", x3, "\n")
# Outputs
y = concat_3([x1, x2, x3])
print("-- Outputs --")
print("y :", y)
```
**Note: At any point,if you want to refer the function help/ look up the [documentation](https://trax-ml.readthedocs.io/en/latest/) or use help function.**
```
#help(tl.Concatenate) # Uncomment this to see the function docstring with explanation
```
## Layers can have Weights
Some layer types include mutable weights and biases that are used in computation and training. Layers of this type require initialization before use.
For example the `LayerNorm` layer calculates normalized data, that is also scaled by weights and biases. During initialization you pass the data shape and data type of the inputs, so the layer can initialize compatible arrays of weights and biases.
```
# Uncomment any of them to see information regarding the function
# help(tl.LayerNorm)
# help(shapes.signature)
# Layer initialization
norm = tl.LayerNorm()
# You first must know what the input data will look like
x = np.array([0, 1, 2, 3], dtype="float")
# Use the input data signature to get shape and type for initializing weights and biases
norm.init(shapes.signature(x)) # We need to convert the input datatype from usual tuple to trax ShapeDtype
print("Normal shape:",x.shape, "Data Type:",type(x.shape))
print("Shapes Trax:",shapes.signature(x),"Data Type:",type(shapes.signature(x)))
# Inspect properties
print("-- Properties --")
print("name :", norm.name)
print("expected inputs :", norm.n_in)
print("promised outputs :", norm.n_out)
# Weights and biases
print("weights :", norm.weights[0])
print("biases :", norm.weights[1], "\n")
# Inputs
print("-- Inputs --")
print("x :", x)
# Outputs
y = norm(x)
print("-- Outputs --")
print("y :", y)
```
## Custom Layers
This is where things start getting more interesting!
You can create your own custom layers too and define custom functions for computations by using `tl.Fn`. Let me show you how.
```
help(tl.Fn)
# Define a custom layer
# In this example you will create a layer to calculate the input times 2
def TimesTwo():
layer_name = "TimesTwo" #don't forget to give your custom layer a name to identify
# Custom function for the custom layer
def func(x):
return x * 2
return tl.Fn(layer_name, func)
# Test it
times_two = TimesTwo()
# Inspect properties
print("-- Properties --")
print("name :", times_two.name)
print("expected inputs :", times_two.n_in)
print("promised outputs :", times_two.n_out, "\n")
# Inputs
x = np.array([1, 2, 3])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = times_two(x)
print("-- Outputs --")
print("y :", y)
```
## Combinators
You can combine layers to build more complex layers. Trax provides a set of objects named combinator layers to make this happen. Combinators are themselves layers, so behavior commutes.
### Serial Combinator
This is the most common and easiest to use. For example, you could build a simple neural network by combining layers into a single layer using the `Serial` combinator. This new layer then acts just like a single layer, so you can inspect inputs, outputs and weights. Or even combine it into another layer! Combinators can then be used as trainable models. _Try adding more layers_
**Note: As you must have guessed, if there is a Serial combinator, there must be a Parallel combinator as well. Do try to explore combinators and other layers in the Trax documentation, and look at the repo to understand how these layers are written.**
```
# help(tl.Serial)
# help(tl.Parallel)
# Serial combinator
serial = tl.Serial(
tl.LayerNorm(), # normalize input
tl.Relu(), # convert negative values to zero
times_two, # the custom layer you created above, multiplies the input received from above by 2
### START CODE HERE
# tl.Dense(n_units=2), # try adding more layers. eg uncomment these lines
# tl.Dense(n_units=1), # Binary classification, maybe? uncomment at your own peril
# tl.LogSoftmax() # Yes, LogSoftmax is also a layer
### END CODE HERE
)
# Initialization
x = np.array([-2, -1, 0, 1, 2]) #input
serial.init(shapes.signature(x)) #initialising serial instance
print("-- Serial Model --")
print(serial,"\n")
print("-- Properties --")
print("name :", serial.name)
print("sublayers :", serial.sublayers)
print("expected inputs :", serial.n_in)
print("promised outputs :", serial.n_out)
print("weights & biases:", serial.weights, "\n")
# Inputs
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = serial(x)
print("-- Outputs --")
print("y :", y)
```
## JAX
Just remember to look out for which numpy you are using, the regular ol' numpy or Trax's JAX-compatible numpy. Both tend to use the alias np, so watch those import blocks.
**Note: There are certain things which are still not possible in fastmath.numpy but can be done in numpy, so you will see in the assignments that we switch between them to get our work done.**
```
# Numpy vs fastmath.numpy have different data types
# Regular ol' numpy
x_numpy = np.array([1, 2, 3])
print("good old numpy : ", type(x_numpy), "\n")
# Fastmath and jax numpy
x_jax = fastmath.numpy.array([1, 2, 3])
print("jax trax numpy : ", type(x_jax))
```
## Summary
Trax is a concise framework, built on TensorFlow, for end-to-end machine learning. The key building blocks are layers and combinators. This notebook is just a taste, but it sets you up with some key intuitions to take forward into the rest of the course and the assignments, where you will build end-to-end models.
<a href="https://colab.research.google.com/github/choderalab/pinot/blob/master/scripts/adlala_mol_graph.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# import
```
! rm -rf pinot
! git clone https://github.com/choderalab/pinot.git
! pip install dgl
! wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
! chmod +x Miniconda3-latest-Linux-x86_64.sh
! time bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
! time conda install -q -y -c conda-forge rdkit
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
sys.path.append('/content/pinot/')
```
# data
```
import pinot
dir(pinot)
ds = pinot.data.esol()
ds = pinot.data.utils.batch(ds, 32)
ds_tr, ds_te = pinot.data.utils.split(ds, [4, 1])
```
# network
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
```
# Adam
```
import torch
import numpy as np
opt = torch.optim.Adam(net.parameters(), 1e-3)
loss_fn = torch.nn.functional.mse_loss
rmse_tr = []
rmse_te = []
for _ in range(100):
for g, y in ds_tr:
opt.zero_grad()
y_hat = net(g)
loss = loss_fn(y, y_hat)
loss.backward()
opt.step()
rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# Langevin
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
opt = pinot.inference.adlala.AdLaLa(net.parameters(), partition='La', h=1e-3)
rmse_tr = []
rmse_te = []
for _ in range(100):
for g, y in ds_tr:
def l():
opt.zero_grad()
y_hat = net(g)
loss = loss_fn(y, y_hat)
loss.backward()
print(loss)
return loss
opt.step(l)
rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# Adaptive Langevin
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
opt = pinot.inference.adlala.AdLaLa(net.parameters(), partition='AdLa', h=1e-3)
rmse_tr = []
rmse_te = []
for _ in range(100):
for g, y in ds_tr:
def l():
opt.zero_grad()
y_hat = net(g)
loss = loss_fn(y, y_hat)
loss.backward()
print(loss)
return loss
opt.step(l)
rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# AdLaLa: AdLa for GN, La for last layer
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
net
opt = pinot.inference.adlala.AdLaLa(
[
{'params': list(net.f_in.parameters())\
+ list(net.d0.parameters())\
+ list(net.d2.parameters())\
+ list(net.d4.parameters()), 'partition': 'AdLa', 'h': torch.tensor(1e-3)},
{
'params': list(net.d6.parameters()) + list(net.f_out.parameters()),
'partition': 'La', 'h': torch.tensor(1e-3)
}
])
rmse_tr = []
rmse_te = []
for _ in range(100):
for g, y in ds_tr:
def l():
opt.zero_grad()
y_hat = net(g)
loss = loss_fn(y, y_hat)
loss.backward()
print(loss)
return loss
opt.step(l)
rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# Data Loading Tutorial
```
cd ../..
save_path = 'data/'
from scvi.dataset import LoomDataset, CsvDataset, Dataset10X, AnnDataset
import urllib.request
import os
from scvi.dataset import BrainLargeDataset, CortexDataset, PbmcDataset, RetinaDataset, HematoDataset, CbmcDataset, BrainSmallDataset, SmfishDataset
```
## Generic Datasets
`scvi v0.1.3` supports dataset loading for the following generic file formats:
* `.loom` files
* `.csv` files
* `.h5ad` files
* datasets from `10x` website
Most of the dataset loading instances implemented in scvi use a positional argument `filename` and an optional argument `save_path` (value by default: `data/`). Files will be downloaded or searched for at the location `os.path.join(save_path, filename)`; make sure this path is valid when you specify the arguments.
### Loading a `.loom` file
Any `.loom` file can be loaded with initializing `LoomDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url the dataset if the file needs to be downloaded from the web
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
```
# Loading a remote dataset
remote_loom_dataset = LoomDataset("osmFISH_SScortex_mouse_all_cell.loom",
save_path=save_path,
url='http://linnarssonlab.org/osmFISH/osmFISH_SScortex_mouse_all_cells.loom')
# Loading a local dataset
local_loom_dataset = LoomDataset("osmFISH_SScortex_mouse_all_cell.loom",
save_path=save_path)
```
### Loading a `.csv` file
Any `.csv` file can be loaded with initializing `CsvDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url of the dataset if the file needs to be downloaded from the web
* `compression`: set `compression` as `.gz`, `.bz2`, `.zip`, or `.xz` to load a zipped `csv` file
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
Note: `CsvDataset` currently only supports `.csv` files that are genes by cells.
If the dataset has already been downloaded at the location `save_path`, it will not be downloaded again.
```
# Loading a remote dataset
remote_csv_dataset = CsvDataset("GSE100866_CBMC_8K_13AB_10X-RNA_umi.csv.gz",
save_path=save_path,
compression='gzip',
url = "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE100866&format=file&file=GSE100866%5FCBMC%5F8K%5F13AB%5F10X%2DRNA%5Fumi%2Ecsv%2Egz")
# Loading a local dataset
local_csv_dataset = CsvDataset("GSE100866_CBMC_8K_13AB_10X-RNA_umi.csv.gz",
save_path=save_path,
compression='gzip')
```
### Loading a `.h5ad` file
[AnnData](http://anndata.readthedocs.io/en/latest/) objects can be stored in `.h5ad` format. Any `.h5ad` file can be loaded with initializing `AnnDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url the dataset if the file needs to be downloaded from the web
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
```
# Loading a local dataset
local_ann_dataset = AnnDataset("TM_droplet_mat.h5ad",
save_path = save_path)
```
### Loading a file from `10x` website
If the dataset has already been downloaded at the location `save_path`, it will not be downloaded again.
`10x` has published several datasets on their [website](https://www.10xgenomics.com).
Initialize `Dataset10X` by passing in the dataset name of one of the following datasets that `scvi` currently supports: `frozen_pbmc_donor_a`, `frozen_pbmc_donor_b`, `frozen_pbmc_donor_c`, `pbmc8k`, `pbmc4k`, `t_3k`, `t_4k`, and `neuron_9k`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `type`: set `type` (default to be `filtered`) to be `filtered` or `raw` to choose one from the two datasets that's available on `10X`
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
```
tenX_dataset = Dataset10X("neuron_9k", save_path=save_path)
```
### Loading local `10x` data
It is also possible to create a Dataset object from 10X data saved locally. Initialize Dataset10X by passing in the optional remote argument as False to specify you're loading local data and give the name of the directory that contains the gene expression matrix and gene names of the data as well as the path to this directory.
If your data (the genes.tsv and matrix.mtx files) is located inside the directory 'mm10', which is located at 'data/10X/neuron_9k/filtered_gene_bc_matrices/', then filename should have the value 'mm10' and save_path should be the path to the directory containing 'mm10'.
```
local_10X_dataset = Dataset10X('mm10', save_path=os.path.join(save_path, '10X/neuron_9k/filtered_gene_bc_matrices/'),
remote=False)
```
## Built-In Datasets
We've also implemented seven built-in datasets to make it easier to reproduce results from the scVI paper.
* **PBMC**: 12,039 human peripheral blood mononuclear cells profiled with 10x;
* **RETINA**: 27,499 mouse retinal bipolar neurons, profiled in two batches using the Drop-Seq technology;
* **HEMATO**: 4,016 cells from two batches that were profiled using in-drop;
* **CBMC**: 8,617 cord blood mononuclear cells profiled using 10x along with, for each cell, 13 well-characterized mononuclear antibodies;
* **BRAIN SMALL**: 9,128 mouse brain cells profiled using 10x.
* **BRAIN LARGE**: 1.3 million mouse brain cells profiled using 10x;
* **CORTEX**: 3,005 mouse Cortex cells profiled using the Smart-seq2 protocol, with the addition of UMI
* **SMFISH**: 4,462 mouse Cortex cells profiled using the osmFISH protocol
* **DROPSEQ**: 71,639 mouse Cortex cells profiled using the Drop-Seq technology
* **STARMAP**: 3,722 mouse Cortex cells profiled using the STARmap technology
### Loading `STARMAP` dataset
`StarmapDataset` consists of 3722 cells profiled in 3 batches. The cells come with spatial coordinates of their location inside the tissue from which they were extracted and cell type labels retrieved by the authors of the original publication.
Reference: X.Wang et al., Science10.1126/science.aat5691 (2018)
### Loading `DROPSEQ` dataset
`DropseqDataset` consists of 71,639 mouse Cortex cells profiled using the Drop-Seq technology. To facilitate comparison with other methods we use a random filtered set of 15000 cells and then keep only a filtered set of 6000 highly variable genes. Cells have cell type annotations and even sub-cell type annotations inferred by the authors of the original publication.
Reference: https://www.biorxiv.org/content/biorxiv/early/2018/04/10/299081.full.pdf
### Loading `SMFISH` dataset
`SmfishDataset` consists of 4,462 mouse cortex cells profiled using the OsmFISH protocol. The cells come with spatial coordinates of their location inside the tissue from which they were extracted and cell type labels retrieved by the authors of the original publication.
Reference: Simone Codeluppi, Lars E Borm, Amit Zeisel, Gioele La Manno, Josina A van Lunteren, Camilla I Svensson, and Sten Linnarsson. Spatial organization of the somatosensory cortex revealed by cyclic smFISH. bioRxiv, 2018.
```
smfish_dataset = SmfishDataset(save_path=save_path)
```
### Loading `BRAIN-LARGE` dataset
<font color='red'>Loading BRAIN-LARGE requires at least 32 GB memory!</font>
`BrainLargeDataset` consists of 1.3 million mouse brain cells, spanning the cortex, hippocampus and subventricular zone, and profiled with 10x chromium. We use this dataset to demonstrate the scalability of scVI.
Reference: 10x genomics (2017). URL https://support.10xgenomics.com/single-cell-gene-expression/datasets.
```
brain_large_dataset = BrainLargeDataset(save_path=save_path)
```
### Loading `CORTEX` dataset
`CortexDataset` consists of 3,005 mouse cortex cells profiled with the Smart-seq2 protocol, with the addition of UMI. To facilitate comparison with other methods, we use a filtered set of 558 highly variable genes. The `CortexDataset` exhibits a clear high-level subpopulation structure, which has been inferred by the authors of the original publication using computational tools and annotated by inspection of specific genes or transcriptional programs. Similar levels of annotation are provided with the `PbmcDataset` and `RetinaDataset`.
Reference: Zeisel, A. et al. Cell types in the mouse cortex and hippocampus revealed by single-cell rna-seq. Science 347, 1138–1142 (2015).
```
cortex_dataset = CortexDataset(save_path=save_path)
```
### Loading `PBMC` dataset
`PbmcDataset` consists of 12,039 human peripheral blood mononuclear cells profiled with 10x.
Reference: Zheng, G. X. Y. et al. Massively parallel digital transcriptional profiling of single cells. Nature Communications 8, 14049 (2017).
```
pbmc_dataset = PbmcDataset(save_path=save_path)
```
### Loading `RETINA` dataset
`RetinaDataset` includes 27,499 mouse retinal bipolar neurons, profiled in two batches using the Drop-Seq technology.
Reference: Shekhar, K. et al. Comprehensive classification of retinal bipolar neurons by single-cell transcriptomics. Cell 166, 1308–1323.e30 (2017).
```
retina_dataset = RetinaDataset(save_path=save_path)
```
### Loading `HEMATO` dataset
`HematoDataset` includes 4,016 cells from two batches that were profiled using in-drop. This data provides a snapshot of hematopoietic progenitor cells differentiating into various lineages. We use this dataset as an example for cases where gene expression varies in a continuous fashion (along pseudo-temporal axes) rather than forming discrete subpopulations.
Reference: Tusi, B. K. et al. Population snapshots predict early haematopoietic and erythroid hierarchies. Nature 555, 54–60 (2018).
```
hemato_dataset = HematoDataset(save_path=os.path.join(save_path, 'HEMATO/'))
```
### Loading `CBMC` dataset
`CbmcDataset` includes 8,617 cord blood mononuclear cells profiled using 10x along with, for each cell, 13 well-characterized mononuclear antibodies. We used this dataset to analyze how the latent spaces inferred by dimensionality-reduction algorithms summarize protein marker abundance.
Reference: Stoeckius, M. et al. Simultaneous epitope and transcriptome measurement in single cells. Nature Methods 14, 865–868 (2017).
```
cbmc_dataset = CbmcDataset(save_path=os.path.join(save_path, "citeSeq/"))
```
### Loading `BRAIN-SMALL` dataset
`BrainSmallDataset` consists of 9,128 mouse brain cells profiled using 10x. This dataset is used as a complement to PBMC for our study of zero abundance and quality control metrics correlation with our generative posterior parameters.
Reference:
```
brain_small_dataset = BrainSmallDataset(save_path=save_path)
def allow_notebook_for_test():
print("Testing the data loading notebook")
```
```
%matplotlib inline
# Importing standard Qiskit libraries and configuring account
from qiskit import QuantumCircuit, execute, Aer, IBMQ
from qiskit.compiler import transpile, assemble
from qiskit.tools.jupyter import *
from qiskit.visualization import *
# Loading your IBM Q account(s)
provider = IBMQ.load_account()
```
# Chapter 11 - Ignis
```
# Import plot and math libraries
import numpy as np
import matplotlib.pyplot as plt
# Import the noise models and some standard error methods
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import amplitude_damping_error, phase_damping_error
# Import all three coherence circuits generators and fitters
from qiskit.ignis.characterization.coherence import t1_circuits, t2_circuits, t2star_circuits
from qiskit.ignis.characterization.coherence import T1Fitter, T2Fitter, T2StarFitter
# Generate the T1 test circuits
# Generate a list of number of gates to add to each circuit
# using np.linspace so that the number of gates increases linearly
# and append with a large span at the end of the list (200-4000)
num_of_gates = np.append((np.linspace(1, 100, 12)).astype(int), np.array([200, 400, 800, 1000, 2000, 4000]))
#Define the gate time for each Identity gate
gate_time = 0.1
# Select the first qubit as the one we wish to measure T1
qubits = [0]
# Generate the test circuits given the above parameters
test_circuits, delay_times = t1_circuits(num_of_gates, gate_time, qubits)
# The number of I gates appended for each circuit
print('Number of gates per test circuit: \n', num_of_gates, '\n')
# The gate time of each circuit (number of I gates * gate_time)
print('Delay times for each test circuit created, respectively:\n', delay_times)
print('Total test circuits created: ', len(test_circuits))
print('Test circuit 1 with 1 Identity gate:')
test_circuits[0].draw()
print('Test circuit 2 with 10 Identity gates:')
test_circuits[1].draw()
# Set the simulator with amplitude damping noise
# Set the amplitude damping noise channel parameters T1 and Lambda
t1 = 20
lam = np.exp(-gate_time/t1)
# Generate the amplitude damping error channel
error = amplitude_damping_error(1 - lam)
noise_model = NoiseModel()
# Set the damping error to the ID gate on qubit 0.
noise_model.add_quantum_error(error, 'id', [0])
# Run the simulator with the generated noise model
backend = Aer.get_backend('qasm_simulator')
shots = 200
backend_result = execute(test_circuits, backend, shots=shots, noise_model=noise_model).result()
# Plot the noisy results of the shortest (first in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[0]))
# Plot the noisy results of the largest (last in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[len(test_circuits)-1]))
# Initialize the parameters for the T1Fitter, A, T1, and B
param_t1 = t1*1.2
param_a = 1.0
param_b = 0.0
# Generate the T1Fitter for our test circuit results
fit = T1Fitter(backend_result, delay_times, qubits,
fit_p0=[param_a, param_t1, param_b],
fit_bounds=([0, 0, -1], [2, param_t1*2, 1]))
# Plot the fitter results for T1 over each test circuit's delay time
fit.plot(0)
# Import the thermal relaxation error we will use to create our error
from qiskit.providers.aer.noise.errors.standard_errors import thermal_relaxation_error
# Import the T2Fitter Class and t2_circuits method
from qiskit.ignis.characterization.coherence import T2Fitter
from qiskit.ignis.characterization.coherence import t2_circuits
num_of_gates = (np.linspace(1, 300, 50)).astype(int)
gate_time = 0.1
# Note that it is possible to measure several qubits in parallel
qubits = [0]
t2echo_test_circuits, t2echo_delay_times = t2_circuits(num_of_gates, gate_time, qubits)
# The number of I gates appended for each circuit
print('Number of gates per test circuit: \n', num_of_gates, '\n')
# The gate time of each circuit (number of I gates * gate_time)
print('Delay times for T2 echo test circuits:\n', t2echo_delay_times)
# Draw the first T2 test circuit
t2echo_test_circuits[0].draw()
# We'll create a noise model on the backend simulator
backend = Aer.get_backend('qasm_simulator')
shots = 400
# set the t2 decay time
t2 = 25.0
# Define the T2 noise model based on the thermal relaxation error model
t2_noise_model = NoiseModel()
t2_noise_model.add_quantum_error(thermal_relaxation_error(np.inf, t2, gate_time, 0.5), 'id', [0])
# Execute the circuit on the noisy backend
t2echo_backend_result = execute(t2echo_test_circuits, backend, shots=shots,
noise_model=t2_noise_model, optimization_level=0).result()
plot_histogram(t2echo_backend_result.get_counts(t2echo_test_circuits[0]))
plot_histogram(t2echo_backend_result.get_counts(t2echo_test_circuits[len(t2echo_test_circuits)-1]))
```
# T2 Decoherence Time
```
# Generate the T2Fitter class using similar parameters as the T1Fitter
t2echo_fit = T2Fitter(t2echo_backend_result, t2echo_delay_times,
qubits, fit_p0=[0.5, t2, 0.5], fit_bounds=([-0.5, 0, -0.5], [1.5, 40, 1.5]))
# Print and plot the results
print(t2echo_fit.params)
t2echo_fit.plot(0)
plt.show()
# 50 total linearly spaced number of gates
# 30 from 1->150, 20 from 160->450
num_of_gates = np.append((np.linspace(1, 150, 30)).astype(int), (np.linspace(160,450,20)).astype(int))
# Set the Identity gate delay time
gate_time = 0.1
# Select the qubit to measure T2*
qubits = [0]
# Generate the 50 test circuits with number of oscillations set to 4
test_circuits, delay_times, osc_freq = t2star_circuits(num_of_gates, gate_time, qubits, nosc=4)
print('Circuits generated: ', len(test_circuits))
print('Delay times: ', delay_times)
print('Oscillating frequency: ', osc_freq)
print(test_circuits[0].count_ops())
test_circuits[0].draw()
print(test_circuits[1].count_ops())
test_circuits[1].draw()
# Get the backend to execute the test circuits
backend = Aer.get_backend('qasm_simulator')
# Set the T2* value to 10
t2Star = 10
# Set the phase damping error and add it to the noise model to the Identity gates
error = phase_damping_error(1 - np.exp(-2*gate_time/t2Star))
noise_model = NoiseModel()
noise_model.add_quantum_error(error, 'id', [0])
# Run the simulator
shots = 1024
backend_result = execute(test_circuits, backend, shots=shots,
noise_model=noise_model).result()
# Plot the noisy results of the shortest (first in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[0]))
# Plot the noisy results of the largest (last in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[len(test_circuits)-1]))
# Set the initial values of the T2StarFitter parameters
param_T2Star = t2Star*1.1
param_A = 0.5
param_B = 0.5
# Generate the T2StarFitter with the given parameters and bounds
fit = T2StarFitter(backend_result, delay_times, qubits,
fit_p0=[0.5, t2Star, osc_freq, 0, 0.5],
fit_bounds=([-0.5, 0, 0, -np.pi, -0.5],
[1.5, 40, 2*osc_freq, np.pi, 1.5]))
# Plot the qubit characterization from the T2StarFitter
fit.plot(0)
```
# Mitigating Readout errors
```
# Import Qiskit classes
from qiskit.providers.aer import noise
from qiskit.tools.visualization import plot_histogram
# Import measurement calibration functions
from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter
# Generate the calibration circuits
# Set the number of qubits
num_qubits = 5
# Set the qubit list to generate the measurement calibration circuits
qubit_list = [0,1,2,3,4]
# Generate the measurement calibrations circuits and state labels
meas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=num_qubits, circlabel='mcal')
# Print the number of measurement calibration circuits generated
print(len(meas_calibs))
# Draw any of the generated calibration circuits, 0-31.
# In this example we will draw the last one.
meas_calibs[31].draw()
state_labels
# Execute the calibration circuits without noise on the qasm simulator
backend = Aer.get_backend('qasm_simulator')
job = execute(meas_calibs, backend=backend, shots=1000)
# Obtain the measurement calibration results
cal_results = job.result()
# The calibration matrix without noise is the identity matrix
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
meas_fitter.plot_calibration()
# Create a 5 qubit circuit
qc = QuantumCircuit(5,5)
# Place the first qubit in superposition
qc.h(0)
# Entangle all other qubits together
qc.cx(0, 1)
qc.cx(1, 2)
qc.cx(2, 3)
qc.cx(3, 4)
# Include a barrier just to ease visualization of the circuit
qc.barrier()
# Measure and draw the final circuit
qc.measure([0,1,2,3,4], [0,1,2,3,4])
qc.draw()
# Obtain the least busy backend device, not a simulator
from qiskit.providers.ibmq import least_busy
# Find the least busy operational quantum device with 5 or more qubits
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 5 and not x.configuration().simulator and x.status().operational==True))
# Print the least busy backend
print("least busy backend: ", backend)
# Execute the quantum circuit on the backend
job = execute(qc, backend=backend, shots=1024)
results = job.result()
# Results from backend without mitigating the noise
noisy_counts = results.get_counts()
# Obtain the measurement fitter object
measurement_filter = meas_fitter.filter
# Mitigate the results by applying the measurement fitter
filtered_results = measurement_filter.apply(results)
# Get the mitigated result counts
filtered_counts = filtered_results.get_counts(0)
plot_histogram(noisy_counts)
plot_histogram(filtered_counts)
import qiskit.tools.jupyter
%qiskit_version_table
```
# DeepDreaming with TensorFlow
>[Loading and displaying the model graph](#loading)
>[Naive feature visualization](#naive)
>[Multiscale image generation](#multiscale)
>[Laplacian Pyramid Gradient Normalization](#laplacian)
>[Playing with feature visualizations](#playing)
>[DeepDream](#deepdream)
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
- visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)
- embed TensorBoard graph visualizations into Jupyter notebooks
- produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg))
- use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
- generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for the [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.
```
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
```
<a id='loading'></a>
## Loading and displaying the model graph
The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:
```
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
```
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
```
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
```
<a id='naive'></a>
## Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
```
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
```
<a id="multiscale"></a>
## Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
```
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
```
<a id="laplacian"></a>
## Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.
```
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
```
<a id="playing"></a>
## Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
```
render_lapnorm(T(layer)[:,:,:,65])
```
Lower layers produce features of lower complexity.
```
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
```
There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
```
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
```
<a id="deepdream"></a>
## DeepDream
Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow.
```
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
```
Let's load some image and populate it with DogSlugs (in case you've missed them).
```
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
```
Note that results can differ from the [Caffe](https://github.com/BVLC/caffe)'s implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
```
render_deepdream(T(layer)[:,:,:,139], img0)
```
Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over the bigger image.
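As a minimal sketch of that idea (the resized input and octave count here are illustrative choices, not part of the original example):
```
# Upscale the same test image and add octaves so detail is regenerated at the larger size
big_img = np.float32(PIL.Image.open('pilatus800.jpg').resize((1600, 1200)))
render_deepdream(tf.square(T('mixed4c')), big_img, octave_n=6)
```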
We hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications.
# Feature Engineering

## Objective
Data preprocessing and engineering techniques generally refer to the addition, deletion, or transformation of data.
Identifying data engineering needs can take significant effort and requires you to spend substantial time understanding your data...
> _"Live with your data before you plunge into modeling"_ - Leo Breiman
In this module we introduce:
- an example of preprocessing numerical features,
- two common ways to preprocess categorical features,
- using a scikit-learn pipeline to chain preprocessing and model training.
## Basic prerequisites
Let's go ahead and import a couple required libraries and import our data.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last">We will import additional libraries and functions as we proceed but we do so at the time of using the libraries and functions as that provides better learning context.</p>
</div>
```
import pandas as pd
# to display nice model diagram
from sklearn import set_config
set_config(display='diagram')
# import data
adult_census = pd.read_csv('../data/adult-census.csv')
# separate feature & target data
target = adult_census['class']
features = adult_census.drop(columns='class')
```
## Selection based on data types
Typically, data types fall into two categories:
* __Numeric__: a quantity represented by a real or integer number.
* __Categorical__: a discrete value, typically represented by string labels (but not only) taken from a finite list of possible choices.
```
features.dtypes
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">Do not take dtype output at face value! It is possible to have categorical data represented by numbers (i.e. <tt class="docutils literal">education_num</tt>. And <tt class="docutils literal">object</tt> dtypes can represent data that would be better represented as continuous numbers (i.e. dates).
Bottom line, always understand how your data is representing your features!
</p>
</div>
We can separate categorical and numerical variables using their data types to identify them.
There are a few ways we can do this. Here, we make use of [`make_column_selector`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_selector.html) helper to select the corresponding columns.
```
from sklearn.compose import make_column_selector as selector
# create selector object based on data type
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
# get columns of interest
numerical_columns = numerical_columns_selector(features)
categorical_columns = categorical_columns_selector(features)
# results in a list containing relevant column names
numerical_columns
```
## Preprocessing numerical data
Scikit-learn works "out of the box" with numeric features. However, some algorithms make some assumptions regarding the distribution of our features.
We see that our numeric features span across different ranges:
```
numerical_features = features[numerical_columns]
numerical_features.describe()
```
Normalizing our features so that they have mean = 0 and standard deviation = 1, helps to ensure our features align to algorithm assumptions.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p>Here are some reasons for scaling features:</p>
<ul class="last simple">
<li>Models that rely on the distance between a pair of samples, for instance
k-nearest neighbors, should be trained on normalized features to make each
feature contribute approximately equally to the distance computations.</li>
<li>Many models such as logistic regression use a numerical solver (based on
gradient descent) to find their optimal parameters. This solver converges
faster when the features are scaled.</li>
</ul>
</div>
Whether or not a machine learning model requires normalization of the features depends on the model family. Linear models such as logistic regression generally benefit from scaling the features while other models such as tree-based models (i.e. decision trees, random forests) do not need such preprocessing (but will not suffer from it).
We can apply such normalization using a scikit-learn transformer called [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(numerical_features)
```
The `fit` method for transformers is similar to the `fit` method for
predictors. The main difference is that the former has a single argument (the
feature matrix), whereas the latter has two arguments (the feature matrix and the
target).

In this case, the algorithm needs to compute the mean and standard deviation
for each feature and store them into some NumPy arrays. Here, these
statistics are the model states.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last">The fact that the model states of this scaler are arrays of means and
standard deviations is specific to the <tt class="docutils literal">StandardScaler</tt>. Other
scikit-learn transformers will compute different statistics and store them
as model states, in the same fashion.</p>
</div>
We can inspect the computed means and standard deviations.
```
scaler.mean_
scaler.scale_
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">Scikit-learn convention: if an attribute is learned from the data, its name
ends with an underscore (i.e. <tt class="docutils literal">_</tt>), as in <tt class="docutils literal">mean_</tt> and <tt class="docutils literal">scale_</tt> for the
<tt class="docutils literal">StandardScaler</tt>.</p>
</div>
Once we have called the `fit` method, we can perform data transformation by
calling the method `transform`.
```
numerical_features_scaled = scaler.transform(numerical_features)
numerical_features_scaled
```
Let's illustrate the internal mechanism of the `transform` method and put it
to perspective with what we already saw with predictors.

The `transform` method for transformers is similar to the `predict` method
for predictors. It uses a predefined function, called a **transformation
function**, and uses the model states and the input data. However, instead of
outputting predictions, the job of the `transform` method is to output a
transformed version of the input data.
Finally, the method `fit_transform` is a shorthand method to call
successively `fit` and then `transform`.

```
# fitting and transforming in one step
scaler.fit_transform(numerical_features)
```
Notice that the mean of all the columns is close to 0 and the standard deviation in all cases is close to 1:
```
numerical_features = pd.DataFrame(
numerical_features_scaled,
columns=numerical_columns
)
numerical_features.describe()
```
## Model pipelines
We can easily combine sequential operations with a scikit-learn
[`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), which chains together operations and is used as any other
classifier or regressor. The helper function [`make_pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html#sklearn.pipeline.make_pipeline) will create a
`Pipeline`: it takes as arguments the successive transformations to perform,
followed by the classifier or regressor model.
```
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
model
```
Let's divide our data into train and test sets and then apply and score our logistic regression model:
```
from sklearn.model_selection import train_test_split
# split our data into train & test
X_train, X_test, y_train, y_test = train_test_split(numerical_features, target, random_state=123)
# fit our pipeline model
model.fit(X_train, y_train)
# score our model on the test data
model.score(X_test, y_test)
```
## Preprocessing categorical data
Unfortunately, Scikit-learn does not accept categorical features in their raw form. Consequently, we need to transform them into numerical representations.
The following presents typical ways of dealing with categorical variables by encoding them, namely **ordinal encoding** and **one-hot encoding**.
### Encoding ordinal categories
The most intuitive strategy is to encode each category with a different
number. The [`OrdinalEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html) will transform the data in such manner.
We will start by encoding a single column to understand how the encoding
works.
```
from sklearn.preprocessing import OrdinalEncoder
# let's illustrate with the 'education' feature
education_column = features[["education"]]
encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
We see that each category in `"education"` has been replaced by a numeric
value. We could check the mapping between the categories and the numerical
values by checking the fitted attribute `categories_`.
```
encoder.categories_
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last"><tt class="docutils literal">OrindalEncoder</tt> transforms the category value into the corresponding index value of <tt class="docutils literal">encoder.categories_</tt>.</p>
</div>
However, be careful when applying this encoding strategy:
using this integer representation leads downstream predictive models
to assume that the values are ordered (0 < 1 < 2 < 3... for instance).
By default, `OrdinalEncoder` uses a lexicographical strategy to map string
category labels to integers. This strategy is arbitrary and often
meaningless. For instance, suppose the dataset has a categorical variable
named `"size"` with categories such as "S", "M", "L", "XL". We would like the
integer representation to respect the meaning of the sizes by mapping them to
increasing integers such as `0, 1, 2, 3`.
However, the lexicographical strategy used by default would map the labels
"S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.
The `OrdinalEncoder` class accepts a `categories` argument to
pass categories in the expected ordering explicitly (`categories[i]` holds the categories expected in the ith column).
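As a quick illustration with a hypothetical `size` column (not part of the census data), the explicit ordering could look like this:
```
sizes = pd.DataFrame({"size": ["S", "XL", "M", "L"]})
size_encoder = OrdinalEncoder(categories=[["S", "M", "L", "XL"]])
size_encoder.fit_transform(sizes)  # S -> 0, M -> 1, L -> 2, XL -> 3
```
Below, the same idea is applied to the `education` feature: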
```
ed_levels = [' Preschool', ' 1st-4th', ' 5th-6th', ' 7th-8th', ' 9th', ' 10th', ' 11th',
' 12th', ' HS-grad', ' Prof-school', ' Some-college', ' Assoc-acdm',
' Assoc-voc', ' Bachelors', ' Masters', ' Doctorate']
encoder = OrdinalEncoder(categories=[ed_levels])
education_encoded = encoder.fit_transform(education_column)
education_encoded
encoder.categories_
```
If a categorical variable does not carry any meaningful order information
then this encoding might be misleading to downstream statistical models and
you might consider using one-hot encoding instead (discussed next).
### Encoding nominal categories
[`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) is an alternative encoder that converts the categorical levels into new columns.
We will start by encoding a single feature (e.g. `"education"`) to illustrate
how the encoding works.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p><tt class="docutils literal">sparse=False</tt> is used in the <tt class="docutils literal">OneHotEncoder</tt> for didactic purposes, namely
easier visualization of the data.</p>
<p class="last">Sparse matrices are efficient data structures when most of your matrix
elements are zero. They won't be covered in detail in this workshop. If you
want more details about them, you can look at
<a class="reference external" href="https://scipy-lectures.org/advanced/scipy_sparse/introduction.html#why-sparse-matrices">this</a>.</p>
</div>
Viewing this as a data frame provides a more intuitive illustration:
```
feature_names = encoder.get_feature_names(input_features=["education"])
pd.DataFrame(education_encoded, columns=feature_names)
```
As we can see, each category (unique value) became a column; the encoding
returned, for each sample, a 1 to specify which category it belongs to.
Let's apply this encoding to all the categorical features:
```
# get all categorical features
categorical_features = features[categorical_columns]
# one-hot encode all features
categorical_features_encoded = encoder.fit_transform(categorical_features)
# view as a data frame
columns_encoded = encoder.get_feature_names(categorical_features.columns)
pd.DataFrame(categorical_features_encoded, columns=columns_encoded).head()
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">One-hot encoding can significantly increase the number of features in our data. In this case we went from 8 features to 102! If you have a data set with many categorical variables and those categorical variables in turn have many unique levels, the number of features can explode. In these cases you may want to explore ordinal encoding or some other alternative.</p>
</div>
### Choosing an encoding strategy
Choosing an encoding strategy will depend on the underlying models and the
type of categories (i.e. ordinal vs. nominal).
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">In general <tt class="docutils literal">OneHotEncoder</tt> is the encoding strategy used when the
downstream models are <strong>linear models</strong> while <tt class="docutils literal">OrdinalEncoder</tt> is often a
good strategy with <strong>tree-based models</strong>.</p>
</div>
Using an `OrdinalEncoder` will output ordinal categories. This means
that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The
impact of violating this ordering assumption is really dependent on the
downstream models. Linear models will be impacted by misordered categories
while tree-based models will not.
You can still use an `OrdinalEncoder` with linear models but you need to be
sure that:
- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original
categories.
One-hot encoding categorical variables with high cardinality can cause
computational inefficiency in tree-based models. Because of this, it is not recommended
to use `OneHotEncoder` in such cases even if the original categories do not
have a given order.
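For example, a tree-based workflow could pair `OrdinalEncoder` with a random forest. This is a minimal sketch (the `RandomForestClassifier` choice is illustrative and not part of this workshop's code), fitting on the categorical columns only:
```
from sklearn.ensemble import RandomForestClassifier

# ordinal-encode the categorical columns, then fit a tree-based model on them
tree_model = make_pipeline(OrdinalEncoder(), RandomForestClassifier(n_jobs=-1))
tree_model.fit(features[categorical_columns], target)
```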
## Using numerical and categorical variables together
Now let's look at how to combine some of these tasks so we can preprocess both numeric and categorical data.
First, let's get our train & test data established:
```
# drop the duplicated column `"education-num"` as stated in the data exploration notebook
features = features.drop(columns='education-num')
# create selector object based on data type
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
# get columns of interest
numerical_columns = numerical_columns_selector(features)
categorical_columns = categorical_columns_selector(features)
# split into train & test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=123)
```
Scikit-learn provides a [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) class which will send specific
columns to a specific transformer, making it easy to fit a single predictive
model on a dataset that combines both kinds of variables together.
We first define the columns depending on their data type:
* **one-hot encoding** will be applied to categorical columns.
* **numerical scaling** will be applied to numerical columns, which will be standardized.
We then create our `ColumnTransformer` by specifying three values:
1. the preprocessor name,
2. the transformer, and
3. the columns.
First, let's create the preprocessors for the numerical and categorical
parts.
```
categorical_preprocessor = OneHotEncoder(handle_unknown="ignore")
numerical_preprocessor = StandardScaler()
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">We can use the <tt class="docutils literal">handle_unknown</tt> parameter to ignore rare categories that may show up in test data but were not present in the training data.</p>
</div>
Now, we create the transformer and associate each of these preprocessors
with their respective columns.
```
from sklearn.compose import ColumnTransformer
preprocessor = ColumnTransformer([
('one-hot-encoder', categorical_preprocessor, categorical_columns),
('standard_scaler', numerical_preprocessor, numerical_columns)
])
```
We can take a minute to represent graphically the structure of a
`ColumnTransformer`:

A `ColumnTransformer` does the following:
* It **splits the columns** of the original dataset based on the column names
or indices provided. We will obtain as many subsets as the number of
transformers passed into the `ColumnTransformer`.
* It **transforms each subset**. A specific transformer is applied to
each subset: it will internally call `fit_transform` or `transform`. The
output of this step is a set of transformed datasets.
* It then **concatenates the transformed datasets** into a single dataset.
The important thing is that `ColumnTransformer` is like any other
scikit-learn transformer. In particular it can be combined with a classifier
in a `Pipeline`:
```
model = make_pipeline(preprocessor, LogisticRegression(max_iter=500))
model
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">Including non-scaled data can cause some algorithms to iterate
longer in order to converge. Since our categorical features are not scaled it's often recommended to increase the number of allowed iterations for linear models.</p>
</div>
```
# fit our model
_ = model.fit(X_train, y_train)
# score on test set
model.score(X_test, y_test)
```
## Wrapping up
Unfortunately, we only have time to scratch the surface of feature engineering in this workshop. However, this module should provide you with a strong foundation of how to apply the more common feature preprocessing tasks.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">Scikit-learn provides many feature engineering options. Learn more here: <a href="https://scikit-learn.org/stable/modules/preprocessing.html">https://scikit-learn.org/stable/modules/preprocessing.html</a></p>
</div>
In this module we learned how to:
- normalize numerical features with `StandardScaler`,
- ordinal and one-hot encode categorical features with `OrdinalEncoder` and `OneHotEncoder`, and
- chain feature preprocessing and model training steps together with `ColumnTransformer` and `make_pipeline`.
https://keras.io/examples/structured_data/structured_data_classification_from_scratch/

(Translated note from the original Portuguese: change the names of things; edit this the way I want // so it stops serving as just the example going forward.)

```
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
import pydot
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)
dataframe.head()
val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
def dataframe_to_dataset(dataframe):
    dataframe = dataframe.copy()
    labels = dataframe.pop("target")
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    ds = ds.shuffle(buffer_size=len(dataframe))
    return ds


train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)
```

Each dataset yields a (features, target) pair, which can be inspected with the loop below (translated note from the original Portuguese: understand this better):

```
for x, y in train_ds.take(1):
    print("Input:", x)
    print("Target:", y)
```

```
train_ds = train_ds.batch(32)
val_ds = val_ds.batch(32)
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras.layers.experimental.preprocessing import CategoryEncoding
from tensorflow.keras.layers.experimental.preprocessing import StringLookup


def encode_numerical_feature(feature, name, dataset):
    # Create a Normalization layer for our feature
    normalizer = Normalization()

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the statistics of the data
    normalizer.adapt(feature_ds)

    # Normalize the input feature
    encoded_feature = normalizer(feature)
    return encoded_feature


def encode_string_categorical_feature(feature, name, dataset):
    # Create a StringLookup layer which will turn strings into integer indices
    index = StringLookup()

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the set of possible string values and assign them a fixed integer index
    index.adapt(feature_ds)

    # Turn the string input into integer indices
    encoded_feature = index(feature)

    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")

    # Prepare a dataset of indices
    feature_ds = feature_ds.map(index)

    # Learn the space of possible indices
    encoder.adapt(feature_ds)

    # Apply one-hot encoding to our indices
    encoded_feature = encoder(encoded_feature)
    return encoded_feature


def encode_integer_categorical_feature(feature, name, dataset):
    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the space of possible indices
    encoder.adapt(feature_ds)

    # Apply one-hot encoding to our indices
    encoded_feature = encoder(feature)
    return encoded_feature


# Categorical features encoded as integers
sex = keras.Input(shape=(1,), name="sex", dtype="int64")
cp = keras.Input(shape=(1,), name="cp", dtype="int64")
fbs = keras.Input(shape=(1,), name="fbs", dtype="int64")
restecg = keras.Input(shape=(1,), name="restecg", dtype="int64")
exang = keras.Input(shape=(1,), name="exang", dtype="int64")
ca = keras.Input(shape=(1,), name="ca", dtype="int64")

# Categorical feature encoded as string
thal = keras.Input(shape=(1,), name="thal", dtype="string")

# Numerical features
age = keras.Input(shape=(1,), name="age")
trestbps = keras.Input(shape=(1,), name="trestbps")
chol = keras.Input(shape=(1,), name="chol")
thalach = keras.Input(shape=(1,), name="thalach")
oldpeak = keras.Input(shape=(1,), name="oldpeak")
slope = keras.Input(shape=(1,), name="slope")

all_inputs = [
    sex,
    cp,
    fbs,
    restecg,
    exang,
    ca,
    thal,
    age,
    trestbps,
    chol,
    thalach,
    oldpeak,
    slope,
]

# Integer categorical features
sex_encoded = encode_integer_categorical_feature(sex, "sex", train_ds)
cp_encoded = encode_integer_categorical_feature(cp, "cp", train_ds)
fbs_encoded = encode_integer_categorical_feature(fbs, "fbs", train_ds)
restecg_encoded = encode_integer_categorical_feature(restecg, "restecg", train_ds)
exang_encoded = encode_integer_categorical_feature(exang, "exang", train_ds)
ca_encoded = encode_integer_categorical_feature(ca, "ca", train_ds)

# String categorical features
thal_encoded = encode_string_categorical_feature(thal, "thal", train_ds)

# Numerical features
age_encoded = encode_numerical_feature(age, "age", train_ds)
trestbps_encoded = encode_numerical_feature(trestbps, "trestbps", train_ds)
chol_encoded = encode_numerical_feature(chol, "chol", train_ds)
thalach_encoded = encode_numerical_feature(thalach, "thalach", train_ds)
oldpeak_encoded = encode_numerical_feature(oldpeak, "oldpeak", train_ds)
slope_encoded = encode_numerical_feature(slope, "slope", train_ds)

all_features = layers.concatenate(
    [
        sex_encoded,
        cp_encoded,
        fbs_encoded,
        restecg_encoded,
        exang_encoded,
        slope_encoded,
        ca_encoded,
        thal_encoded,
        age_encoded,
        trestbps_encoded,
        chol_encoded,
        thalach_encoded,
        oldpeak_encoded,
    ]
)
x = layers.Dense(32, activation="relu")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=50, validation_data=val_ds)
sample = {
    "age": 60,
    "sex": 1,
    "cp": 1,
    "trestbps": 145,
    "chol": 233,
    "fbs": 1,
    "restecg": 2,
    "thalach": 150,
    "exang": 0,
    "oldpeak": 2.3,
    "slope": 3,
    "ca": 0,
    "thal": "fixed",
}

input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = model.predict(input_dict)

print(
    "This particular patient had a %.1f percent probability "
    "of having a heart disease, as evaluated by our model." % (100 * predictions[0][0],)
)
```
code
| 0.768863 | null | null | null | null |
|
Greyscale ℓ1-TV Denoising
=========================
This example demonstrates the use of class [tvl1.TVL1Denoise](http://sporco.rtfd.org/en/latest/modules/sporco.admm.tvl1.html#sporco.admm.tvl1.TVL1Denoise) for removing salt & pepper noise from a greyscale image using Total Variation regularization with an ℓ1 data fidelity term (ℓ1-TV denoising).
```
from __future__ import print_function
from builtins import input
import numpy as np
from sporco.admm import tvl1
from sporco import util
from sporco import signal
from sporco import metric
from sporco import plot
plot.config_notebook_plotting()
```
Load reference image.
```
img = util.ExampleImages().image('monarch.png', scaled=True,
idxexp=np.s_[:,160:672], gray=True)
```
Construct test image corrupted by 20% salt & pepper noise.
```
np.random.seed(12345)
imgn = signal.spnoise(img, 0.2)
```
Set regularization parameter and options for ℓ1-TV denoising solver. The regularization parameter used here has been manually selected for good performance.
```
lmbda = 8e-1
opt = tvl1.TVL1Denoise.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 5e-3, 'gEvalY': False,
'AutoRho': {'Enabled': True}})
```
Create solver object and solve, returning the denoised image ``imgr``.
```
b = tvl1.TVL1Denoise(imgn, lmbda, opt)
imgr = b.solve()
```
Display solve time and denoising performance.
```
print("TVL1Denoise solve time: %5.2f s" % b.timer.elapsed('solve'))
print("Noisy image PSNR: %5.2f dB" % metric.psnr(img, imgn))
print("Denoised image PSNR: %5.2f dB" % metric.psnr(img, imgr))
```
Display reference, corrupted, and denoised images.
```
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.imview(img, title='Reference', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(imgn, title='Corrupted', fig=fig)
plot.subplot(1, 3, 3)
plot.imview(imgr, title=r'Restored ($\ell_1$-TV)', fig=fig)
fig.show()
```
Get iterations statistics from solver object and plot functional value, ADMM primary and dual residuals, and automatically adjusted ADMM penalty parameter against the iteration number.
```
its = b.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(1, 3, 3)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
```
```
%matplotlib inline
```
# Out-of-core classification of text documents
This is an example showing how scikit-learn can be used for classification
using an out-of-core approach: learning from data that doesn't fit into main
memory. We make use of an online classifier, i.e., one that supports the
partial_fit method, that will be fed with batches of examples. To guarantee
that the feature space remains the same over time, we leverage a
HashingVectorizer that will project each example into the same feature space.
This is especially useful in the case of text classification where new
features (words) may appear in each batch.
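The core pattern looks roughly like this (a sketch only; `stream_of_minibatches` is a placeholder iterator, not something defined in this example):
```
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2 ** 18)  # stateless, so every batch maps to the same feature space
clf = SGDClassifier()
for texts, labels in stream_of_minibatches:  # any iterator of (list of str, list of int) pairs
    X = vectorizer.transform(texts)
    clf.partial_fit(X, labels, classes=[0, 1])
```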
```
# Authors: Eustache Diemert <[email protected]>
# @FedericoV <https://github.com/FedericoV/>
# License: BSD 3 clause
from glob import glob
import itertools
import os.path
import re
import tarfile
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from html.parser import HTMLParser
from urllib.request import urlretrieve
from sklearn.datasets import get_data_home
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
def _not_in_sphinx():
# Hack to detect whether we are running by the sphinx builder
return '__file__' in globals()
```
Reuters Dataset related routines
--------------------------------
The dataset used in this example is Reuters-21578 as provided by the UCI ML
repository. It will be automatically downloaded and uncompressed on first
run.
```
class ReutersParser(HTMLParser):
"""Utility class to parse a SGML file and yield documents one at a time."""
def __init__(self, encoding='latin-1'):
HTMLParser.__init__(self)
self._reset()
self.encoding = encoding
def handle_starttag(self, tag, attrs):
method = 'start_' + tag
getattr(self, method, lambda x: None)(attrs)
def handle_endtag(self, tag):
method = 'end_' + tag
getattr(self, method, lambda: None)()
def _reset(self):
self.in_title = 0
self.in_body = 0
self.in_topics = 0
self.in_topic_d = 0
self.title = ""
self.body = ""
self.topics = []
self.topic_d = ""
def parse(self, fd):
self.docs = []
for chunk in fd:
self.feed(chunk.decode(self.encoding))
for doc in self.docs:
yield doc
self.docs = []
self.close()
def handle_data(self, data):
if self.in_body:
self.body += data
elif self.in_title:
self.title += data
elif self.in_topic_d:
self.topic_d += data
def start_reuters(self, attributes):
pass
def end_reuters(self):
self.body = re.sub(r'\s+', r' ', self.body)
self.docs.append({'title': self.title,
'body': self.body,
'topics': self.topics})
self._reset()
def start_title(self, attributes):
self.in_title = 1
def end_title(self):
self.in_title = 0
def start_body(self, attributes):
self.in_body = 1
def end_body(self):
self.in_body = 0
def start_topics(self, attributes):
self.in_topics = 1
def end_topics(self):
self.in_topics = 0
def start_d(self, attributes):
self.in_topic_d = 1
def end_d(self):
self.in_topic_d = 0
self.topics.append(self.topic_d)
self.topic_d = ""
def stream_reuters_documents(data_path=None):
"""Iterate over documents of the Reuters dataset.
The Reuters archive will automatically be downloaded and uncompressed if
the `data_path` directory does not exist.
Documents are represented as dictionaries with 'body' (str),
'title' (str), 'topics' (list(str)) keys.
"""
DOWNLOAD_URL = ('http://archive.ics.uci.edu/ml/machine-learning-databases/'
'reuters21578-mld/reuters21578.tar.gz')
ARCHIVE_FILENAME = 'reuters21578.tar.gz'
if data_path is None:
data_path = os.path.join(get_data_home(), "reuters")
if not os.path.exists(data_path):
"""Download the dataset."""
print("downloading dataset (once and for all) into %s" %
data_path)
os.mkdir(data_path)
def progress(blocknum, bs, size):
total_sz_mb = '%.2f MB' % (size / 1e6)
current_sz_mb = '%.2f MB' % ((blocknum * bs) / 1e6)
if _not_in_sphinx():
sys.stdout.write(
'\rdownloaded %s / %s' % (current_sz_mb, total_sz_mb))
archive_path = os.path.join(data_path, ARCHIVE_FILENAME)
urlretrieve(DOWNLOAD_URL, filename=archive_path,
reporthook=progress)
if _not_in_sphinx():
sys.stdout.write('\r')
print("untarring Reuters dataset...")
tarfile.open(archive_path, 'r:gz').extractall(data_path)
print("done.")
parser = ReutersParser()
for filename in glob(os.path.join(data_path, "*.sgm")):
for doc in parser.parse(open(filename, 'rb')):
yield doc
```
Main
----
Create the vectorizer and limit the number of features to a reasonable
maximum
```
vectorizer = HashingVectorizer(decode_error='ignore', n_features=2 ** 18,
alternate_sign=False)
# Iterator over parsed Reuters SGML files.
data_stream = stream_reuters_documents()
# We learn a binary classification between the "acq" class and all the others.
# "acq" was chosen as it is more or less evenly distributed in the Reuters
# files. For other datasets, one should take care of creating a test set with
# a realistic portion of positive instances.
all_classes = np.array([0, 1])
positive_class = 'acq'
# Here are some classifiers that support the `partial_fit` method
partial_fit_classifiers = {
'SGD': SGDClassifier(max_iter=5),
'Perceptron': Perceptron(),
'NB Multinomial': MultinomialNB(alpha=0.01),
'Passive-Aggressive': PassiveAggressiveClassifier(),
}
def get_minibatch(doc_iter, size, pos_class=positive_class):
"""Extract a minibatch of examples, return a tuple X_text, y.
Note: size is before excluding invalid docs with no topics assigned.
"""
data = [('{title}\n\n{body}'.format(**doc), pos_class in doc['topics'])
for doc in itertools.islice(doc_iter, size)
if doc['topics']]
if not len(data):
return np.asarray([], dtype=int), np.asarray([], dtype=int)
X_text, y = zip(*data)
return X_text, np.asarray(y, dtype=int)
def iter_minibatches(doc_iter, minibatch_size):
"""Generator of minibatches."""
X_text, y = get_minibatch(doc_iter, minibatch_size)
while len(X_text):
yield X_text, y
X_text, y = get_minibatch(doc_iter, minibatch_size)
# test data statistics
test_stats = {'n_test': 0, 'n_test_pos': 0}
# First we hold out a number of examples to estimate accuracy
n_test_documents = 1000
tick = time.time()
X_test_text, y_test = get_minibatch(data_stream, 1000)
parsing_time = time.time() - tick
tick = time.time()
X_test = vectorizer.transform(X_test_text)
vectorizing_time = time.time() - tick
test_stats['n_test'] += len(y_test)
test_stats['n_test_pos'] += sum(y_test)
print("Test set is %d documents (%d positive)" % (len(y_test), sum(y_test)))
def progress(cls_name, stats):
"""Report progress information, return a string."""
duration = time.time() - stats['t0']
s = "%20s classifier : \t" % cls_name
s += "%(n_train)6d train docs (%(n_train_pos)6d positive) " % stats
s += "%(n_test)6d test docs (%(n_test_pos)6d positive) " % test_stats
s += "accuracy: %(accuracy).3f " % stats
s += "in %.2fs (%5d docs/s)" % (duration, stats['n_train'] / duration)
return s
cls_stats = {}
for cls_name in partial_fit_classifiers:
stats = {'n_train': 0, 'n_train_pos': 0,
'accuracy': 0.0, 'accuracy_history': [(0, 0)], 't0': time.time(),
'runtime_history': [(0, 0)], 'total_fit_time': 0.0}
cls_stats[cls_name] = stats
get_minibatch(data_stream, n_test_documents)
# Discard test set
# We will feed the classifier with mini-batches of 1000 documents; this means
# we have at most 1000 docs in memory at any time. The smaller the document
# batch, the bigger the relative overhead of the partial fit methods.
minibatch_size = 1000
# Create the data_stream that parses Reuters SGML files and iterates on
# documents as a stream.
minibatch_iterators = iter_minibatches(data_stream, minibatch_size)
total_vect_time = 0.0
# Main loop : iterate on mini-batches of examples
for i, (X_train_text, y_train) in enumerate(minibatch_iterators):
tick = time.time()
X_train = vectorizer.transform(X_train_text)
total_vect_time += time.time() - tick
for cls_name, cls in partial_fit_classifiers.items():
tick = time.time()
# update estimator with examples in the current mini-batch
cls.partial_fit(X_train, y_train, classes=all_classes)
# accumulate test accuracy stats
cls_stats[cls_name]['total_fit_time'] += time.time() - tick
cls_stats[cls_name]['n_train'] += X_train.shape[0]
cls_stats[cls_name]['n_train_pos'] += sum(y_train)
tick = time.time()
cls_stats[cls_name]['accuracy'] = cls.score(X_test, y_test)
cls_stats[cls_name]['prediction_time'] = time.time() - tick
acc_history = (cls_stats[cls_name]['accuracy'],
cls_stats[cls_name]['n_train'])
cls_stats[cls_name]['accuracy_history'].append(acc_history)
run_history = (cls_stats[cls_name]['accuracy'],
total_vect_time + cls_stats[cls_name]['total_fit_time'])
cls_stats[cls_name]['runtime_history'].append(run_history)
if i % 3 == 0:
print(progress(cls_name, cls_stats[cls_name]))
if i % 3 == 0:
print('\n')
```
Plot results
------------
The plot represents the learning curve of the classifier: the evolution
of classification accuracy over the course of the mini-batches. Accuracy is
measured on the first 1000 samples, held out as a validation set.
To limit the memory consumption, we queue examples up to a fixed amount
before feeding them to the learner.
```
def plot_accuracy(x, y, x_legend):
"""Plot accuracy as a function of x."""
x = np.array(x)
y = np.array(y)
plt.title('Classification accuracy as a function of %s' % x_legend)
plt.xlabel('%s' % x_legend)
plt.ylabel('Accuracy')
plt.grid(True)
plt.plot(x, y)
rcParams['legend.fontsize'] = 10
cls_names = list(sorted(cls_stats.keys()))
# Plot accuracy evolution
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with #examples
accuracy, n_examples = zip(*stats['accuracy_history'])
plot_accuracy(n_examples, accuracy, "training examples (#)")
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with runtime
accuracy, runtime = zip(*stats['runtime_history'])
plot_accuracy(runtime, accuracy, 'runtime (s)')
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
# Plot fitting times
plt.figure()
fig = plt.gcf()
cls_runtime = [stats['total_fit_time']
for cls_name, stats in sorted(cls_stats.items())]
cls_runtime.append(total_vect_time)
cls_names.append('Vectorization')
bar_colors = ['b', 'g', 'r', 'c', 'm', 'y']
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=10)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Training Times')
def autolabel(rectangles):
"""attach some text vi autolabel on rectangles."""
for rect in rectangles:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2.,
1.05 * height, '%.4f' % height,
ha='center', va='bottom')
plt.setp(plt.xticks()[1], rotation=30)
autolabel(rectangles)
plt.tight_layout()
plt.show()
# Plot prediction times
plt.figure()
cls_runtime = []
cls_names = list(sorted(cls_stats.keys()))
for cls_name, stats in sorted(cls_stats.items()):
cls_runtime.append(stats['prediction_time'])
cls_runtime.append(parsing_time)
cls_names.append('Read/Parse\n+Feat.Extr.')
cls_runtime.append(vectorizing_time)
cls_names.append('Hashing\n+Vect.')
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=8)
plt.setp(plt.xticks()[1], rotation=30)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Prediction Times (%d instances)' % n_test_documents)
autolabel(rectangles)
plt.tight_layout()
plt.show()
```
# Twitter Sentiment Analysis for Indian Election 2019
**Abstract**<br>
The goal of this project is to perform sentiment analysis for the Indian elections. The data used is tweets extracted from Twitter. The BJP and Congress are the two major political parties contesting the election, and the dataset consists of tweets for both parties. The tweets are labeled as positive or negative based on the sentiment score obtained using the TextBlob library. This data is used to build models that can classify new tweets as positive or negative. The models built are a Bidirectional RNN and a GloVe word-embedding model.
**Implementation**<br>
```
import os
import pandas as pd
import tweepy
import re
import string
from textblob import TextBlob
import preprocessor as p
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
nltk.download('stopwords')  # required for stopwords.words('english') below
import pandas as pd
from nltk.tokenize import word_tokenize
from string import punctuation
from nltk.corpus import stopwords
import keras  # used later for keras.optimizers and keras.regularizers
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding, SimpleRNN,Input
from keras.models import Sequential,Model
from keras.preprocessing import sequence
from keras.layers import Dense,Dropout
from keras.layers import Embedding, Flatten, Dense,Conv1D,MaxPooling1D
from sklearn import preprocessing
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import itertools
import seaborn as sns
from sklearn.metrics import confusion_matrix
from keras.utils import to_categorical
from collections import Counter
import tensorflow as tf
from keras.layers import LSTM, Bidirectional, Dropout
```
**Data Creation**
We use Tweepy API to access Twitter and download tweets. Tweepy supports accessing Twitter via Basic Authentication and the newer method, OAuth. Twitter has stopped accepting Basic Authentication so OAuth is now the only way to use the Twitter API.
The code below downloads tweets from Twitter based on the keywords that we pass. Each tweet's sentiment score is obtained using the TextBlob library. The tweets are then preprocessed; the preprocessing involves removing emoticons and stopwords.
```
consumer_key= '9oO3eQOBkuvCRPqMsFvnShRrq'
consumer_secret= 'BMWGbdC05jDcsWU5oI7AouWvwWmi46b2bD8zlnWXaaRC7832ep'
access_token='313324341-yQa0jL5IWmUKT15M6qM53uGeGW7FGcy1xAgx5Usy'
access_token_secret='OyjmhcMCbxGqBQAWzq12S0zrGYUvjChsZKavMYmPCAlrE'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# output CSV file locations for the downloaded tweets
congress_tweets = "C:/Users/Abhishek/Election Twitter Sentiment analysis/congress_test.csv"
bjp_tweets = "C:/Users/Abhishek/Election Twitter Sentiment analysis/bjp_test_new.csv"
#set two date variables for date range
start_date = '2019-04-1'
end_date = '2019-04-20'
```
**Data cleaning scripts**
```
# Happy Emoticons
emoticons_happy = set([
':-)', ':)', ';)', ':o)', ':]', ':3', ':c)', ':>', '=]', '8)', '=)', ':}',
':^)', ':-D', ':D', '8-D', '8D', 'x-D', 'xD', 'X-D', 'XD', '=-D', '=D',
'=-3', '=3', ':-))', ":'-)", ":')", ':*', ':^*', '>:P', ':-P', ':P', 'X-P',
'x-p', 'xp', 'XP', ':-p', ':p', '=p', ':-b', ':b', '>:)', '>;)', '>:-)',
'<3'
])
# Sad Emoticons
emoticons_sad = set([
':L', ':-/', '>:/', ':S', '>:[', ':@', ':-(', ':[', ':-||', '=L', ':<',
':-[', ':-<', '=\\', '=/', '>:(', ':(', '>.<', ":'-(", ":'(", ':\\', ':-c',
':c', ':{', '>:\\', ';('
])
#Emoji patterns
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
"]+", flags=re.UNICODE)
#combine sad and happy emoticons
emoticons = emoticons_happy.union(emoticons_sad)
#method clean_tweets()
def clean_tweets(tweet):
stop_words = set(stopwords.words('english'))
word_tokens = word_tokenize(tweet)
#after tweepy preprocessing the colon left remain after removing mentions
#or RT sign in the beginning of the tweet
tweet = re.sub(r':', '', tweet)
tweet = re.sub(r'…', '', tweet)
#replace consecutive non-ASCII characters with a space
tweet = re.sub(r'[^\x00-\x7F]+',' ', tweet)
#remove emojis from tweet
tweet = emoji_pattern.sub(r'', tweet)
#filter using NLTK library append it to a string
filtered_tweet = [w for w in word_tokens if not w in stop_words]
filtered_tweet = []
#looping through conditions
for w in word_tokens:
#check tokens against stop words , emoticons and punctuations
if w not in stop_words and w not in emoticons and w not in string.punctuation:
filtered_tweet.append(w)
return ' '.join(filtered_tweet)
#print(word_tokens)
#print(filtered_sentence)
#method write_tweets()
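# Note: COLS (the list of output column names for the CSV) is assumed to be
# defined earlier in the notebook; it must match the order of the fields
# appended to new_entry below.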
def write_tweets(keyword, file):
# If the file exists, then read the existing data from the CSV file.
if os.path.exists(file):
df = pd.read_csv(file, header=0)
else:
df = pd.DataFrame(columns=COLS)
#page attribute in tweepy.cursor and iteration
for page in tweepy.Cursor(api.search, q=keyword,
count=200, include_rts=False, since=start_date).pages(50):
for status in page:
new_entry = []
status = status._json
## check whether the tweet is in english or skip to the next tweet
if status['lang'] != 'en':
continue
#when run the code, below code replaces the retweet amount and
#no of favorires that are changed since last download.
if status['created_at'] in df['created_at'].values:
i = df.loc[df['created_at'] == status['created_at']].index[0]
if status['favorite_count'] != df.at[i, 'favorite_count'] or \
status['retweet_count'] != df.at[i, 'retweet_count']:
df.at[i, 'favorite_count'] = status['favorite_count']
df.at[i, 'retweet_count'] = status['retweet_count']
continue
#tweepy preprocessing called for basic preprocessing
#clean_text = p.clean(status['text'])
#call clean_tweet method for extra preprocessing
filtered_tweet=clean_tweets(status['text'])
#pass textBlob method for sentiment calculations
blob = TextBlob(filtered_tweet)
Sentiment = blob.sentiment
#seperate polarity and subjectivity in to two variables
polarity = Sentiment.polarity
subjectivity = Sentiment.subjectivity
#new entry append
new_entry += [status['id'], status['created_at'],
status['source'], status['text'],filtered_tweet, Sentiment,polarity,subjectivity, status['lang'],
status['favorite_count'], status['retweet_count']]
#to append original author of the tweet
new_entry.append(status['user']['screen_name'])
try:
is_sensitive = status['possibly_sensitive']
except KeyError:
is_sensitive = None
new_entry.append(is_sensitive)
# hashtagas and mentiones are saved using comma separted
hashtags = ", ".join([hashtag_item['text'] for hashtag_item in status['entities']['hashtags']])
new_entry.append(hashtags)
mentions = ", ".join([mention['screen_name'] for mention in status['entities']['user_mentions']])
new_entry.append(mentions)
#get location of the tweet if possible
try:
location = status['user']['location']
except TypeError:
location = ''
new_entry.append(location)
try:
coordinates = [coord for loc in status['place']['bounding_box']['coordinates'] for coord in loc]
except TypeError:
coordinates = None
new_entry.append(coordinates)
single_tweet_df = pd.DataFrame([new_entry], columns=COLS)
df = df.append(single_tweet_df, ignore_index=True)
csvFile = open(file, 'a' ,encoding='utf-8')
df.to_csv(csvFile, mode='a', columns=COLS, index=False, encoding="utf-8")
#declare keywords as a query for three categories
Congress_keywords = '#IndianNationalCongress OR #RahulGandhi OR #SoniaGandhi OR #INC'
BJP_keywords = '#BJP OR #Modi OR #AmitShah OR #BhartiyaJantaParty'
```
Creates two CSV files. First saves tweets for BJP and second saves tweets for Congress.
```
#call main method passing keywords and file path
write_tweets(Congress_keywords, congress_tweets)
write_tweets(BJP_keywords, bjp_tweets)
```
**LABELING TWEETS AS POSITIVE NEGATIVE**<br>
TextBlob gives a sentiment polarity in the range of -1 to +1. For our topic of election prediction, neutral tweets are of no use as they do not provide any valuable information. Thus, for simplicity, I have labeled tweets as only positive or negative: tweets with polarity greater than 0 are labelled positive (1) and all others negative (0).
```
bjp_df['polarity'] = bjp_df['polarity'].apply(lambda x: 1 if x > 0 else 0)
congress_df['polarity'] = congress_df['polarity'].apply(lambda x: 1 if x > 0 else 0)
bjp_df['polarity'].value_counts()
```

```
congress_df['polarity'].value_counts()
```

## **RESAMPLING THE DATA** <br>
Since the ratio of negative to positive tweets is skewed, our dataset is not balanced, which would bias the model during training. To avoid this I resampled the data: new data was downloaded from Twitter using the above procedure, and for both parties only positive tweets were sampled and appended to the main files to balance the data. After balancing, the count of positive and negative tweets for both parties is as follows. The code for the resampling procedure can be found in the notebook Data_Labeling.ipynb; a brief sketch of the upsampling step is shown below.
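A minimal sketch of that upsampling step (the dataframes `bjp_new` and `congress_new` holding the additionally downloaded tweets are placeholders; the full procedure is in Data_Labeling.ipynb):
```
# keep only the positive tweets from the newly downloaded data ...
bjp_extra = bjp_new[bjp_new['polarity'] == 1]
congress_extra = congress_new[congress_new['polarity'] == 1]

# ... and append them to the main dataframes to balance the classes
bjp_df = pd.concat([bjp_df, bjp_extra], ignore_index=True)
congress_df = pd.concat([congress_df, congress_extra], ignore_index=True)
```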

**CREATING FINAL DATASET**
```
frames = [bjp, congress]
election_data = pd.concat(frames)
```
The final dataset that will be used for our analysis saved in a csv file. That file can be loaded used to run our models. The final dataset looks as follows.

**TOKENIZING DATA**
We tokenize the text and cap the maximum sequence length at 1000.

**TRAIN TEST SPLIT WITH 80:20 RATIO**
```
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
nb_validation_samples = int(.20 * data.shape[0])
x_train = data[:-nb_validation_samples]
y_train = labels[:-nb_validation_samples]
x_val = data[-nb_validation_samples:]
y_val = labels[-nb_validation_samples:]
```
**CREATING EMBEDDING MATRIX WITH HELP OF PRETRAINED MODEL: GLOVE**
Word embeddings are text converted into numbers, and there are a number of ways to produce the numeric representation.<br>
Types of embeddings: frequency based and prediction based.<br>Frequency based: TF-IDF, co-occurrence matrix<br>
Prediction based: CBOW (continuous bag of words), skip-gram model
Pre-trained word vectors: word2vec, GloVe
For this experiment, word embedding is done with the pre-trained GloVe word vectors.
GloVe version used: 100-dimensional GloVe embeddings of 400k words computed on a 2014 dump of English Wikipedia. Training is performed on an aggregated global word-word co-occurrence matrix, giving us a vector space with meaningful substructures.

```
embedding_matrix = np.zeros((len(word_index) + 1, 100))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
```
Creating an embedding layer using GloVe
```
embedding_layer = Embedding(len(word_index) + 1,
100,
weights=[embedding_matrix],
input_length=1000,
trainable=False)
```
# Model 1
**Glove Word Embedding model**
GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. GloVe can be used to find relations between words like synonyms, company - product relations, zip codes and cities, etc. It is also used by the spaCy model to build semantic word embeddings/feature vectors while computing the top list words that match with distance measures such as Cosine Similarity and the Euclidean distance approach.
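As a small illustration of such distance measures, the cosine similarity between two GloVe vectors can be computed directly (this assumes the `embeddings_index` built above and that both words are in the 400k-word vocabulary):
```
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings_index['election'], embeddings_index['vote']))
```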
```
def model_creation():
input_layer = Input(shape=(1000,), dtype='int32')
embed_layer = embedding_layer(input_layer)
x = Dense(100,activation='relu')(embed_layer)
x = Dense(50,activation='relu', kernel_regularizer=keras.regularizers.l2(0.002))(x)
x = Flatten()(x)
x = Dense(50,activation='relu', kernel_regularizer=keras.regularizers.l2(0.002))(x)
x = Dropout(0.5)(x)
x = Dense(50, activation='relu')(x)
x = Dropout(0.5)(x)
#x = Dense(512, activation='relu')(x)
#x = Dropout(0.4)(x)
final_layer = Dense(1, activation='sigmoid')(x)
opt = keras.optimizers.Adam(lr= learning_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model = Model(input_layer,final_layer)
    model.compile(loss='binary_crossentropy',  # binary target with a single sigmoid output
optimizer=opt,
metrics=['acc'])
return model
```
**MODEL 1 Architecture**
```
learning_rate = 0.0001
batch_size = 1024
epochs = 10
model_glove = model_creation()
```


**SAVE BEST MODEL AND WEIGHTS for Model1**
```
# serialize model to JSON
model_json = model_glove.to_json()
with open(".\\SavedModels\\Model_glove.h5", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model_glove.save_weights(".\\SavedModels\\Weights_glove.h5")
```
**MODEL1 LOSS AND ACCURAY**

**MODEL1 PERFORMANCE**
```
def plot_modelacc(fit_model):
with plt.style.context('ggplot'):
plt.plot(fit_model.history['acc'])
plt.plot(fit_model.history['val_acc'])
plt.ylim(0,1)
plt.title("MODEL ACCURACY")
plt.xlabel("# of EPOCHS")
plt.ylabel("ACCURACY")
plt.legend(['train', 'test'], loc='upper left')
return plt.show()
def plot_model_loss(fit_model):
with plt.style.context('ggplot'):
plt.plot(fit_model.history['loss'])
plt.plot(fit_model.history['val_loss'])
plt.title("MODEL LOSS")
plt.xlabel("# of EPOCHS")
plt.ylabel("LOSS")
plt.legend(['train', 'test'], loc='upper left')
return plt.show()
```

**CONFUSION MATRIX**<br>
A confusion matrix will show us the how the model predicted with respect to the acutal output.

True Positives: 870 (Predicted True and True in reality)<br>
True Negative: 1141(Predicted False and False in realtity)<br>
False Positive: 33 (Predicted Positve but Negative in reality)<br>
False Negative: 29 (Predicted Negative but Positive in reality)
# Model 2
**Bidirectional RNN model**
Bidirectional Recurrent Neural Networks (BRNNs) connect two hidden layers of opposite directions to the same output. With this architecture, the output layer can get information from past (backward) and future (forward) states simultaneously. Introduced in 1997 by Schuster and Paliwal, BRNNs increase the amount of input information available to the network. For example, multilayer perceptrons (MLPs) and time delay neural networks (TDNNs) have limitations on input data flexibility, as they require their input data to be fixed. Standard recurrent neural networks (RNNs) also have restrictions, as future input information cannot be reached from the current state. In contrast, BRNNs do not require their input data to be fixed, and their future input information is reachable from the current state.
BRNNs are especially useful when the context of the input is needed. For example, in handwriting recognition, performance can be enhanced by knowledge of the letters located before and after the current letter.
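A minimal sketch of such a model in Keras is shown below, reusing the GloVe embedding layer built earlier; the layer sizes are illustrative only, and the exact architecture used in this project is the one shown in the figures under "MODEL 2 Architecture" (a two-unit softmax output is used here so that the column indexing in the prediction step later applies):
```
model_sketch = Sequential()
model_sketch.add(Embedding(len(word_index) + 1, 100,
                           weights=[embedding_matrix],
                           input_length=1000, trainable=False))
model_sketch.add(Bidirectional(LSTM(64)))
model_sketch.add(Dropout(0.5))
model_sketch.add(Dense(2, activation='softmax'))
model_sketch.compile(loss='categorical_crossentropy', optimizer='adam',
                     metrics=['acc'])
```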
**MODEL 2 Architecture**


**SAVING BEST MODEL2 AND ITS WEIGHTS**
```
# serialize model to JSON
model_json = model.to_json()
with open(".\\SavedModels\\Model_Bidir_LSTM.h5", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights(".\\SavedModels\\Weights_bidir_LSTM.h5")
print("Saved model to disk")
```
**MODEL 2 LOSS AND ACCURACY**


**MODEL 2 CONFUSION MATRIX**

True Positives: 887(Predicted True and True in reality)
True Negative: 1140(Predicted False and False in realtity)
False Positive: 35 (Predicted Positve but Negative in reality)
False Negative: 11 (Predicted Negative but Positive in reality)
**PREDICTION USING THE BEST MODEL**
The models were compared based on test loss and test accuracy. The Bidirectional RNN, despite its relatively simple architecture, performed slightly better than the GloVe model, so we use it to make the predictions for the tweets that will be used to infer the election results.
Load the test data on which the predictions will be made using our best model. The data for both the parties was collected using the same procedure like above.
```
congress_test = pd.read_csv('congress_test.csv')
bjp_test = pd.read_csv('bjp_test.csv')
```
We took equal samples from both files: 2000 tweets for Congress and 2000 for BJP. The party that gets the higher number of positive tweets can be inferred to have the higher probability of winning the 2019 election.
```
congress_test =congress_test[:2000]
bjp_test = bjp_test[0:2000]
```
Tokenize the tweets in the same way as was done for the Bidirectional RNN model.
```
congress_inputs = tokenze_data(congress_inputs)
bjp_inputs = tokenze_data(bjp_inputs)
```
**LOAD THE BEST MODEL (BIDIRECTIONAL LSTM)**
```
from keras.models import model_from_json
# load json and create model
json_file = open(".\\SavedModels\\Model_Bidir_LSTM.h5", 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights(".\\SavedModels\\Weights_bidir_LSTM.h5")
print("Loaded model from disk")
```
**SENTIMENT PREDICTION USING THE MODEL**
```
congress_prediction = loaded_model.predict(congress_inputs)
bjp_prediction = loaded_model.predict(bjp_inputs)
```
If the predicted probability for a class is greater than 0.5, the sentiment is assigned to that class. Since we are concerned only with the count of positive sentiments, we check the second output column for our inference.
```
congress_pred = (congress_prediction>0.5)
bjp_pred = (bjp_prediction>0.5)
def get_predictions(party_pred):
x = 0
for i in party_pred:
if(i[1]==True):
x+=1
return x
```

**CONCLUSION**
Just like the training data, the majority of the tweets have a negative sentiment attached to them. After feeding 2000 tweets each for Congress and BJP, the model predicted that BJP has 660 positive tweets while Congress has 416 positive tweets.<br><br> This indicates that the contest this year would be close and that the chances of BJP winning an outright majority, as in the previous general election, are lower. This is corroborated by the poor performance of the BJP in the recent state elections, where it lost power in three major Hindi-speaking states: Rajasthan, Madhya Pradesh and Chhattisgarh. <br><br>
**FUTURE SCOPE**
For this project, only a small sample of Twitter data was considered for the analysis, and it is difficult to give a reliable estimate based on the limited amount of information we had access to. For future work, we can start by increasing the size of our dataset; in addition to Twitter, data can also be obtained from sources such as Facebook and news websites. Apart from these, we can try different models such as a Bidirectional RNN with an attention mechanism, or implement BERT, which is currently the state of the art for various Natural Language Processing problems.
**LICENSE**
**REFERENCES**
[1] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.<br>
[2] Mike Schuster and Kuldip K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.<br>
[3] Jeffrey Pennington, Richard Socher, and Christopher D. Manning, “GloVe: Global Vectors for Word Representation.”<br>
[4] Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau, “Sentiment Analysis of Twitter Data.”<br>
[5] Alex Graves and Jürgen Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Networks, vol. 18, no. 5, pp. 602–610, 2005.
# Using geoprocessing tools
In ArcGIS API for Python, geoprocessing toolboxes and the tools within them are represented as a Python module and functions within that module. To learn more about this organization, refer to the page titled [Accessing geoprocessing tools](https://developers.arcgis.com/python/guide/accessing-geoprocessing-tools/). In this part of the guide, we will cover:
- [Invoking geoprocessing tools](#invoking-geoprocessing-tools)
- [Understanding tool input parameter and output return types](#understanding-tool-input-parameter-and-output-return-types)
- [Using helper types](#using-helper-types)
- [Using strings as input](#using-strings-as-input)
- [Tools with multiple outputs](#tools-with-multiple-outputs)
- [Invoking tools that create multiple outputs](#invoking-tools-that-create-multiple-outputs)
- [Using named tuple to access multiple outputs](#using-named-tuple-to-access-multiple-outputs)
- [Tools that export map image layer as output](#tools-that-export-map-image-layer-as-output)
<a id="invoking-geoprocessing-tools"></a>
## Invoking Geoprocessing Tools
You can execute a geoprocessing tool easily by importing its toolbox as a module and calling the function for the tool. Let us see how to execute the `extract_zion_data` tool from the Zion toolbox URL:
```
# connect to ArcGIS Online
from arcgis.gis import GIS
from arcgis.geoprocessing import import_toolbox
gis = GIS()
# import the Zion toolbox
zion_toolbox_url = 'http://gis.ices.dk/gis/rest/services/Tools/ExtractZionData/GPServer'
zion = import_toolbox(zion_toolbox_url)
result = zion.extract_zion_data()
```
Thus, executing a geoprocessing tool is that simple. Let us learn a few more concepts that will help in using these tools efficiently.
<a id="understanding-tool-input-parameter-and-output-return-types"></a>
## Understanding tool input parameter and output return types
The functions for calling geoprocessing tools can accept and return built-in Python types such as str, int, bool, float, dicts, datetime.datetime as well as some helper types defined in the ArcGIS API for Python such as the following:
* `arcgis.features.FeatureSet` - a set of features
* `arcgis.geoprocessing.LinearUnit` - linear distance with specified units
* `arcgis.geoprocessing.DataFile` - a url or item id referencing data
* `arcgis.geoprocessing.RasterData` - url or item id and format of raster data
The tools can also accept lists of the above types.
**Note**: When the helper types are used as input, the function also accepts strings in their place. For example '5 Miles' can be passed as an input instead of LinearUnit(5, 'Miles') and a URL can be passed instead of a `DataFile` or `RasterData` input.
Some geoprocessing tools are configured to return an `arcgis.mapping.MapImageLayer` for visualizing the results of the tool.
In all cases, the documentation of the tool function indicates the type of input parameters and the output values.
<a id="using-helper-types"></a>
### Using helper types
The helper types (`LinearUnit`, `DataFile` and `RasterData`) defined in the `arcgis.geoprocessing` module are simple classes that hold strings or URLs and have a dictionary representation.
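For example, a linear distance of five miles can be constructed explicitly (the equivalent string form, '5 Miles', is shown in the next section):
```
from arcgis.geoprocessing import LinearUnit

viewshed_distance = LinearUnit(5, 'Miles')
```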
The `extract_zion_data()` tool invoked above returns an output zip file as a `DataFile`:
```
type(result)
```
The output `Datafile` can be queried as shown in the snippet below.
```
result
```
The value types such as `DataFile` include helpful methods such as download:
```
result.download()
```
<a id="using-strings-as-input"></a>
### Using strings as input
Strings can also be used as inputs in place of the helper types such as `LinearUnit`, `RasterData` and `DataFile`.
The example below calls the viewshed tool to compute and display the geographical area that is visible from a clicked location on the map. The function accepts an observation point as a `FeatureSet` and a viewshed distance as a `LinearUnit`, and returns a `FeatureSet`:
```
viewshed = import_toolbox('http://sampleserver1.arcgisonline.com/ArcGIS/rest/services/Elevation/ESRI_Elevation_World/GPServer')
help(viewshed.viewshed)
import arcgis
arcgis.env.out_spatial_reference = 4326
map = gis.map('South San Francisco', zoomlevel=12)
map
```

The code snippet below adds an event listener to the map, such that when clicked, `get_viewshed()` is called with the map widget and clicked point geometry as inputs. The event handler creates a `FeatureSet` from the clicked point geometry, and uses the string '5 Miles' as input for the viewshed_distance parameter instead of creating a `LinearUnit` object. These are passed into the viewshed function that returns the viewshed from the observation point. The map widget is able to draw the returned `FeatureSet` using its `draw()` method:
```
from arcgis.features import Feature, FeatureSet
def get_viewshed(m, g):
res = viewshed.viewshed(FeatureSet([Feature(g)]),"5 Miles") # "5 Miles" or LinearUnit(5, 'Miles') can be passed as input
m.draw(res)
map.on_click(get_viewshed)
```
<a id="tools-with-multiple-outputs"></a>
## Tools with multiple outputs
Some Geoprocessing tools can return multiple results. For these tools, the corresponding function returns the multiple output values as a [named tuple](https://docs.python.org/3/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields).
The example below uses a tool that returns multiple outputs:
```
sandiego_toolbox_url = 'https://gis-public.co.san-diego.ca.us/arcgis/rest/services/InitialResearchPacketCSV_Phase2/GPServer'
multioutput_tbx = import_toolbox(sandiego_toolbox_url)
help(multioutput_tbx.initial_research_packet_csv)
```
<a id="invoking-tools-that-create-multiple-outputs"></a>
### Invoking tools that create multiple outputs
The code snippet below shows how multiple outputs returned from a tool can be automatically unpacked by Python into multiple variables. Also, since we're not interested in the job status output, we can discard it using "_" as the variable name:
```
report_output_csv_file, output_map_flags_file, soil_output_file, _ = multioutput_tbx.initial_research_packet_csv()
report_output_csv_file
output_map_flags_file
soil_output_file
```
<a id="using-named-tuple-to-access-multiple-outputs"></a>
### Using named tuple to access multiple tool outputs
The code snippet below shows using a named tuple to access the multiple outputs returned from the tool:
```
results = multioutput_tbx.initial_research_packet_csv()
results.report_output_csv_file
results.job_status
```
<a id="tools-that-export-map-image-layer-as-output"></a>
## Tools that export MapImageLayer as output
Some Geoprocessing tools are configured to return their output as MapImageLayer for easier visualization of the results. The resultant layer can be added to a map or queried.
An example of such a tool is below:
```
hotspots = import_toolbox('https://sampleserver6.arcgisonline.com/arcgis/rest/services/911CallsHotspot/GPServer')
help(hotspots.execute_911_calls_hotspot)
result_layer, output_features, hotspot_raster = hotspots.execute_911_calls_hotspot()
result_layer
hotspot_raster
```
The resultant hotspot raster can be visualized in the Jupyter Notebook using the code snippet below:
```
from IPython.display import Image
Image(hotspot_raster['mapImage']['href'])
```
# Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
### Overview
- [Introducing a simple linear regression model](#Introducing-a-simple-linear-regression-model)
- [Exploring the Housing Dataset](#Exploring-the-Housing-Dataset)
- [Visualizing the important characteristics of a dataset](#Visualizing-the-important-characteristics-of-a-dataset)
- [Implementing an ordinary least squares linear regression model](#Implementing-an-ordinary-least-squares-linear-regression-model)
- [Solving regression for regression parameters with gradient descent](#Solving-regression-for-regression-parameters-with-gradient-descent)
- [Estimating the coefficient of a regression model via scikit-learn](#Estimating-the-coefficient-of-a-regression-model-via-scikit-learn)
- [Fitting a robust regression model using RANSAC](#Fitting-a-robust-regression-model-using-RANSAC)
- [Evaluating the performance of linear regression models](#Evaluating-the-performance-of-linear-regression-models)
- [Using regularized methods for regression](#Using-regularized-methods-for-regression)
- [Turning a linear regression model into a curve - polynomial regression](#Turning-a-linear-regression-model-into-a-curve---polynomial-regression)
- [Modeling nonlinear relationships in the Housing Dataset](#Modeling-nonlinear-relationships-in-the-Housing-Dataset)
- [Dealing with nonlinear relationships using random forests](#Dealing-with-nonlinear-relationships-using-random-forests)
- [Decision tree regression](#Decision-tree-regression)
- [Random forest regression](#Random-forest-regression)
- [Summary](#Summary)
<br>
<br>
```
from IPython.display import Image
%matplotlib inline
```
# Introducing a simple linear regression model
#### Univariate Model
$$
y = w_0 + w_1 x
$$
Relationship between
- a single feature (**explanatory variable**) $x$
- a continous target (**response**) variable $y$
```
Image(filename='./images/10_01.png', width=500)
```
- **regression line** : the best-fit line
- **offsets** or **residuals**: the gap between the regression line and the sample points
#### Multivariate Model
$$
y = w_0 + w_1 x_1 + \dots + w_m x_m
$$
<br>
<br>
# Exploring the Housing dataset
- Information about houses in the suburbs of Boston
- Collected by D. Harrison and D.L. Rubinfeld in 1978
- 506 samples
Source: [https://archive.ics.uci.edu/ml/datasets/Housing](https://archive.ics.uci.edu/ml/datasets/Housing)
Attributes:
<pre>
1. CRIM per capita crime rate by town
2. ZN proportion of residential land zoned for lots over
25,000 sq.ft.
3. INDUS proportion of non-retail business acres per town
4. CHAS Charles River dummy variable (= 1 if tract bounds
river; 0 otherwise)
5. NOX nitric oxides concentration (parts per 10 million)
6. RM average number of rooms per dwelling
7. AGE proportion of owner-occupied units built prior to 1940
8. DIS weighted distances to five Boston employment centres
9. RAD index of accessibility to radial highways
10. TAX full-value property-tax rate per $10,000
11. PTRATIO pupil-teacher ratio by town
12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks
by town
13. LSTAT % lower status of the population
14. MEDV Median value of owner-occupied homes in $1000's
</pre>
We'll consider **MEDV** as our target variable.
```
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/'
'housing/housing.data',
header=None,
sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
```
<br>
<br>
## Visualizing the important characteristics of a dataset
#### Scatter plot matrix
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], size=2.5)
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
```
#### Correlation Matrix
- a scaled version of the covariance matrix
- each entry contains the **Pearson product-moment correlation coefficients** (**Pearson's r**)
- quantifies **linear** relationship between features
- ranges in $[-1,1]$
- $r=1$ perfect positive correlation
- $r=0$ no correlation
- $r=-1$ perfect negative correlation
$$
r = \frac{
\sum_{i=1}^n [(x^{(i)}-\mu_x)(y^{(i)}-\mu_y)]
}{
\sqrt{\sum_{i=1}^n (x^{(i)}-\mu_x)^2}
\sqrt{\sum_{i=1}^n (y^{(i)}-\mu_y)^2}
} =
\frac{\sigma_{xy}}{\sigma_x\sigma_y}
$$
```
import numpy as np
cm = np.corrcoef(df[cols].values.T)
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
# plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
```
- MEDV has large correlation with LSTAT and RM
- The relation between MEDV ~ LSTAT may not be linear
- The relation between MEDV ~ RM looks linear
```
sns.reset_orig()
%matplotlib inline
```
<br>
<br>
# Implementing an ordinary least squares (OLS) linear regression model
## Solving regression for regression parameters with gradient descent
#### OLS Cost Function (Sum of Squred Errors, SSE)
$$
J(w) = \frac12 \sum_{i=1}^n (y^{(i)} - \hat y^{(i)})^2 = \frac12 \| y - Xw - \mathbb{1}w_0\|^2
$$
- $\hat y^{(i)} = w^T x^{(i)} $ is the predicted value
- OLS linear regression can be understood as Adaline without the step function, which converts the linear response $w^T x$ into $\{-1,1\}$.
#### Gradient Descent (refresh)
$$
w_{k+1} = w_k - \eta_k \nabla J(w_k), \;\; k=1,2,\dots
$$
- $\eta_k>0$ is the learning rate
- $$
\nabla J(w_k) =
\begin{bmatrix} -X^T(y-Xw- \mathbb{1}w_0) \\
-\mathbb{1}^T(y-Xw- \mathbb{1}w_0)
\end{bmatrix}
$$
```
class LinearRegressionGD(object):
def __init__(self, eta=0.001, n_iter=20):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
return self.net_input(X)
X = df[['RM']].values
y = df[['MEDV']].values
y.shape
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
#y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
y_std = sc_y.fit_transform(y).flatten()
y_std.shape
lr = LinearRegressionGD()
lr.fit(X_std, y_std)
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.tight_layout()
# plt.savefig('./figures/cost.png', dpi=300)
plt.show()
def lin_regplot(X, y, model):
plt.scatter(X, y, c='lightblue')
plt.plot(X, model.predict(X), color='red', linewidth=2)
return
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
plt.tight_layout()
# plt.savefig('./figures/gradient_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
num_rooms_std = sc_x.transform(np.array([[5.0]]))
price_std = lr.predict(num_rooms_std)
print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std))
```
<br>
<br>
## Estimating the coefficient of a regression model via scikit-learn
```
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y)
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
```
The solution is different from the previous result, since the data is **not** normalized here.
```
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.tight_layout()
# plt.savefig('./figures/scikit_lr_fit.png', dpi=300)
plt.show()
```
<br>
<br>
# Fitting a robust regression model using RANSAC (RANdom SAmple Consensus)
- Linear regression models can be heavily affected by outliers
- A very small subset of data can have a big impact on the estimated model coefficients
- Removing outliers is not easy
RANSAC algorithm:
1. Select a random subset of samples to be *inliers* and fit the model
2. Test all other data points against the fitted model, and add those points that fall within a user-defined tolerance to inliers
3. Refit the model using all inliers.
4. Estimate the error of the fitted model vs. the inliers
5. Terminate if the performance meets a user-defined threshold, or if a fixed number of iterations has been reached.
```
from sklearn.linear_model import RANSACRegressor
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
loss='absolute_loss',
residual_threshold=5.0, # problem-specific
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/ransac_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
```
<br>
<br>
# Evaluating the performance of linear regression models
```
from sklearn.model_selection import train_test_split
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
```
#### Residual Plot
- It's not easy to plot linear regression line in general, since the model uses multiple explanatory variables
- Residual plots are used for:
- detect nonlinearity
- detect outliers
- check if errors are randomly distributed
```
plt.scatter(y_train_pred, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
```
If we see patterns in residual plot, it implies that our model didn't capture some explanatory information which leaked into the pattern.
#### MSE (Mean-Square Error)
$$
\text{MSE} = \frac{1}{n} \sum_{i=1}^n \left( y^{(i)} - \hat y^{(i)} \right)^2
$$
#### $R^2$ score
- The fraction of variance captured by the model
- $R^2=1$ : the model fits the data perfectly
$$
R^2 = 1 - \frac{SSE}{SST}, \;\; SST = \sum_{i=1}^n \left( y^{(i)}-\mu_y\right)^2
$$
$$
R^2 = 1 - \frac{MSE}{Var(y)}
$$
```
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
```
The gap in MSE (between train and test) indicates overfitting
<br>
<br>
# Using regularized methods for regression
#### Ridge Regression
$$
J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda \|w\|_2^2
$$
#### LASSO (Least Absolute Shrinkage and Selection Operator)
$$
J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda \|w\|_1
$$
#### Elastic-Net
$$
J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda_1 \|w\|_2^2 + \lambda_2 \|w\|_1
$$
```
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
ridge = Ridge(alpha=1.0)
lasso = Lasso(alpha=1.0)
enet = ElasticNet(alpha=1.0, l1_ratio=0.5)
ridge.fit(X_train, y_train)
lasso.fit(X_train, y_train)
enet.fit(X_train, y_train)
#y_train_pred = lasso.predict(X_train)
y_test_pred_r = ridge.predict(X_test)
y_test_pred_l = lasso.predict(X_test)
y_test_pred_e = enet.predict(X_test)
print("Ridge = ", ridge.coef_)
print("LASSO = ", lasso.coef_)
print("ENET = ",enet.coef_)
# evaluate the regularized models on the held-out test set
print('MSE test (Ridge / LASSO / ElasticNet): %.3f / %.3f / %.3f' % (
        mean_squared_error(y_test, y_test_pred_r),
        mean_squared_error(y_test, y_test_pred_l),
        mean_squared_error(y_test, y_test_pred_e)))
print('R^2 test (Ridge / LASSO / ElasticNet): %.3f / %.3f / %.3f' % (
        r2_score(y_test, y_test_pred_r),
        r2_score(y_test, y_test_pred_l),
        r2_score(y_test, y_test_pred_e)))
```
<br>
<br>
# Turning a linear regression model into a curve - polynomial regression
$$
y = w_0 + w_1 x + w_2 x^2 + \dots + w_d x^d
$$
```
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2)
X_quad = quadratic.fit_transform(X)
# fit linear features
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# fit quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# plot results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/poly_example.png', dpi=300)
plt.show()
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
```
<br>
<br>
## Modeling nonlinear relationships in the Housing Dataset
```
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create quadratic features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# fit features
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# plot results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/polyhouse_example.png', dpi=300)
plt.show()
```
As the model complexity increases, the chance of overfitting increases as well
Transforming the dataset:
```
X = df[['LSTAT']].values
y = df['MEDV'].values
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# fit features
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
regr = regr.fit(X_log, y_sqrt)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# plot results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel('$\sqrt{Price \; in \; \$1000\'s [MEDV]}$')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/transform_example.png', dpi=300)
plt.show()
```
<br>
<br>
# Dealing with nonlinear relationships using random forests
We use Information Gain (IG) to find the feature to split, which will lead to the maximal IG:
$$
IG(D_p, x_i) = I(D_p) - \frac{N_{left}}{N_p} I(D_{left}) - \frac{N_{right}}{N_p} I(D_{right})
$$
where $I$ is the impurity measure.
We've used e.g. entropy as the impurity measure for classification; here, for a continuous target variable, we use the MSE at node $t$ instead:
$$
I(t) = MSE(t) = \frac{1}{N_t} \sum_{i \in D_t} (y^{(i)} - \bar y_t)^2
$$
where $\bar y_t$ is the sample mean,
$$
\bar y_t = \frac{1}{N_t} \sum_{i \in D_t} y^{(i)}
$$
## Decision tree regression
```
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort()
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
# plt.savefig('./figures/tree_regression.png', dpi=300)
plt.show()
r2 = r2_score(y, tree.predict(X))
print("R^2 = ", r2)
```
Disadvantage: it does not capture the continuity and differentiability of the desired prediction
<br>
<br>
## Random forest regression
Advantages:
- better generalization than individual trees
- less sensitive to outliers in the dataset
- don't require much parameter tuning
```
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
```
<br>
<br>
# Summary
- Univariate and multivariate linear models
- RANSAC to deal with outliers
- Regularization: control model complexity to avoid overfitting