path | concatenated_notebook
---|---|
tutorials/OnnxCntkImport.ipynb | ###Markdown
Importing models from ONNX to CNTK In this tutorial, we will demonstrate how to import ONNX models into CNTK. Installation To import from ONNX, simply make sure you have CNTK 2.3.1 or higher installed. Follow CNTK installation instructions __[here](https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine)__. API Usage To load an ONNX model, specify the ONNX format for the format parameter of the load function. **Using Python API**
```python
import cntk as C
z = C.Function.load(<path of your ONNX model file>, format=C.ModelFormat.ONNX)
```
**Using C# API**
```csharp
Function modelFunc = Function.load(<path of your ONNX model file>, ModelFormat.ONNX);
```
Trying it out with VGG-19 Now let's go through an example of loading a pretrained ONNX model into CNTK using Python. Step 1: Prepare an ONNX model to import You can find a collection of pretrained ONNX models [here](https://github.com/onnx/models). For this tutorial, we will be using the pretrained VGG-19 model. Download this file https://s3.amazonaws.com/download.onnx/models/vgg19.tar.gz to your working directory, and extract the tarball of the model. Inside the extracted folder you will find a protobuf file `model.pb`, which is the serialized ONNX model. Step 2: Import the ONNX model into CNTK Now let's load this ONNX model into CNTK.
###Code
import cntk as C
z = C.Function.load("vgg19/model.pb", device=C.device.cpu(), format=C.ModelFormat.ONNX)
###Output
_____no_output_____
###Markdown
Step 3: Prepare an image for inference Now that we've successfully loaded our model into CNTK, let's run inference on an input image to test it out. Here we will use an image of a husky.
###Code
import numpy as np
from PIL import Image
from IPython.core.display import display
img = Image.open("assets/dog.jpg")
display(img) #show the image
###Output
_____no_output_____
###Markdown
In the following code block, we prepare our input image for evaluation. First, we resize the image and perform mean subtraction. Then, we flip the image's channel order from RGB to BGR (PIL loads the image in RGB order, but CNTK expects BGR). Finally, we transpose the indices from `(image_height, image_width, num_color_channels)` to `(num_color_channels, image_height, image_width)` using the `np.rollaxis()` function, to adhere to the shape expected by CNTK.
###Code
img = img.resize((224,224))
rgb_img = np.asarray(img, dtype=np.float32) - 128
bgr_img = rgb_img[..., [2,1,0]]
img_data = np.ascontiguousarray(np.rollaxis(bgr_img,2))
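# After rolling the channel axis to the front, img_data has shape
# (num_color_channels, height, width) = (3, 224, 224), which is the channels-first
# layout CNTK expects.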
###Output
_____no_output_____
###Markdown
Step 4: Evaluate model on image Now let's evaluate our image using the VGG-19 model and examine the results, in this case the top category identified by the network.
###Code
predictions = np.squeeze(z.eval({z.arguments[0]:[img_data]}))
top_class = np.argmax(predictions)
print(top_class)
###Output
248
###Markdown
In order to more easily comprehend the results, let's go ahead and download a [pickled dictionary](https://gist.github.com/yrevar/6135f1bd8dcf2e0cc683) that maps the 1000 ImageNet class IDs to human-readable labels. Extract the pickle file to your working directory.
###Code
import pickle
labels_dict = pickle.load(open("imagenet1000_clsid_to_human.pkl", "rb"))
print(labels_dict[top_class])
###Output
Eskimo dog, husky
|
machine_learning/.ipynb_checkpoints/cabi_ml_fitting_sandbox-checkpoint.ipynb | ###Markdown
CaBi ML fitting sandbox 5/27: Sandbox created from copying the Champion nb. * At this point, I've found that dc_pop is more predictive than the dock/station variables and cabi_active_members_day_key and daylight_hours is more predictive than cabi_active_members_monthly * Now we can try tweaking other things * After changing the cross-validation to include shuffling, everything performs better, including Ridge * This is probably a good thing? It shows that the model is more generalizable, and that any issues we had in CV earlier were because the non-shuffled folds weren't each representative of the full sample 0. Data load, shaping, and split * Read in data from AWS * Check for high pairwise correlation * Encode time variable (day_of_year) as cyclical * Split into Xtrain, Xtest, ytrain, ytest based on date * Specify feature and target columns
###Code
# Read in data from AWS
from util_functions import *
import numpy as np
import pandas as pd
import time
start_time = time.perf_counter()
set_env_path()
conn, cur = aws_connect()
# fullquery contains all of the variables within consideration
fullquery = """
SELECT
EXTRACT(DOY FROM date) as day_of_year,
date,
daylight_hours,
apparenttemperaturehigh,
apparenttemperaturelow,
cloudcover,
dewpoint,
humidity,
precipaccumulation,
precipintensitymax,
precipprobability,
rain,
snow,
visibility,
windspeed,
us_holiday,
nats_single,
nats_double,
dc_bike_event,
dc_pop,
cabi_bikes_avail,
cabi_stations_alx,
cabi_stations_arl,
cabi_stations_ffx,
cabi_stations_mcn,
cabi_stations_mcs,
cabi_stations_wdc,
cabi_docks_alx,
cabi_docks_arl,
cabi_docks_ffx,
cabi_docks_mcn,
cabi_docks_mcs,
cabi_docks_wdc,
cabi_stations_tot,
cabi_docks_tot,
cabi_dur_empty_wdc,
cabi_dur_full_wdc,
cabi_dur_empty_arl,
cabi_dur_full_arl,
cabi_dur_full_alx,
cabi_dur_empty_alx,
cabi_dur_empty_mcs,
cabi_dur_full_mcs,
cabi_dur_full_mcn,
cabi_dur_empty_mcn,
cabi_dur_full_ffx,
cabi_dur_empty_ffx,
cabi_dur_empty_tot,
cabi_dur_full_tot,
cabi_active_members_day_key,
cabi_active_members_monthly,
cabi_active_members_annual,
cabi_trips_wdc_to_wdc,
cabi_trips_wdc_to_wdc_casual
from final_db"""
query = """
SELECT
EXTRACT(DOY FROM date) as day_of_year,
date,
daylight_hours,
apparenttemperaturehigh,
cloudcover,
humidity,
precipaccumulation,
precipintensitymax,
precipprobability,
rain,
snow,
visibility,
windspeed,
us_holiday,
nats_single,
nats_double,
dc_bike_event,
dc_pop,
cabi_dur_empty_arl,
cabi_dur_full_arl,
cabi_dur_full_alx,
cabi_dur_empty_alx,
cabi_dur_empty_mcs,
cabi_dur_full_mcs,
cabi_dur_full_mcn,
cabi_dur_empty_mcn,
cabi_trips_wdc_to_wdc,
cabi_trips_wdc_to_wdc_casual
from final_db"""
pd.options.display.max_rows = None
pd.options.display.max_columns = None
df = pd.read_sql(query, con=conn)
# Setting date to index for easier splitting
df.set_index(df.date, drop=True, inplace=True)
df.index = pd.to_datetime(df.index)
print("We have {} instances and {} features".format(*df.shape))
# Summary statistics
df.describe(percentiles=[.5]).round(3).transpose()
def print_highly_correlated(df, features, threshold=0.75):
"""
Prints highly correlated feature pairs in df.
"""
corr_df = df[features].corr()
# Select pairs above threshold
correlated_features = np.where(np.abs(corr_df) > threshold)
# Avoid duplication
correlated_features = [(corr_df.iloc[x,y], x, y) for x, y in zip(*correlated_features) if x != y and x < y]
# Sort by abs(correlation)
s_corr_list = sorted(correlated_features, key=lambda x: -abs(x[0]))
print("There are {} feature pairs with pairwise correlation above {}".format(len(s_corr_list), threshold))
for v, i, j in s_corr_list:
cols = df[features].columns
print("{} and {} = {:0.3f}".format(corr_df.index[i], corr_df.columns[j], v))
print_highly_correlated(df, df.columns)
# Encode day_of_year as cyclical
df['sin_day_of_year'] = np.sin(2*np.pi*df.day_of_year/365)
df['cos_day_of_year'] = np.cos(2*np.pi*df.day_of_year/365)
df.sample(100).plot.scatter('sin_day_of_year','cos_day_of_year').set_aspect('equal')
###Output
_____no_output_____
###Markdown
* Split into Xtrain, Xtest, ytrain, ytest based on date * Training dates = 2013-01-01 to 2016-12-31 * Test dates = 2017-01-01 to 2017-09-08 * New data (coincides with beginning of dockless pilot) = 2017-09-09 to present
###Code
# Train test split
# This can be tweaked, but we use 5-fold cross-validation to pick the model so that shouldn't change
train = df.loc['2013-01-01':'2016-12-31']
test = df.loc['2017-01-01':'2017-09-08']
print(train.shape, test.shape)
tr = train.shape[0]
te = test.shape[0]
trpct = tr/(tr+te)
tepct = te/(tr+te)
print("{:0.3f} percent of the data is in the training set and {:0.3f} percent is in the test set".format(trpct, tepct))
# Specify columns to keep and drop for X and y
drop_cols = ['date', 'day_of_year']
y_cols = ['cabi_trips_wdc_to_wdc', 'cabi_trips_wdc_to_wdc_casual']
feature_cols = [col for col in df.columns if (col not in y_cols) & (col not in drop_cols)]
# X y split
Xtrain_raw = train[feature_cols]
# Our target variable here is all DC to DC trips
ytrain = train[y_cols[0]]
Xtest_raw = test[feature_cols]
ytest = test[y_cols[0]]
print(Xtrain_raw.shape, ytrain.shape, Xtest_raw.shape, ytest.shape)
###Output
(1461, 26) (1461,) (251, 26) (251,)
###Markdown
1. Preprocessing We want to use PolynomialFeatures and StandardScaler in a Pipeline, but we only want to scale continuous features. Here, I do the polynomial transformation first and then feed it through a pipeline because I wasn't able to get it all working in one pipeline. * Use PolynomialFeatures to create quadratic and interaction terms * Convert back to DataFrame * Drop redundant variables * Use Pipeline and FeatureUnion to selectively scale/ignore certain variables * Fit and transform using pipeline to get final Xtrain and Xtest
###Code
# Imports and custom classes
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import PolynomialFeatures, StandardScaler, MinMaxScaler
from sklearn.base import BaseEstimator, TransformerMixin
class Columns(BaseEstimator, TransformerMixin):
"""
This is a custom transformer for splitting the data into subsets for FeatureUnion.
"""
def __init__(self, names=None):
self.names = names
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X):
return X[self.names]
# Use PolynomialFeatures to create quadratic and interaction terms
# Should ultimately be part of a Pipeline, but I had issues because
# PF returns an array and Columns requires a df
pf = PolynomialFeatures(1, include_bias=False)
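# Note: with degree=1 here, PolynomialFeatures simply reproduces the original
# columns; a degree of 2 would be needed to actually generate the quadratic and
# interaction terms described in the markdown above.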
Xtrain_pf_array = pf.fit_transform(Xtrain_raw)
Xtest_pf_array = pf.transform(Xtest_raw)
# Get feature names
Xtrain_cols = pf.get_feature_names(Xtrain_raw.columns)
# Convert arrays to dfs with the new pf column names
Xtrain_pf = pd.DataFrame(Xtrain_pf_array, columns=Xtrain_cols)
Xtest_pf = pd.DataFrame(Xtest_pf_array, columns=Xtrain_cols)
print(Xtrain_pf.shape, Xtest_pf.shape)
# A lot of these variables are redundant, especially squared dummy variables
# All of these variables listed next are 'binary' but only some are meaningful
bin_vars = [col for col in Xtrain_pf.columns if Xtrain_pf[col].nunique() == 2]
bin_vars
# Dropping squared dummies and nonsensical interaction terms
# This part can be expanded. There's a lot of noise after PF
to_drop = [
'rain^2', 'snow^2', 'us_holiday^2', 'nats_single^2', 'nats_double^2',
'dc_bike_event^2', 'sin_day_of_year^2', 'cos_day_of_year^2',
'sin_day_of_year cos_day_of_year'
]
'''
Xtrain_pf2 = Xtrain_pf.drop(labels=to_drop, axis=1)
Xtest_pf2 = Xtest_pf.drop(labels=to_drop, axis=1)
'''
Xtrain_pf2 = Xtrain_pf.copy()
Xtest_pf2 = Xtest_pf.copy()
print(Xtrain_pf2.shape, Xtest_pf2.shape)
Xtrain_pf2.head()
# Defining binary and continuous variables
# We have normal 0,1 binary variables, binary variables outside 0,1 that were created by PF, and continuous variables
# We want to ignore the 0,1s, MinMaxScale the non 0,1 binary variables, and StandardScale the continuous variables
binary = ['rain', 'snow', 'us_holiday', 'nats_single', 'nats_double', 'dc_bike_event']
cont = [col for col in Xtrain_pf2.columns if (col not in binary)]
# FeatureUnion in our pipeline shifts the ordering of the variables so we need to save the ordering here
cols = binary + cont
pipeline = Pipeline([
('features', FeatureUnion([
('binarypf', Pipeline([
('binpfcols', Columns(names=binary)),
('minmax', MinMaxScaler())
])),
('continuous', Pipeline([
('contcols', Columns(names=cont)),
('scaler', StandardScaler())
]))
]))
])
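# (Sketch, not from the original notebook.) The markdown above notes that the
# polynomial step and the scalers couldn't be combined in one Pipeline. If
# scikit-learn >= 0.20 is available, ColumnTransformer selects DataFrame columns
# by name directly, so the Columns/FeatureUnion pattern above could be replaced
# with a single step, e.g.:
#
# from sklearn.compose import ColumnTransformer
# preprocess = ColumnTransformer([
#     ('minmax', MinMaxScaler(), binary),
#     ('scaler', StandardScaler(), cont),
# ])
# Xtrain_alt = preprocess.fit_transform(Xtrain_pf2)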
# Fit and transform to create our final Xtrain and Xtest
pipeline.fit(Xtrain_pf2)
Xtrain_scaled = pipeline.transform(Xtrain_pf2)
Xtest_scaled = pipeline.transform(Xtest_pf2)
# Put everything back into dfs
Xtrain = pd.DataFrame(Xtrain_scaled, columns=cols)
Xtest = pd.DataFrame(Xtest_scaled, columns=cols)
print(Xtrain.shape, Xtest.shape)
Xtrain.describe(percentiles=[.5]).round(3).transpose()
# Appending train and test to get full dataset for cross-validation
Xfull = Xtrain.append(Xtest)
yfull = ytrain.append(ytest)
print(Xfull.shape, yfull.shape)
###Output
(1712, 26) (1712,)
###Markdown
2. Model Fitting
###Code
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import median_absolute_error as medae
from sklearn.metrics import explained_variance_score as evs
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
def score_model(model, alpha=False):
"""
Fits a model using the training set, predicts using the test set, and then calculates
and reports goodness of fit metrics and alpha if specified and available.
"""
model.fit(Xtrain, ytrain)
yhat = model.predict(Xtest)
r2 = r2_score(ytest, yhat)
me = mse(ytest, yhat)
ae = mae(ytest, yhat)
mede = medae(ytest, yhat)
ev = evs(ytest, yhat)
if alpha == True:
print("Results from {}: \nr2={:0.3f} \nMSE={:0.3f} \
\nMAE={:0.3f} \nMEDAE={:0.3f} \nEVS={:0.3f} \nalpha={:0.3f}".format(model, r2, me,
ae, mede, ev, model.alpha_))
else:
print("Results from {}: \nr2={:0.3f} \nMSE={:0.3f} \
\nMAE={:0.3f} \nMEDAE={:0.3f} \nEVS={:0.3f}".format(model, r2, me, ae, mede, ev))
def cv_score(model, cv=5):
"""
Evaluates a model by 5-fold cross-validation and prints mean and 2*stdev of scores.
Shuffles before cross-validation but sets random_state=7 for reproducibility.
"""
kf = KFold(n_splits=cv, shuffle=True, random_state=7)
scores = cross_val_score(model, Xfull, yfull, cv=kf)
print(scores)
print("Accuracy: {:0.3f} (+/- {:0.3f})".format(scores.mean(), scores.std() * 2))
'''Elastic Net'''
from sklearn.linear_model import ElasticNetCV
t = time.perf_counter()
# Alphas to search over
# Our alpha is usually in the low double digits
# This sets our search space to 250 steps between 10^0=1 and 10^2=100
alphas = np.logspace(0, 2, 250)
# Suggested l1_ratio from docs
l1_ratio = [.1, .5, .7, .9, .95, .99, 1]
en = ElasticNetCV(l1_ratio=l1_ratio, alphas=alphas, fit_intercept=True, normalize=False)
score_model(en, alpha=True)
print("L1 ratio=",en.l1_ratio_)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
'''Lasso'''
from sklearn.linear_model import LassoCV
t = time.perf_counter()
lasso = LassoCV(alphas=alphas, n_alphas=250, fit_intercept=True, normalize=False)
score_model(lasso, alpha=True)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# Which variables were selected?
# Put coefficients and variable names in df
lassodf = pd.DataFrame(lasso.coef_, index=Xtrain.columns)
# Select nonzeros
results = lassodf[(lassodf.T != 0).any()]
# Sort by magnitude
results['sorted'] = results[0].abs()
results.sort_values(by='sorted', inplace=True, ascending=False)
print("Lasso chooses {} variables".format(len(results)))
print(results)
'''Ridge'''
from sklearn.linear_model import RidgeCV
t = time.perf_counter()
rr = RidgeCV(alphas=alphas, fit_intercept=True, normalize=False)
score_model(rr, alpha=True)
cv_score(rr)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
'''RF'''
from sklearn.ensemble import RandomForestRegressor
t = time.perf_counter()
rf = RandomForestRegressor()
score_model(rf)
cv_score(rf)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
t = time.perf_counter()
cv_score(lasso)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
end_time = (time.perf_counter() - start_time)/60
print("This notebook took {:0.2f} minutes to run".format(end_time))
###Output
This notebook took 0.12 minutes to run
|
notebooks/asroma.ipynb | ###Markdown
Single website pages histogram Plot a histogram of pages crawled from a single website. 1. Read file
###Code
%matplotlib inline
# Importing libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
FILEPATH_PREFIX = '../tag_hist/data'
SPIDERNAME = 'asroma'
FILENAME = '2019-03-24T17-27-53.csv'
FILEPATH = '{}/{}/{}'.format(FILEPATH_PREFIX, SPIDERNAME, FILENAME)
FILEPATH
df = pd.read_csv(FILEPATH)
###Output
_____no_output_____
###Markdown
2. Data analysis
###Code
print("First 5 rows")
print("------------")
df.head()
print("No. of rows and columns")
print("-----------------------")
df.shape
print("Check null values")
print("-----------------")
df.isnull().any()
print("Check duplicate values")
print("----------------------")
len(df['url'].unique()) != df.shape[0]
print("DataFrame column types")
print("----------------------")
df.info()
print("Some stats")
print("----------------")
df.describe()
###Output
Some stats
----------------
###Markdown
3. Plot a histogram
###Code
df.hist(column = 'tags', bins = 1000, figsize = (12,12))
print("most frequent value")
print("-------------------")
df['tags'].value_counts().idxmax()
###Output
most frequent value
-------------------
###Markdown
Zoomed in histogram
###Code
RANGE_MIN = 1000
RANGE_MAX = 2000
df.hist(column = 'tags', bins = 1000, figsize = (12,12))
plt.xlim(RANGE_MIN, RANGE_MAX)
###Output
_____no_output_____ |
2D_Flow/vini_overland_flow_hugosite.ipynb | ###Markdown
The deAlmeida Overland Flow Component For more Landlab tutorials, click here: https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html This notebook illustrates running the deAlmeida overland flow component in an extremely simple-minded way on a real topography, then shows it creating a flood sequence along an inclined surface with an oscillating water surface at one end. First, import what we'll need:
###Code
from landlab.components.overland_flow import OverlandFlow
from landlab.plot.imshow import imshow_grid
from landlab.plot.colors import water_colormap
from landlab import RasterModelGrid
from landlab.io.esri_ascii import read_esri_ascii
from landlab.plot.graph import plot_graph
from landlab.plot.graph import plot_nodes
from matplotlib.pyplot import figure
import numpy as np
from time import time
%matplotlib inline
###Output
_____no_output_____
###Markdown
Pick the initial and run conditions
###Code
run_time = 100 # duration of run, (s)
h_init = 0.1 # initial thin layer of water (m)
n = 0.01 # roughness coefficient, (s/m^(1/3))
g = 9.8 # gravity (m/s^2)
alpha = 0.7 # time-step factor (nondimensional; from Bates et al., 2010)
u = 0.4 # constant velocity (m/s, de Almeida et al., 2012)
run_time_slices = (10, 50, 100)
###Output
_____no_output_____
###Markdown
Elapsed time starts at 1 second. This prevents errors when setting our boundary conditions.
###Code
elapsed_time = 1.0
###Output
_____no_output_____
###Markdown
Import hugo_site.asc dem
###Code
gm,z = read_esri_ascii("espin/lessons/landlab/hugo_site.asc", name='topographic__elevation')
# Indicates that a boundary node is closed
gm.status_at_node[z<0.0] = gm.BC_NODE_IS_CLOSED
###Output
_____no_output_____
###Markdown
Use Landlab methods to import an ARC ascii grid, and load the data into the field that the component needs to look at to get the data. This loads the elevation data, z, into a "field" in the grid itself, defined on the nodes. We can get at this data with this syntax:
###Code
np.all(gm.at_node['topographic__elevation'] == z)
# check DEM
imshow_grid(gm, 'topographic__elevation')
my_outlet_node = 100 # This DEM was generated using Landlab and the outlet node ID was known
gm.status_at_node[my_outlet_node] = gm.BC_NODE_IS_FIXED_VALUE
import matplotlib.pyplot as plt
plt.plot(gm.x_of_node, gm.y_of_node, '.')
###Output
_____no_output_____
###Markdown
Now initialize a couple more grid fields that the component is going to need:
###Code
# rmg.add_zeros('surface_water__depth', at='node') # water depth (m)
gm.add_zeros('surface_water__depth', at='node') # water depth (m)
# rmg.at_node['surface_water__depth'] += h_init
gm.at_node['surface_water__depth'] += h_init
###Output
_____no_output_____
###Markdown
Now instantiate the component itself
###Code
of = OverlandFlow(
gm, steep_slopes=True
) #for stability in steeper environments, we set the steep_slopes flag to True
###Output
_____no_output_____
###Markdown
Now we're going to run the loop that drives the component:
###Code
while elapsed_time < run_time:
# First, we calculate our time step.
dt = of.calc_time_step()
# Now, we can generate overland flow.
of.overland_flow()
# Increased elapsed time
print('Elapsed time: ', elapsed_time)
elapsed_time += dt
imshow_grid(gm, 'surface_water__depth', cmap='Blues')
###Output
_____no_output_____
###Markdown
Now let's get clever, and run a set of time slices:
###Code
elapsed_time = 1.
for t in run_time_slices:
while elapsed_time < t:
# First, we calculate our time step.
dt = of.calc_time_step()
# Now, we can generate overland flow.
of.overland_flow()
# Increased elapsed time
elapsed_time += dt
figure(t)
imshow_grid(gm, 'surface_water__depth', cmap='Blues')
###Output
_____no_output_____ |
notebooks/Clustering/Sentence_embeddings_src.ipynb | ###Markdown
Lang8
###Code
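# (Assumed setup, not shown in this excerpt.) `read_lines` and `model` are used
# below but defined earlier in the notebook; a plausible minimal version would be
# a line-reading helper plus a SentenceTransformer model, e.g.:
#
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer('distilbert-base-nli-mean-tokens')  # model name is an assumption
#
# def read_lines(path):
#     with open(path, encoding='utf-8') as f:
#         return [line.rstrip('\n') for line in f]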
texts = read_lines("../../data_parallel/lang8/lang8_src")
len(texts)
vectors = model.encode(texts, show_progress_bar=True, batch_size=64)
import pickle
with open("data/lang8_train_src_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____
###Markdown
NUCLE
###Code
texts = read_lines("../../data_parallel/nucle/nucle_src")
len(texts)
vectors = model.encode(texts, show_progress_bar=True)
len(vectors)
import pickle
with open("data/nucle_train_src_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____
###Markdown
FCE
###Code
texts = read_lines("../../data_parallel/fce/fce_train_src")
vectors = model.encode(texts, show_progress_bar=True,batch_size=16)
import pickle
with open("data/fce_train_src_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____
###Markdown
WL
###Code
texts = read_lines("../../data_parallel/wi+locness/train_src")
len(texts)
vectors = model.encode(texts, show_progress_bar=True)
import pickle
with open("data/wl_train_src_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
texts = read_lines("../../data_parallel/new_1bw/train_target")
len(texts)
texts = texts[:20000]
vectors = model.encode(texts, show_progress_bar=True)
import pickle
with open("data/1bw_train_tgt_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____
###Markdown
PIE
###Code
texts = read_lines("../../data_parallel/synthetic/a1/a1_corr_train_98.txt")
len(texts)
texts = texts[:20000]
vectors = model.encode(texts, show_progress_bar=True)
with open("data/pie_train_tgt_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____
###Markdown
Blogs
###Code
texts = read_lines("../../data_parallel/blogs/train_tgt")
len(texts)
texts = texts[:30000]
vectors = model.encode(texts, show_progress_bar=True)
with open("data/blogs_train_tgt_embed.pickle", "wb") as f:
pickle.dump(vectors, f)
###Output
_____no_output_____ |
Solutions/Solution.ipynb | ###Markdown
Iris Data Set This data set refers to the iris plant. It contains 3 classes of 50 instances each. The 3 Classes are: * Iris Setosa * Iris Versicolour * Iris Virginica The figures in the data set for each class are based on: 1. sepal length in cm 2. sepal width in cm 3. petal length in cm 4. petal width in cm references: https://archive.ics.uci.edu/ml/datasets/iris 1. Get and load the data Search online for Fisher’s iris data set, find a copy of the data, download it and save it to your repository. If it is not in CSV format, use whatever means (Excel, notepad++, visual studio code, python) to convert it to CSV and save the CSV version to your repository also. Open your Jupyter notebook for this problem sheet, creating a new one if needed, and load the CSV file into a numpy array.
###Code
import numpy as np
#Loading in IRIS.csv
sepalLength, sepalWidth, petalLength, petalWidth = np.loadtxt('iris.csv', delimiter=',', usecols=(0,1,2,3), unpack=True, dtype=float)
classes = np.loadtxt('iris.csv', delimiter=',', usecols=(4), unpack=True, dtype=str)
#printing out the data
for i in range(len(sepalLength)):
print(str(sepalLength[i])+' '+ str(sepalWidth[i]) + ' ' + str(petalLength[i]) + ' ' +str(classes[i]))
###Output
5.1 3.5 1.4 setosa
4.9 3.0 1.4 setosa
4.7 3.2 1.3 setosa
4.6 3.1 1.5 setosa
5.0 3.6 1.4 setosa
5.4 3.9 1.7 setosa
4.6 3.4 1.4 setosa
5.0 3.4 1.5 setosa
4.4 2.9 1.4 setosa
4.9 3.1 1.5 setosa
5.4 3.7 1.5 setosa
4.8 3.4 1.6 setosa
4.8 3.0 1.4 setosa
4.3 3.0 1.1 setosa
5.8 4.0 1.2 setosa
5.7 4.4 1.5 setosa
5.4 3.9 1.3 setosa
5.1 3.5 1.4 setosa
5.7 3.8 1.7 setosa
5.1 3.8 1.5 setosa
5.4 3.4 1.7 setosa
5.1 3.7 1.5 setosa
4.6 3.6 1.0 setosa
5.1 3.3 1.7 setosa
4.8 3.4 1.9 setosa
5.0 3.0 1.6 setosa
5.0 3.4 1.6 setosa
5.2 3.5 1.5 setosa
5.2 3.4 1.4 setosa
4.7 3.2 1.6 setosa
4.8 3.1 1.6 setosa
5.4 3.4 1.5 setosa
7.0 3.2 4.7 versicolor
6.4 3.2 4.5 versicolor
6.9 3.1 4.9 versicolor
5.5 2.3 4.0 versicolor
6.5 2.8 4.6 versicolor
5.7 2.8 4.5 versicolor
6.3 3.3 4.7 versicolor
4.9 2.4 3.3 versicolor
6.6 2.9 4.6 versicolor
5.2 2.7 3.9 versicolor
5.0 2.0 3.5 versicolor
5.9 3.0 4.2 versicolor
6.0 2.2 4.0 versicolor
6.1 2.9 4.7 versicolor
5.6 2.9 3.6 versicolor
6.7 3.1 4.4 versicolor
5.6 3.0 4.5 versicolor
5.8 2.7 4.1 versicolor
6.2 2.2 4.5 versicolor
5.6 2.5 3.9 versicolor
5.9 3.2 4.8 versicolor
6.1 2.8 4.0 versicolor
6.3 2.5 4.9 versicolor
6.1 2.8 4.7 versicolor
6.4 2.9 4.3 versicolor
6.6 3.0 4.4 versicolor
6.8 2.8 4.8 versicolor
6.7 3.0 5.0 versicolor
6.0 2.9 4.5 versicolor
5.7 2.6 3.5 versicolor
5.5 2.4 3.8 versicolor
5.5 2.4 3.7 versicolor
5.8 2.7 3.9 versicolor
6.0 2.7 5.1 versicolor
6.3 3.3 6.0 virginica
5.8 2.7 5.1 virginica
7.1 3.0 5.9 virginica
6.3 2.9 5.6 virginica
6.5 3.0 5.8 virginica
7.6 3.0 6.6 virginica
4.9 2.5 4.5 virginica
7.3 2.9 6.3 virginica
6.7 2.5 5.8 virginica
7.2 3.6 6.1 virginica
6.5 3.2 5.1 virginica
6.4 2.7 5.3 virginica
6.8 3.0 5.5 virginica
5.7 2.5 5.0 virginica
5.8 2.8 5.1 virginica
6.4 3.2 5.3 virginica
6.5 3.0 5.5 virginica
7.7 3.8 6.7 virginica
7.7 2.6 6.9 virginica
6.0 2.2 5.0 virginica
6.9 3.2 5.7 virginica
5.6 2.8 4.9 virginica
7.7 2.8 6.7 virginica
6.3 2.7 4.9 virginica
6.7 3.3 5.7 virginica
7.2 3.2 6.0 virginica
6.2 2.8 4.8 virginica
6.1 3.0 4.9 virginica
6.4 2.8 5.6 virginica
7.2 3.0 5.8 virginica
7.4 2.8 6.1 virginica
7.9 3.8 6.4 virginica
6.4 2.8 5.6 virginica
6.3 2.8 5.1 virginica
###Markdown
3. Create a simple plot The dataset contains five variables: sepal length, sepal width, petal length, petal width, and species. Use pyplot to create a scatter plot of sepal length on the x-axis versus sepal width on the y-axis. Add axis labels and a title to the plot.
###Code
import matplotlib.pyplot as pl
#setting the size of the graph
pl.rcParams['figure.figsize'] = (10.0, 10.0)
pl.scatter(sepalLength, sepalWidth, marker='.')
pl.title('Sepal Width vs Sepal Length')
pl.xlabel('Sepal Length')
pl.ylabel('Sepal Width')
pl.show()
###Output
_____no_output_____
###Markdown
4. Create a more complex plot Re-create the above plot, but this time plot the setosa data points in red, the versicolor data points in green, and the virginica data points in blue. Setosa, versicolor, and virginica are the three possible values of the species variable. Add a legend to the plot showing which species is in which colour.
###Code
import matplotlib.pyplot as pl
#setting the size of the graph
pl.rcParams['figure.figsize'] = (10.0, 10.0)
# adapted from https://stackoverflow.com/questions/19385639/duplicate-items-in-legend-in-matplotlib
s, vc, v = False, False, False
#plotting colours for each type of class
for i in range(len(classes)):
if classes[i] == 'setosa':
pl.scatter(sepalLength[i], sepalWidth[i], color='r',marker='o', label="setosa" if s == False else "")
s = True
elif classes[i] == 'versicolor':
pl.scatter(sepalLength[i], sepalWidth[i], color='g',marker='x', label="versicolor" if vc == False else "")
vc = True
elif classes[i] == 'virginica':
pl.scatter(sepalLength[i], sepalWidth[i], color='b',marker='.', label="virginica" if v == False else "")
v = True
pl.title('Sepal Width vs Sepal Length')
pl.xlabel('Sepal Length')
pl.ylabel('Sepal Width')
#display legend & graph
pl.legend()
pl.show()
###Output
_____no_output_____
###Markdown
5. Use seaborn Use the seaborn library to create a scatterplot matrix of all five variables.
###Code
import pandas as pd
names = ['Sepal Length', 'Sepal Width','Petal Length', 'Petal Width', 'Iris Class']
data = [sepalLength, sepalWidth, petalLength, petalWidth, classes]
# prepare data for seaborn to work
df = pd.DataFrame(dict(zip(names, data)))
df
# adapted from http://seaborn.pydata.org/examples/scatterplot_matrix.html
import seaborn as sns
%matplotlib inline
sns.set(style="ticks")
df = sns.load_dataset("iris")
sns.pairplot(df, hue="species")
###Output
_____no_output_____
###Markdown
6. Fit a line Fit a straight line to the variables petal length and petal width for the whole data set. Plot the data points in a scatter plot with the best fit line shown.
###Code
import matplotlib.pyplot as pl
m, c = np.polyfit( petalLength, petalWidth, 1)
#plotting best fit line
pl.scatter(petalLength, petalWidth, marker='.', label='Petal Measurements')
pl.plot(petalLength, m * petalLength + c, 'r', label='Best fit line')
pl.title('Petal Width vs Petal Length')
pl.xlabel('Petal Length')
pl.ylabel('Petal Width')
pl.legend()
pl.show()
###Output
_____no_output_____
###Markdown
7. Calculate the R-squared value Calculate the R-squared value for your line above.
###Code
#r-squared value of line using numpy
np.corrcoef(petalLength, petalWidth)[0][1]**2
###Output
_____no_output_____
###Markdown
8. Fit another line Use numpy to select only the data points where species is setosa. Fit a straight line to the variables petal length and petal width. Plot the data points in a scatter plot with the best fit line shown.
###Code
#joining arrays together
irisData = np.column_stack((sepalLength, sepalWidth, petalLength, petalWidth,classes))
# filter to data that have setosa class in column 4
filterSetosa = (irisData[np.in1d(irisData[:,4],'setosa')])
setosaTrans = filterSetosa.transpose()
# removing string with class value (setosa) from arrays and converting DType to float
setosaData = (np.delete(setosaTrans, (4), axis=0)).astype(np.float)
#using numpy to get best fit line
m, c = np.polyfit(setosaData[2],setosaData[3], 1)
#plotting best fit line
pl.scatter(setosaData[2], setosaData[3], marker='.', label='Setosa Petal Measurements')
pl.plot(setosaData[2], m * setosaData[2] + c, 'r', label='Best fit line')
pl.title('Petal Width vs Petal Length ')
pl.xlabel('Petal Length')
pl.ylabel('Petal Width')
pl.legend()
pl.show()
###Output
_____no_output_____
###Markdown
9. Calculate the R-squared value Calculate the R-squared value for your line above.
###Code
#r-squared value of line using numpy
np.corrcoef(setosaData[2], setosaData[3])[0][1]**2
###Output
_____no_output_____
###Markdown
10. Use gradient descent
###Code
#Partial derivitives where m is dynamic
def grad_m(x, y, m, c):
return -2.0 * np.sum(x * (y - m * x - c))
#Partial derivitives where c is dynamic
def grad_c(x, y, m , c):
return -2.0 * np.sum(y - m * x - c)
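# (Sketch, not part of the original solution.) A minimal gradient descent loop
# using the two partial derivatives above; the learning rate, starting point and
# iteration count are assumed values chosen only for illustration.
eta = 0.0001                 # learning rate (assumed)
m_gd, c_gd = 1.0, 1.0        # initial guesses for slope and intercept (assumed)
for _ in range(5000):
    step_m = grad_m(petalLength, petalWidth, m_gd, c_gd)
    step_c = grad_c(petalLength, petalWidth, m_gd, c_gd)
    m_gd, c_gd = m_gd - eta * step_m, c_gd - eta * step_c
print("Gradient descent estimates: m = {:.3f}, c = {:.3f}".format(m_gd, c_gd))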
###Output
_____no_output_____ |
Chapter 5/R Lab/5.3.3 k-Fold Cross-Validation.ipynb | ###Markdown
Preprocessing
###Code
# import relevant statistical packages
import numpy as np
import pandas as pd
# import relevant data visualisation packages
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
url = "/Users/arpanganguli/Documents/Professional/Finance/ISLR/Datasets/Auto.csv"
df = pd.read_csv(url)
df.head()
df.horsepower.dtype
df['hp'] = df.horsepower.astype(float)
df.head()
df.hp.dtype
###Output
_____no_output_____
###Markdown
*Okay cool!* k-Fold Cross-Validation
###Code
from sklearn.model_selection import KFold as KF
kf = KF(n_splits=10) # k = 10
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
MSE_all = pd.DataFrame()
from sklearn.preprocessing import PolynomialFeatures as PF
for i in range(1,11):
MSE = 0
X = df[['hp']]
X_ = pd.DataFrame(PF(i).fit_transform(X))
X_.drop(columns=0, inplace=True)
y = df[['mpg']]
for train_index, test_index in kf.split(X):
X_train, X_test = X_.iloc[train_index], X_.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
lmfit = LinearRegression().fit(X_train, y_train)
lmpred = lmfit.predict(X_test)
MSE += mean_squared_error(y_test, lmpred)
MSE_mean = MSE/10
MSE_all = MSE_all.append([MSE_mean])
MSE_all.columns = [['MSE']]
MSE_all.reset_index(drop=True, inplace=True)
round(MSE_all, 2)
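# (Alternative sketch, not part of the original lab.) For a given polynomial
# order, the same 10-fold MSE can be computed more compactly with
# cross_val_score, e.g.:
#
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(LinearRegression(), X_, y, cv=kf,
#                          scoring='neg_mean_squared_error')
# mse = -scores.mean()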
plt.xkcd()
plt.figure(figsize = (25, 10))
plt.plot(MSE_all, color='green', marker='o', linestyle='dashed',
linewidth=2, markersize=12, markerfacecolor = 'orange')
plt.title("MSE vs order of regression")
plt.xlabel("order")
plt.ylabel("MSE")
###Output
_____no_output_____ |
4.Recurrent Networks/1_intro-to-rnns/1_Anna_KaRNNa_Solution.ipynb | ###Markdown
Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
###Code
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
###Code
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text)) # ['\n', ' ', '!', '"', '$', '%',
vocab_to_int = {c: i for i, c in enumerate(vocab)} #{'w': 79, '1': 16, '7': 22, 't': 76, 'O'
int_to_vocab = dict(enumerate(vocab)) #{0: '\n', 1: ' ', 2: '!', 3: '"',
print("text=n", text[:100])
print("vocab=\n", vocab)
print("vocab_to_int=\n", vocab_to_int)
print("int_to_vocab=\n", int_to_vocab)
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) #array([31, 64, 57, 72,
###Output
text=n Chapter 1
Happy families are all alike; every unhappy family is unhappy in its own
way.
Everythin
vocab=
['\n', ' ', '!', '"', '$', '%', '&', "'", '(', ')', '*', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '?', '@', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
vocab_to_int=
{'\n': 0, ' ': 1, '!': 2, '"': 3, '$': 4, '%': 5, '&': 6, "'": 7, '(': 8, ')': 9, '*': 10, ',': 11, '-': 12, '.': 13, '/': 14, '0': 15, '1': 16, '2': 17, '3': 18, '4': 19, '5': 20, '6': 21, '7': 22, '8': 23, '9': 24, ':': 25, ';': 26, '?': 27, '@': 28, 'A': 29, 'B': 30, 'C': 31, 'D': 32, 'E': 33, 'F': 34, 'G': 35, 'H': 36, 'I': 37, 'J': 38, 'K': 39, 'L': 40, 'M': 41, 'N': 42, 'O': 43, 'P': 44, 'Q': 45, 'R': 46, 'S': 47, 'T': 48, 'U': 49, 'V': 50, 'W': 51, 'X': 52, 'Y': 53, 'Z': 54, '_': 55, '`': 56, 'a': 57, 'b': 58, 'c': 59, 'd': 60, 'e': 61, 'f': 62, 'g': 63, 'h': 64, 'i': 65, 'j': 66, 'k': 67, 'l': 68, 'm': 69, 'n': 70, 'o': 71, 'p': 72, 'q': 73, 'r': 74, 's': 75, 't': 76, 'u': 77, 'v': 78, 'w': 79, 'x': 80, 'y': 81, 'z': 82}
int_to_vocab=
{0: '\n', 1: ' ', 2: '!', 3: '"', 4: '$', 5: '%', 6: '&', 7: "'", 8: '(', 9: ')', 10: '*', 11: ',', 12: '-', 13: '.', 14: '/', 15: '0', 16: '1', 17: '2', 18: '3', 19: '4', 20: '5', 21: '6', 22: '7', 23: '8', 24: '9', 25: ':', 26: ';', 27: '?', 28: '@', 29: 'A', 30: 'B', 31: 'C', 32: 'D', 33: 'E', 34: 'F', 35: 'G', 36: 'H', 37: 'I', 38: 'J', 39: 'K', 40: 'L', 41: 'M', 42: 'N', 43: 'O', 44: 'P', 45: 'Q', 46: 'R', 47: 'S', 48: 'T', 49: 'U', 50: 'V', 51: 'W', 52: 'X', 53: 'Y', 54: 'Z', 55: '_', 56: '`', 57: 'a', 58: 'b', 59: 'c', 60: 'd', 61: 'e', 62: 'f', 63: 'g', 64: 'h', 65: 'i', 66: 'j', 67: 'k', 68: 'l', 69: 'm', 70: 'n', 71: 'o', 72: 'p', 73: 'q', 74: 'r', 75: 's', 76: 't', 77: 'u', 78: 'v', 79: 'w', 80: 'x', 81: 'y', 82: 'z'}
###Markdown
Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
###Code
text[:100]
###Output
_____no_output_____
###Markdown
And we can see the characters encoded as integers.
###Code
encoded[:100]
###Output
_____no_output_____
###Markdown
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
###Code
len(vocab)
###Output
_____no_output_____
###Markdown
Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: We start with our text encoded as integers in one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$. After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (`batch_size` below), let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.
###Code
def get_batches(arr, batch_size, n_steps):
'''Create a generator that returns batches of size
batch_size x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
chars_per_batch = batch_size * n_steps
n_batches = len(arr)//chars_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * chars_per_batch]
    # Reshape into batch_size rows
    arr = arr.reshape((batch_size, -1))
    
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y_temp = arr[:, n+1:n+n_steps+1]
        
        # For the very last batch, y will be one character short at the end of
        # the sequences which breaks things. To get around this, I'll make an
        # array of the appropriate size first, of all zeros, then add the targets.
        # This will introduce a small artifact in the last batch, but it won't matter.
        y = np.zeros(x.shape, dtype=x.dtype)
        y[:,:y_temp.shape[1]] = y_temp
        
        yield x, y
###Output
_____no_output_____
###Markdown
Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
###Code
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
###Output
x
[[31 64 57 72 76 61 74 1 16 0]
[ 1 57 69 1 70 71 76 1 63 71]
[78 65 70 13 0 0 3 53 61 75]
[70 1 60 77 74 65 70 63 1 64]
[ 1 65 76 1 65 75 11 1 75 65]
[ 1 37 76 1 79 57 75 0 71 70]
[64 61 70 1 59 71 69 61 1 62]
[26 1 58 77 76 1 70 71 79 1]
[76 1 65 75 70 7 76 13 1 48]
[ 1 75 57 65 60 1 76 71 1 64]]
y
[[64 57 72 76 61 74 1 16 0 0]
[57 69 1 70 71 76 1 63 71 65]
[65 70 13 0 0 3 53 61 75 11]
[ 1 60 77 74 65 70 63 1 64 65]
[65 76 1 65 75 11 1 75 65 74]
[37 76 1 79 57 75 0 71 70 68]
[61 70 1 59 71 69 61 1 62 71]
[ 1 58 77 76 1 70 71 79 1 75]
[ 1 65 75 70 7 76 13 1 48 64]
[75 57 65 60 1 76 71 1 64 61]]
###Markdown
If you implemented `get_batches` correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
 [ 5 69 1 5 12 52 6 5 56 52]
 [48 29 12 61 35 35 8 64 76 78]
 [12 5 24 39 45 29 12 56 5 63]
 [ 5 29 6 5 29 78 28 5 78 29]
 [ 5 13 6 5 36 69 78 35 52 12]
 [63 76 12 5 18 52 1 76 5 58]
 [34 5 73 39 6 5 12 52 36 5]
 [ 6 5 29 78 12 79 6 61 5 59]
 [ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
 [69 1 5 12 52 6 5 56 52 29]
 [29 12 61 35 35 8 64 76 78 28]
 [ 5 24 39 45 29 12 56 5 63 29]
 [29 6 5 29 78 28 5 78 29 45]
 [13 6 5 36 69 78 35 52 12 43]
 [76 12 5 18 52 1 76 5 58 52]
 [ 5 73 39 6 5 12 52 36 5 78]
 [ 5 29 78 12 79 6 61 5 59 63]
 [78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`.
###Code
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
###Output
_____no_output_____
###Markdown
LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well because this will create a list of the same `cell` object. However, TensorFlow 1.0 will create different weight matrices for all `cell` objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell. We also need to create an initial cell state of all zeros. This can be done like so
```python
initial_state = cell.zero_state(batch_size, tf.float32)
```
Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.
###Code
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
###Output
_____no_output_____
###Markdown
RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)` because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
###Code
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
###Output
_____no_output_____
###Markdown
Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$. Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.
###Code
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
###Output
_____no_output_____
###Markdown
Optimizer Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
###Code
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
###Output
_____no_output_____
###Markdown
Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
###Code
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
###Output
_____no_output_____
###Markdown
Hyperparameters Here I'm defining the hyperparameters for the network. * `batch_size` - Number of sequences running through the network in one pass. * `num_steps` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. * `lstm_size` - The number of units in the hidden layers. * `num_layers` - Number of hidden LSTM layers to use * `learning_rate` - Learning rate for training * `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).> Tips and Tricks> Monitoring Validation Loss vs. Training Loss>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)> Approximate number of parameters> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:> - The number of parameters in your model. This is printed when you start training.> - The size of your dataset. 1MB file is approximately 1 million characters.>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.> Best models strategy>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.>By the way, the size of your training and validation splits are also parameters.
Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
###Code
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
###Output
_____no_output_____
###Markdown
Time for trainingThis is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I save a checkpoint.Here I'm saving checkpoints with the format`i{iteration number}_l{ hidden layer units}.ckpt`
###Code
epochs = 20
# Print losses every N iterations
print_every_n = 50
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
if (counter % print_every_n == 0):
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
###Output
Epoch: 1/20... Training Step: 50... Training loss: 3.1686... 0.3132 sec/batch
Epoch: 1/20... Training Step: 100... Training loss: 3.0933... 0.3199 sec/batch
Epoch: 1/20... Training Step: 150... Training loss: 2.9019... 0.3161 sec/batch
Epoch: 2/20... Training Step: 200... Training loss: 2.5019... 0.3149 sec/batch
Epoch: 2/20... Training Step: 250... Training loss: 2.3809... 0.3180 sec/batch
Epoch: 2/20... Training Step: 300... Training loss: 2.2646... 0.3159 sec/batch
Epoch: 2/20... Training Step: 350... Training loss: 2.2072... 0.3177 sec/batch
Epoch: 3/20... Training Step: 400... Training loss: 2.0870... 0.3151 sec/batch
Epoch: 3/20... Training Step: 450... Training loss: 2.0180... 0.3168 sec/batch
Epoch: 3/20... Training Step: 500... Training loss: 1.9367... 0.3173 sec/batch
Epoch: 3/20... Training Step: 550... Training loss: 1.9202... 0.3188 sec/batch
Epoch: 4/20... Training Step: 600... Training loss: 1.8128... 0.3140 sec/batch
Epoch: 4/20... Training Step: 650... Training loss: 1.8220... 0.3174 sec/batch
Epoch: 4/20... Training Step: 700... Training loss: 1.7590... 0.3193 sec/batch
Epoch: 4/20... Training Step: 750... Training loss: 1.7367... 0.3142 sec/batch
Epoch: 5/20... Training Step: 800... Training loss: 1.6989... 0.3155 sec/batch
Epoch: 5/20... Training Step: 850... Training loss: 1.6590... 0.3169 sec/batch
Epoch: 5/20... Training Step: 900... Training loss: 1.6392... 0.3189 sec/batch
Epoch: 5/20... Training Step: 950... Training loss: 1.6240... 0.3181 sec/batch
Epoch: 6/20... Training Step: 1000... Training loss: 1.5910... 0.3162 sec/batch
Epoch: 6/20... Training Step: 1050... Training loss: 1.6140... 0.3190 sec/batch
Epoch: 6/20... Training Step: 1100... Training loss: 1.5685... 0.3173 sec/batch
Epoch: 6/20... Training Step: 1150... Training loss: 1.5593... 0.3159 sec/batch
Epoch: 7/20... Training Step: 1200... Training loss: 1.5137... 0.3158 sec/batch
Epoch: 7/20... Training Step: 1250... Training loss: 1.5671... 0.3203 sec/batch
Epoch: 7/20... Training Step: 1300... Training loss: 1.4726... 0.3183 sec/batch
Epoch: 7/20... Training Step: 1350... Training loss: 1.4844... 0.3144 sec/batch
Epoch: 8/20... Training Step: 1400... Training loss: 1.4874... 0.3162 sec/batch
Epoch: 8/20... Training Step: 1450... Training loss: 1.4666... 0.3163 sec/batch
Epoch: 8/20... Training Step: 1500... Training loss: 1.4106... 0.3171 sec/batch
Epoch: 8/20... Training Step: 1550... Training loss: 1.4141... 0.3170 sec/batch
Epoch: 9/20... Training Step: 1600... Training loss: 1.3664... 0.3165 sec/batch
Epoch: 9/20... Training Step: 1650... Training loss: 1.3850... 0.3173 sec/batch
Epoch: 9/20... Training Step: 1700... Training loss: 1.3349... 0.3161 sec/batch
Epoch: 9/20... Training Step: 1750... Training loss: 1.3577... 0.3163 sec/batch
Epoch: 10/20... Training Step: 1800... Training loss: 1.3773... 0.3165 sec/batch
Epoch: 10/20... Training Step: 1850... Training loss: 1.3250... 0.3174 sec/batch
Epoch: 10/20... Training Step: 1900... Training loss: 1.3453... 0.3172 sec/batch
Epoch: 10/20... Training Step: 1950... Training loss: 1.4028... 0.3149 sec/batch
Epoch: 11/20... Training Step: 2000... Training loss: 1.3505... 0.3156 sec/batch
Epoch: 11/20... Training Step: 2050... Training loss: 1.3034... 0.3175 sec/batch
Epoch: 11/20... Training Step: 2100... Training loss: 1.2971... 0.3171 sec/batch
Epoch: 11/20... Training Step: 2150... Training loss: 1.3103... 0.3176 sec/batch
Epoch: 12/20... Training Step: 2200... Training loss: 1.3189... 0.3151 sec/batch
Epoch: 12/20... Training Step: 2250... Training loss: 1.3085... 0.3158 sec/batch
Epoch: 12/20... Training Step: 2300... Training loss: 1.2337... 0.3188 sec/batch
Epoch: 12/20... Training Step: 2350... Training loss: 1.2579... 0.3168 sec/batch
Epoch: 13/20... Training Step: 2400... Training loss: 1.2815... 0.3148 sec/batch
Epoch: 13/20... Training Step: 2450... Training loss: 1.2543... 0.3154 sec/batch
Epoch: 13/20... Training Step: 2500... Training loss: 1.2573... 0.3167 sec/batch
Epoch: 13/20... Training Step: 2550... Training loss: 1.2494... 0.3187 sec/batch
Epoch: 14/20... Training Step: 2600... Training loss: 1.2134... 0.3164 sec/batch
Epoch: 14/20... Training Step: 2650... Training loss: 1.2625... 0.3144 sec/batch
Epoch: 14/20... Training Step: 2700... Training loss: 1.2117... 0.3182 sec/batch
Epoch: 14/20... Training Step: 2750... Training loss: 1.2203... 0.3171 sec/batch
Epoch: 15/20... Training Step: 2800... Training loss: 1.2614... 0.3171 sec/batch
Epoch: 15/20... Training Step: 2850... Training loss: 1.2340... 0.3163 sec/batch
Epoch: 15/20... Training Step: 2900... Training loss: 1.2196... 0.3141 sec/batch
Epoch: 15/20... Training Step: 2950... Training loss: 1.2600... 0.3168 sec/batch
Epoch: 16/20... Training Step: 3000... Training loss: 1.2256... 0.3181 sec/batch
Epoch: 16/20... Training Step: 3050... Training loss: 1.2201... 0.3201 sec/batch
Epoch: 16/20... Training Step: 3100... Training loss: 1.1738... 0.3175 sec/batch
Epoch: 16/20... Training Step: 3150... Training loss: 1.1802... 0.3159 sec/batch
Epoch: 17/20... Training Step: 3200... Training loss: 1.1861... 0.3206 sec/batch
Epoch: 17/20... Training Step: 3250... Training loss: 1.2088... 0.3165 sec/batch
Epoch: 17/20... Training Step: 3300... Training loss: 1.1759... 0.3157 sec/batch
Epoch: 17/20... Training Step: 3350... Training loss: 1.2027... 0.3144 sec/batch
Epoch: 18/20... Training Step: 3400... Training loss: 1.1978... 0.3149 sec/batch
Epoch: 18/20... Training Step: 3450... Training loss: 1.2055... 0.3166 sec/batch
Epoch: 18/20... Training Step: 3500... Training loss: 1.1891... 0.3193 sec/batch
Epoch: 18/20... Training Step: 3550... Training loss: 1.1717... 0.3166 sec/batch
Epoch: 19/20... Training Step: 3600... Training loss: 1.1833... 0.3178 sec/batch
Epoch: 19/20... Training Step: 3650... Training loss: 1.1801... 0.3177 sec/batch
Epoch: 19/20... Training Step: 3700... Training loss: 1.1806... 0.3163 sec/batch
Epoch: 19/20... Training Step: 3750... Training loss: 1.1655... 0.3177 sec/batch
Epoch: 20/20... Training Step: 3800... Training loss: 1.1383... 0.3179 sec/batch
Epoch: 20/20... Training Step: 3850... Training loss: 1.1513... 0.3147 sec/batch
Epoch: 20/20... Training Step: 3900... Training loss: 1.1932... 0.3188 sec/batch
Epoch: 20/20... Training Step: 3950... Training loss: 1.1526... 0.3194 sec/batch
###Markdown
Saved checkpointsRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
###Code
tf.train.get_checkpoint_state('checkpoints')
###Output
_____no_output_____
###Markdown
SamplingNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
###Code
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
###Output
_____no_output_____
###Markdown
Here, pass in the path to a checkpoint and sample from the network.
###Code
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
###Output
INFO:tensorflow:Restoring parameters from checkpoints/i1200_l512.ckpt
Farling his, which with a serval to a more somated
he drown to the himself with him, the mistres, and what when
it, so still at the whele of the moment, this howed the
peasant, when he was a getteran with a conversation and, trough and
callanted it, and the chail
as some a strong would be any sore and her hand,
that he was to this to at that has so say the mading as
shall, the mare, and a satharal of his finter that she would see him at a semf in
some off his she saw, to the same as a mone and troughts, and as a sorted
and any to the stains of a cheeticully. "That would be a sume tanking an the sone of the
precoses."
"You coll not to to this, I should have a contrasing at it. He saw her, and to dreas a she will that he will be a little. Wanted a mar of hard
to
have meen," he said, the chishing of simple of the hore
of the sather, the peasant with the plear stated at her
forter her hand that is to the some one true there of her hand.
Chapter
13
"Yes, yis nut all the cleck that he seou
|
example/generator/load-generator.ipynb | ###Markdown
Text augmentationLet's say you have a very limited labelled corpus, and you want to add more, but labelling is very costly. So, text augmentation! You can use word2vec to replace words with similar semantics!
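The mechanics behind this kind of augmentation are simple: walk over the tokens and, with some probability, swap a token for one of its nearest neighbours in the embedding space. The sketch below only illustrates that idea, not Malaya's actual implementation; `top_n_similar` is a placeholder for whatever nearest-neighbour lookup your word vectors provide.
```python
import random

def naive_w2v_augment(sentence, top_n_similar, threshold=0.5, n_variants=3):
    # top_n_similar(word) -> list of semantically similar words (placeholder).
    tokens = sentence.split()
    variants = []
    for _ in range(n_variants):
        new_tokens = []
        for tok in tokens:
            candidates = top_n_similar(tok)
            # Keep the original token with probability `threshold`.
            if candidates and random.random() > threshold:
                new_tokens.append(random.choice(candidates))
            else:
                new_tokens.append(tok)
        variants.append(' '.join(new_tokens))
    return variants
```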
###Code
string = 'saya suka makan ayam dan ikan'
embedded_wiki = malaya.word2vec.load_wiki()
word_vector_wiki = malaya.word2vec.word2vec(embedded_wiki['nce_weights'],
embedded_wiki['dictionary'])
augmented = malaya.generator.w2v_augmentation(string,
word_vector_wiki,
soft=True,
augment_counts=3)
augmented
###Output
_____no_output_____
###Markdown
Let's compare the word mover distance with the original.
###Code
malaya.word_mover.distance(string.split(), augmented[0].split(), word_vector_wiki)
malaya.word_mover.distance(string.split(), augmented[1].split(), word_vector_wiki)
malaya.word_mover.distance(string.split(), augmented[2].split(), word_vector_wiki)
###Output
_____no_output_____
###Markdown
They are pretty good in terms of sentence similarity! **A distance ratio higher than 2 is assumed to be bad**.
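That rule of thumb is easy to turn into a filter when you generate many candidates. A minimal sketch, assuming `string`, `augmented` and `word_vector_wiki` from the cells above:
```python
def filter_augmented(original, candidates, wordvector, max_distance=2.0):
    # Keep only augmented sentences whose word mover distance to the original
    # stays below the cutoff suggested above.
    kept = []
    for cand in candidates:
        d = malaya.word_mover.distance(original.split(), cand.split(), wordvector)
        if d <= max_distance:
            kept.append((d, cand))
    return [cand for _, cand in sorted(kept)]

# filtered = filter_augmented(string, augmented, word_vector_wiki)
```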
###Code
augmented = malaya.generator.w2v_augmentation('kerajaan sebenarnya sangat sayangkan rakyatnya',
word_vector_wiki,
soft=True,
augment_counts=5)
augmented
bahdanau_entities = malaya.entity.deep_model('bahdanau')
bahdanau_pos = malaya.pos.deep_model('bahdanau')
string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'
result_entities = bahdanau_entities.predict(string)
result_pos = bahdanau_pos.predict(string)
###Output
_____no_output_____
###Markdown
Generate ngram sentences
###Code
malaya.generator.sentence_ngram(string, ngram = (3, 5))
###Output
_____no_output_____
###Markdown
Generate ngram sentences for selected POS and Entities
###Code
generated_grams = malaya.generator.pos_entities_ngram(
result_pos,
result_entities,
ngram = (1, 3),
accept_pos = ['NOUN', 'PROPN', 'VERB'],
accept_entities = ['law', 'location', 'organization', 'person', 'time'],
)
generated_grams
###Output
_____no_output_____
###Markdown
Wordvector augmentationLet's say you have a very limited labelled corpus, and you want to add more, but labelling is very costly. So, text augmentation! You can use a wordvector to replace words with similar semantics!```pythondef wordvector_augmentation( string, wordvector, threshold = 0.5, top_n = 5, soft = False, cleaning_function = None,): """ augmenting a string using wordvector. Parameters ---------- string: str wordvector: object wordvector interface object. threshold: float, optional (default=0.5) random selection for a word. soft: bool, optional (default=False) if True, a word not in the dictionary will be replaced with nearest jarowrinkler ratio. if False, it will throw an exception if a word not in the dictionary. top_n: int, (default=5) number of nearest neighbors returned. cleaning_function: function, (default=None) function to clean text. Returns ------- result: list """```
###Code
string = 'saya suka makan ayam dan ikan'
vocab_wiki, embedded_wiki = malaya.wordvector.load_wiki()
word_vector_wiki = malaya.wordvector.load(embedded_wiki, vocab_wiki)
augmented = malaya.generator.wordvector_augmentation(string,
word_vector_wiki,
soft=True)
augmented
text = 'Perdana Menteri berkata, beliau perlu memperoleh maklumat terperinci berhubung isu berkenaan sebelum kerajaan dapat mengambil sebarang tindakan lanjut. Bagaimanapun, beliau yakin masalah itu dapat diselesaikan dan pentadbiran kerajaan boleh berfungsi dengan baik.'
augmented = malaya.generator.wordvector_augmentation(text,
word_vector_wiki,
soft=True)
augmented
###Output
_____no_output_____
###Markdown
Transformer augmentationThe problem with wordvector augmentation is that it just replaces a word with a near synonym without understanding the whole sentence context, so Transformer comes to the rescue!```pythondef transformer_augmentation( string, model, threshold = 0.5, top_p = 0.8, top_k = 100, temperature = 0.8, top_n = 5, cleaning_function = None,): """ augmenting a string using transformer + nucleus sampling / top-k sampling. Parameters ---------- string: str model: object transformer interface object. Right now only supported BERT. threshold: float, optional (default=0.5) random selection for a word. top_p: float, optional (default=0.8) cumulative sum of probabilities to sample a word. If top_n bigger than 0, the model will use nucleus sampling, else top-k sampling. top_k: int, optional (default=100) k for top-k sampling. temperature: float, optional (default=0.8) logits * temperature. top_n: int, (default=5) number of nearest neighbors returned. cleaning_function: function, (default=None) function to clean text. Returns ------- result: list """```
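For intuition about the `top_p` parameter: nucleus (top-p) sampling keeps only the smallest set of tokens whose probabilities sum to at least `top_p`, then renormalises before sampling, whereas top-k keeps a fixed number of tokens. A small, library-independent sketch of the top-p filter (illustrative only, not Malaya's internal code):
```python
import numpy as np

def nucleus_filter(probs, top_p=0.8):
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    filtered = np.zeros_like(probs, dtype=float)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
# np.random.choice(len(probs), p=nucleus_filter(probs, top_p=0.8))
```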
###Code
model = malaya.transformer.load(model = 'bert', size = 'small')
augmented = malaya.generator.transformer_augmentation(text, model)
augmented
###Output
_____no_output_____
###Markdown
Base size gives much better context! But beware, the model is quite big.
###Code
model = malaya.transformer.load(model = 'bert', size = 'base')
augmented = malaya.generator.transformer_augmentation(text, model)
augmented
###Output
_____no_output_____
###Markdown
GPT2Malaya provides a pretrained GPT2 model specific to Malay; we call it GPT2-Bahasa. This interface does not allow us to do custom training.GPT2-Bahasa was pretrained on ~0.9 billion words, and below is the list of datasets we trained on,1. [dumping wikipedia (222MB)](https://github.com/huseinzol05/Malaya-Datasetwikipedia-1).2. [local news (257MB)](https://github.com/huseinzol05/Malaya-Datasetpublic-news).3. [local parliament text (45MB)](https://github.com/huseinzol05/Malaya-Datasetparliament).4. [IIUM Confession (74MB)](https://github.com/huseinzol05/Malaya-Datasetiium-confession).5. [Wattpad (74MB)](https://github.com/huseinzol05/Malaya-Datasetwattpad).6. [Academia PDF (42MB)](https://github.com/huseinzol05/Malaya-Datasetacademia-pdf).7. [Common-Crawl (3GB)](https://github.com/huseinzol05/malaya-datasetcommon-crawl). If you want to download the pretrained GPT2-Bahasa model and use it for custom transfer-learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/gpt2, with some notebooks to help you get started.**Here we hope these models are not used to finetune for spreading fake news**. Or you can simply use [Transformers](https://huggingface.co/models?filter=malay&search=gpt2) to try the GPT2-Bahasa models from Malaya; simply check the available models here, https://huggingface.co/models?filter=malay&search=gpt2
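Since the paragraph above points to the Hugging Face model hub, the same GPT2-Bahasa checkpoints can in principle also be loaded through the `transformers` library rather than Malaya's own interface. This is only a sketch: the model id below is a placeholder, so look up the real repository name at the hub link above.
```python
# Sketch only -- MODEL_ID is a placeholder, not a verified model id.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = 'some-org/gpt2-bahasa'  # replace with an actual id from the hub search above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer('ceritanya sebegini, aku bangun pagi ', return_tensors='pt')
outputs = model.generate(**inputs, do_sample=True, top_k=40, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```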
###Code
from IPython.core.display import Image, display
display(Image('gpt2.png', width=500))
###Output
_____no_output_____
###Markdown
load modelGPT2-Bahasa is only available in `117M` and `345M` sizes.1. `117M` is around 442MB.2. `345M` is around 1.2GB.
###Code
model = malaya.generator.gpt2(model = '117M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
print(model.generate(string))
model = malaya.generator.gpt2(model = '345M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
print(model.generate(string))
###Output
ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, omputeh-uteh cerita lama-lama, seram tak boleh bayang
Sebelum kejadian, dalam 2 jam aku buat panggilan polis , lepas tu kira la sendiri nak ke lokasi.
Tengok cerita lama..
Sekarang ni, apa yang aku lalui, kita yang jaga diri, kita yang jaga kesihatan dan juga kita yang jaga minda dalam hidup.
Maka, inilah jalan penyelesaian terbaiknya.
Jangan lupakan manusia
Orang yang paling ditakuti untuk berjaya dalam hidup, tidak akan jumpa yang tersayang!
Jangan rosakkan masa depannya, ingatlah apa yang kita nak buat, walaupun pahit untuk ditelan.
Jangan lupakan orang lain - masa depan mereka.
Jangan lupakan orang - masa itulah kita yang lebih dicintai.
Jangan lupakan orang - orang yang kita sayang, mereka bukan orang yang tersayang!
Jangan lupakan orang - orang yang kita cinta, mereka cinta pada kita.
Jangan lupakan diri - diri kita - yang kita punya, yang kita tinggal adalah masa lalu kita.
Jangan lupakan orang lain - orang yang kita cinta, lebih indah dari masa lalu kita.
Jangan lupakan semua orang - orang yang tinggal ataupun hidup.
Jangan cuba lupakan diri kita - kerja keras dan selalu ada masa depan kita.
Jangan pernah putus rasa - kecewa kerana kita telah banyak berubah.
Jangan pernah putus putus asa kerana kita
###Markdown
Wordvector augmentationLet's say you have a very limited labelled corpus, and you want to add more, but labelling is very costly. So, text augmentation! You can use a wordvector to replace words with similar semantics!```pythondef wordvector_augmentation( string, wordvector, threshold = 0.5, top_n = 5, soft = False, cleaning_function = None,): """ augmenting a string using wordvector. Parameters ---------- string: str wordvector: object wordvector interface object. threshold: float, optional (default=0.5) random selection for a word. soft: bool, optional (default=False) if True, a word not in the dictionary will be replaced with nearest jarowrinkler ratio. if False, it will throw an exception if a word not in the dictionary. top_n: int, (default=5) number of nearest neighbors returned. cleaning_function: function, (default=None) function to clean text. Returns ------- result: list """```
###Code
string = 'saya suka makan ayam dan ikan'
vocab_wiki, embedded_wiki = malaya.wordvector.load_wiki()
word_vector_wiki = malaya.wordvector.load(embedded_wiki, vocab_wiki)
augmented = malaya.generator.wordvector_augmentation(string,
word_vector_wiki,
soft=True)
augmented
text = 'Perdana Menteri berkata, beliau perlu memperoleh maklumat terperinci berhubung isu berkenaan sebelum kerajaan dapat mengambil sebarang tindakan lanjut. Bagaimanapun, beliau yakin masalah itu dapat diselesaikan dan pentadbiran kerajaan boleh berfungsi dengan baik.'
augmented = malaya.generator.wordvector_augmentation(text,
word_vector_wiki,
soft=True)
augmented
###Output
_____no_output_____
###Markdown
Transformer augmentationThe problem with wordvector augmentation is that it just replaces a word with a near synonym without understanding the whole sentence context, so Transformer comes to the rescue!```pythondef transformer_augmentation( string, model, threshold = 0.5, top_p = 0.8, top_k = 100, temperature = 0.8, top_n = 5, cleaning_function = None,): """ augmenting a string using transformer + nucleus sampling / top-k sampling. Parameters ---------- string: str model: object transformer interface object. Right now only supported BERT. threshold: float, optional (default=0.5) random selection for a word. top_p: float, optional (default=0.8) cumulative sum of probabilities to sample a word. If top_n bigger than 0, the model will use nucleus sampling, else top-k sampling. top_k: int, optional (default=100) k for top-k sampling. temperature: float, optional (default=0.8) logits * temperature. top_n: int, (default=5) number of nearest neighbors returned. cleaning_function: function, (default=None) function to clean text. Returns ------- result: list """```
###Code
model = malaya.transformer.load(model = 'albert')
augmented = malaya.generator.transformer_augmentation(text, model)
augmented
###Output
_____no_output_____
###Markdown
ngramsYou can generate ngrams pretty easily using this interface,```pythondef ngrams( sequence, n: int, pad_left = False, pad_right = False, left_pad_symbol = None, right_pad_symbol = None,): """ generate ngrams. Parameters ---------- sequence : List[str] list of tokenize words. n : int ngram size Returns ------- ngram: list """```
###Code
string = 'saya suka makan ayam'
list(malaya.generator.ngrams(string.split(), n = 2))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START'))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START', right_pad_symbol = 'END'))
###Output
_____no_output_____
###Markdown
List available T5 Model
###Code
malaya.generator.available_t5()
###Output
_____no_output_____
###Markdown
Load T5T5 in Malaya is quite unique: most of the text generative models we found on the internet, like GPT2 or Markov, simply continue a prefix input from the user, but not T5 Malaya. We want to generate an article or karangan (essay), like in high school, when the user gives the 'isi penting' (key points).```pythondef t5(model: str = 'base', **kwargs): """ Load T5 model to generate a string given a isu penting. Parameters ---------- model : str, optional (default='base') Model architecture supported. Allowed values: * ``'base'`` - T5 Base parameters. * ``'small'`` - T5 Small parameters. Returns ------- result: malaya.model.t5.GENERATOR class """```
###Code
model = malaya.generator.t5()
isi_penting = ['Dr M perlu dikekalkan sebagai perdana menteri',
'Muhyiddin perlulah menolong Dr M',
'rakyat perlu menolong Muhyiddin']
###Output
_____no_output_____
###Markdown
I just want to test the model given this isi penting, because we all know, Dr M and Muhyiddin are not supporting each other in the real world. generate`model.generate` accepts a list of strings.```pythondef generate(self, strings: List[str]): """ generate a long text given a isi penting. Parameters ---------- strings: List[str] Returns ------- result: str """```
###Code
pprint(model.generate(isi_penting))
###Output
(': Presiden Bersatu, Tan Sri Muhyiddin Yassin perlu mengekalkan Tun Dr '
'Mahathir Mohamad sebagai perdana menteri berbanding Datuk Seri Anwar Ibrahim '
'yang hanya minta bantuan untuk menyelesaikan kemelut kedudukan '
'negara.Muhyiddin berkata, ini kerana semua pihak tahu masalah yang dihadapi '
'oleh Perdana Menteri adalah di luar bidang kuasa beliau sendiri.Katanya, '
'Muhyiddin perlu membantu beliau kerana beliau percaya rakyat Malaysia tahu '
'apa yang berlaku di luar bidang kuasa beliau."Apa yang berlaku di luar '
'bidang kuasa Dr Mahathir... semua tahu bahawa ini berlaku di bawah '
'kepimpinan Anwar."Muhyiddin dan seluruh rakyat yang tahu apa yang berlaku di '
'Johor."Ini kerana di Johor ini, majoriti menteri-menteri dalam Pakatan '
'Harapan banyak sangat ketua-ketua parti."Jadi Muhyiddin perlu bantu Dr '
'Mahathir sebab rakyat tahu apa yang berlaku di Johor Bahru," katanya dalam '
'satu kenyataan di sini, pada Jumaat.Dalam pada itu, Muhyiddin berkata, '
'rakyat juga perlu menolong Muhyiddin untuk menyelesaikan masalah yang '
'melanda negara ketika ini.Menurutnya, Muhyiddin perlu menggalas tugas dengan '
'baik dan memastikan keadaan negara berada dalam keadaan baik.')
###Markdown
Pretty good!
###Code
isi_penting = ['Neelofa tetap dengan keputusan untuk berkahwin akhir tahun ini',
'Long Tiger sanggup membantu Neelofa',
'Tiba-tiba Long Tiger bergaduh dengan Husein']
###Output
_____no_output_____
###Markdown
We can also give any isi penting, even one that does not make any sense.
###Code
pprint(model.generate(isi_penting))
###Output
('Kuala Lumpur: Pelakon, Neelofa tetap dengan keputusan dibuat untuk berkahwin '
'penutup tahun ini, selepas mengadakan pertemuan dengan Long Tiger. Neelofa '
'atau nama sebenarnya, Mohd Neelofa Ahmad Noor berkata, dia tidak pernah '
'merancang untuk berkahwin, namun menegaskan dirinya lebih mengutamakan masa '
'depan. "Saya seronok bersama keluarga. Kalau kami berkahwin awal tahun ini, '
'ia mengambil masa yang lama. Itu impian saya tetapi biarlah, selepas setahun '
'saya berehat, saya akan mula bekerja. "Jadi, apabila sering sesi pertemuan '
'dengan Long Tiger, saya kena tegas mengenai perkara ini. Bukan soal nak '
'memalukan diri sendiri tetapi siapa yang boleh menghentam saya," katanya '
'kepada Bh Online. Dalam sesi pertemuan itu, Neelofa yang juga pengacara '
'acara Top 5, bergaduh dengan Husein, dalam pergaduhan yang berlaku di '
'Kompleks Mahkamah Tinggi Syariah di sini, baru-baru ini. Ditanya mengenai '
'hubungannya dengan wanita itu, Neelofa berkata, mereka masih belum '
'menyelesaikan perkara itu dengan baik. "Saya tidak tahu pasal semua ini, '
'tetapi ia akan diselesaikan menerusi cara baik. Tidak kiralah apa yang kami '
'tidak cakap pun. "Pada mulanya kami hanya mahu membebaskan mereka daripada '
'sebarang isu, namun selepas beberapa hari bergaduh, kami akhirnya mengambil '
'keputusan untuk berkahwin dengan Hadiza Aziz. "Jika mereka mahu, kami akan '
'membendung, namun pada masa yang sama, kami tidak mahu bergaduh dengan '
'lelaki yang digelar Long Tiger," katanya.')
###Markdown
How about a karangan like in high school?
###Code
# http://mieadham86.blogspot.com/2016/09/isi-isi-penting-karangan-bahasa-melayu.html
# KEBAIKAN AMALAN BERGOTONG-ROYONG
isi_penting = ['Dapat memupuk semangat kerjasama',
'Dapat mengeratkan hubungan silaturahim.',
'Kebersihan kawasan persekitaran terpelihara.',
'Terhindar daripada wabak penyakit seperti Denggi',
'Mengisi masa lapang',
'Menerapkan nilai-nilai murni dalam kehidupan']
pprint(model.generate(isi_penting))
# http://mieadham86.blogspot.com/2016/09/isi-isi-penting-karangan-bahasa-melayu.html
# CARA MENJADI MURID CEMERLANG
isi_penting = ['Rajin berusaha – tidak mudah putus asa',
'Menghormati orang yang lebih tua – mendapat keberkatan',
'Melibatkan diri secara aktif dalam bidang kokurikulum',
'Memberi tumpuan ketika guru mengajar.',
'Berdisiplin – menepati jadual yang disediakan.',
'Bercita-cita tinggi – mempunyai keazaman yang tinggi untuk berjaya']
pprint(model.generate(isi_penting))
###Output
('Sejak akhir-akhir ini, pelbagai isu yang hangat diperkatakan oleh masyarakat '
'yang berkait dengan sambutan Hari Raya Aidilfitri. Pelbagai faktor yang '
'melatari perkara yang berlaku dalam kalangan masyarakat hari ini, khususnya '
'bagi golongan muda. Dikatakan bahawa kehidupan kita hari ini semakin '
'mencabar terutamanya kesibukan dalam menjalankan tugas dan mengajar. '
'Justeru, tidak dinafikan apabila semakin jauh kita, semakin ramai yang '
'memilih untuk lalai atau tidak mematuhi arahan yang telah ditetapkan. '
'Mendepani cabaran ini, golongan muda terpaksa menempuhi segala cabaran untuk '
'menjadi lebih baik dan lebih baik. Minda yang perlu diterapkan, terutama di '
'dalam kelas untuk mempelajari ilmu pengetahuan. Jika tidak, kita akan '
'menjadi lebih mudah untuk menilai dan menyelesaikan masalah yang dihadapi. '
'Oleh itu, kita perlu berfikir untuk menetapkan langkah yang patut atau perlu '
'dilaksanakan bagi mengatasi masalah yang berlaku. Selain itu, guru-guru juga '
'harus mendidik peserta-peserta dalam kelas supaya dapat menjalankan kegiatan '
'dengan lebih serius dan berkesan. Guru-Guru juga seharusnya berusaha untuk '
'meningkatkan kemahiran mereka dalam kalangan pelajar. Seperti peribahasa '
'Melayu, melentur buluh biarlah dari rebungnya. Setiap insan mempunyai '
'peranan masing-masing dan tanggungjawab yang masing-masing. Kesempatan untuk '
'memberikan nasihat dan teguran adalah lebih penting dan membantu secara '
'halus dan bijaksana dalam melakukan sesuatu. Selain itu, guru-guru hendaklah '
'berani untuk melakukan sesuatu perkara yang memberi manfaat kepada para '
'pelajar yang lain. Cara ini adalah dengan melakukan aktiviti-aktiviti yang '
'boleh memberi manfaat kepada para pelajar. Selain itu, guru-guru juga '
'perlulah menjaga disiplin mereka dengan sebaik-baiknya. Dalam menyampaikan '
'nasihat dan teguran secara berterusan, pelajar juga boleh melakukan perkara '
'yang boleh mendatangkan mudarat. Anak-Anak awal pelajar dan rakan-rakan '
'mereka juga boleh melakukan tugas yang bermanfaat. Keadaan ini membolehkan '
'mereka untuk lebih berusaha dan memberikan nasihat yang berguna kepada kaum '
'lain. Oleh itu, mereka perlu sentiasa mengingati dan mendidik pelajar dengan '
'nilai-nilai yang murni. Setiap orang mempunyai impian yang tinggi untuk '
'berjaya. Sama ada kita berjaya atau tidak, pencapaian yang diperoleh setelah '
'tamat belajar akan memberikan kita nilai yang baik dan perlu menjadi contoh '
'yang baik untuk negara kita.')
###Markdown
Load GPT2Malaya provides a pretrained GPT2 model specific to Malay; we call it GPT2-Bahasa. This interface does not allow us to do custom training.GPT2-Bahasa was pretrained on ~0.9 billion words, and below is the list of datasets we trained on,1. [dumping wikipedia (222MB)](https://github.com/huseinzol05/Malaya-Datasetwikipedia-1).2. [local news (257MB)](https://github.com/huseinzol05/Malaya-Datasetpublic-news).3. [local parliament text (45MB)](https://github.com/huseinzol05/Malaya-Datasetparliament).4. [IIUM Confession (74MB)](https://github.com/huseinzol05/Malaya-Datasetiium-confession).5. [Wattpad (74MB)](https://github.com/huseinzol05/Malaya-Datasetwattpad).6. [Academia PDF (42MB)](https://github.com/huseinzol05/Malaya-Datasetacademia-pdf).7. [Common-Crawl (3GB)](https://github.com/huseinzol05/malaya-datasetcommon-crawl). If you want to download the pretrained GPT2-Bahasa model and use it for custom transfer-learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/gpt2, with some notebooks to help you get started.**Here we hope these models are not used to finetune for spreading fake news**. Or you can simply use [Transformers](https://huggingface.co/models?filter=malay&search=gpt2) to try the GPT2-Bahasa models from Malaya; simply check the available models here, https://huggingface.co/models?filter=malay&search=gpt2
###Code
from IPython.core.display import Image, display
display(Image('gpt2.png', width=500))
###Output
_____no_output_____
###Markdown
load modelGPT2-Bahasa is only available in `117M` and `345M` sizes.1. `117M` is around 442MB.2. `345M` is around 1.2GB.```pythondef gpt2( model: str = '345M', generate_length: int = 256, temperature: float = 1.0, top_k: int = 40, **kwargs): """ Load GPT2 model to generate a string given a prefix string. Parameters ---------- model : str, optional (default='345M') Model architecture supported. Allowed values: * ``'117M'`` - GPT2 117M parameters. * ``'345M'`` - GPT2 345M parameters. generate_length : int, optional (default=256) length of sentence to generate. temperature : float, optional (default=1.0) temperature value, value should between 0 and 1. top_k : int, optional (default=40) top-k in nucleus sampling selection. Returns ------- result: malaya.transformers.gpt2.Model class """```
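The `temperature` and `top_k` parameters above control how the next token is sampled: temperature rescales the logits (lower values make the distribution more conservative), and top-k restricts sampling to the k most likely tokens. A library-independent sketch of that combination (illustrative only, not Malaya's internal code; `model_logits` is a hypothetical logits vector):
```python
import numpy as np

def temperature_top_k_probs(logits, temperature=1.0, top_k=40):
    # Rescale logits by temperature, keep only the top-k, then softmax.
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    if top_k < len(scaled):
        cutoff = np.sort(scaled)[-top_k]
        scaled = np.where(scaled < cutoff, -np.inf, scaled)
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

# probs = temperature_top_k_probs(model_logits, temperature=0.9, top_k=40)
# next_token = np.random.choice(len(probs), p=probs)
```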
###Code
model = malaya.generator.gpt2(model = '117M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
###Output
_____no_output_____
###Markdown
generate`model.generate` accepts a string.```pythondef generate(self, string: str): """ generate a text given an initial string. Parameters ---------- string : str Returns ------- result: str """```
###Code
print(model.generate(string))
model = malaya.generator.gpt2(model = '345M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
print(model.generate(string))
###Output
ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, omputeh-uteh cerita lama-lama, seram tak boleh bayang
Sebelum kejadian, dalam 2 jam aku buat panggilan polis , lepas tu kira la sendiri nak ke lokasi.
Tengok cerita lama..
Sekarang ni, apa yang aku lalui, kita yang jaga diri, kita yang jaga kesihatan dan juga kita yang jaga minda dalam hidup.
Maka, inilah jalan penyelesaian terbaiknya.
Jangan lupakan manusia
Orang yang paling ditakuti untuk berjaya dalam hidup, tidak akan jumpa yang tersayang!
Jangan rosakkan masa depannya, ingatlah apa yang kita nak buat, walaupun pahit untuk ditelan.
Jangan lupakan orang lain - masa depan mereka.
Jangan lupakan orang - masa itulah kita yang lebih dicintai.
Jangan lupakan orang - orang yang kita sayang, mereka bukan orang yang tersayang!
Jangan lupakan orang - orang yang kita cinta, mereka cinta pada kita.
Jangan lupakan diri - diri kita - yang kita punya, yang kita tinggal adalah masa lalu kita.
Jangan lupakan orang lain - orang yang kita cinta, lebih indah dari masa lalu kita.
Jangan lupakan semua orang - orang yang tinggal ataupun hidup.
Jangan cuba lupakan diri kita - kerja keras dan selalu ada masa depan kita.
Jangan pernah putus rasa - kecewa kerana kita telah banyak berubah.
Jangan pernah putus putus asa kerana kita
###Markdown
Load TransformerWe can also generate text like GPT2 using Transformer-Bahasa. Right now only BERT, ALBERT and ELECTRA are supported.```pythondef transformer( string: str, model, generate_length: int = 30, leed_out_len: int = 1, temperature: float = 1.0, top_k: int = 100, burnin: int = 15, batch_size: int = 5,): """ Use pretrained transformer models to generate a string given a prefix string. https://github.com/nyu-dl/bert-gen, https://arxiv.org/abs/1902.04094 Parameters ---------- string: str model: object transformer interface object. Right now only supported BERT, ALBERT. generate_length : int, optional (default=256) length of sentence to generate. leed_out_len : int, optional (default=1) length of extra masks for each iteration. temperature: float, optional (default=1.0) logits * temperature. top_k: int, optional (default=100) k for top-k sampling. burnin: int, optional (default=15) for the first burnin steps, sample from the entire next word distribution, instead of top_k. batch_size: int, optional (default=5) generate sentences size of batch_size. Returns ------- result: List[str] """```
###Code
electra = malaya.transformer.load(model = 'electra')
malaya.generator.transformer(string, electra)
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/transformers/babble.py:30: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
###Markdown
ngramsYou can generate ngrams pretty easily using this interface,```pythondef ngrams( sequence, n: int, pad_left = False, pad_right = False, left_pad_symbol = None, right_pad_symbol = None,): """ generate ngrams. Parameters ---------- sequence : List[str] list of tokenize words. n : int ngram size Returns ------- ngram: list """```
###Code
string = 'saya suka makan ayam'
list(malaya.generator.ngrams(string.split(), n = 2))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START'))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START', right_pad_symbol = 'END'))
###Output
_____no_output_____
###Markdown
Generator This tutorial is available as an IPython notebook at [Malaya/example/generator](https://github.com/huseinzol05/Malaya/tree/master/example/generator).
###Code
%%time
import malaya
from pprint import pprint
###Output
CPU times: user 4.74 s, sys: 901 ms, total: 5.64 s
Wall time: 8.03 s
###Markdown
List available Transformer This module is trained heavily on news structure.
###Code
malaya.generator.available_transformer()
###Output
_____no_output_____
###Markdown
Load TransformerTransformer Generator in Malaya is quite unique: most of the text generative models we found on the internet, like GPT2 or Markov, simply continue a prefix input from the user, but not the Transformer Generator. We want to generate an article or karangan (essay), like in high school, when the user gives the 'isi penting' (key points).```pythondef transformer(model: str = 't5', quantized: bool = False, **kwargs): """ Load Transformer model to generate a string given a isu penting. Parameters ---------- model : str, optional (default='base') Model architecture supported. Allowed values: * ``'t5'`` - T5 BASE parameters. * ``'small-t5'`` - T5 SMALL parameters. quantized : bool, optional (default=False) if True, will load 8-bit quantized model. Quantized model not necessary faster, totally depends on the machine. Returns ------- result: malaya.model.t5.Generator class """```
###Code
model = malaya.generator.transformer(model = 't5', quantized = True)
isi_penting = ['Dr M perlu dikekalkan sebagai perdana menteri',
'Muhyiddin perlulah menolong Dr M',
'rakyat perlu menolong Muhyiddin']
###Output
_____no_output_____
###Markdown
I just want to test the model given this isi penting, because we all know, Dr M and Muhyiddin are not supporting each other in the real world. generate```pythondef greedy_decoder(self, strings: List[str]): """ generate a long text given a isi penting. Decoder is greedy decoder with beam width size 1, alpha 0.5 . Parameters ---------- strings: List[str] Returns ------- result: str """```
###Code
pprint(model.greedy_decoder(isi_penting))
###Output
(': Presiden Bersatu, Tan Sri Muhyiddin Yassin perlu mengekalkan Tun Dr '
'Mahathir Mohamad sebagai perdana menteri berbanding Datuk Seri Anwar Ibrahim '
'yang hanya minta bantuan untuk menyelesaikan kemelut kedudukan '
'negara.Muhyiddin berkata, ini kerana semua pihak tahu masalah yang dihadapi '
'oleh Perdana Menteri adalah di luar bidang kuasa beliau sendiri.Katanya, '
'Muhyiddin perlu membantu beliau kerana beliau percaya rakyat Malaysia tahu '
'apa yang berlaku di luar bidang kuasa beliau."Apa yang berlaku di luar '
'bidang kuasa Dr Mahathir... semua tahu bahawa ini berlaku di bawah '
'kepimpinan Anwar."Muhyiddin dan seluruh rakyat yang tahu apa yang berlaku di '
'Johor."Ini kerana di Johor ini, majoriti menteri-menteri dalam Pakatan '
'Harapan banyak sangat ketua-ketua parti."Jadi Muhyiddin perlu bantu Dr '
'Mahathir sebab rakyat tahu apa yang berlaku di Johor Bahru," katanya dalam '
'satu kenyataan di sini, pada Jumaat.Dalam pada itu, Muhyiddin berkata, '
'rakyat juga perlu menolong Muhyiddin untuk menyelesaikan masalah yang '
'melanda negara ketika ini.Menurutnya, Muhyiddin perlu menggalas tugas dengan '
'baik dan memastikan keadaan negara berada dalam keadaan baik.')
###Markdown
Pretty good!
###Code
isi_penting = ['Neelofa tetap dengan keputusan untuk berkahwin akhir tahun ini',
'Long Tiger sanggup membantu Neelofa',
'Tiba-tiba Long Tiger bergaduh dengan Husein']
###Output
_____no_output_____
###Markdown
We can also give any isi penting, even one that does not make any sense.
###Code
pprint(model.greedy_decoder(isi_penting))
###Output
('Kuala Lumpur: Pelakon, Neelofa tetap dengan keputusan dibuat untuk berkahwin '
'penutup tahun ini, selepas mengadakan pertemuan dengan Long Tiger. Neelofa '
'atau nama sebenarnya, Mohd Neelofa Ahmad Noor berkata, dia tidak pernah '
'merancang untuk berkahwin, namun menegaskan dirinya lebih mengutamakan masa '
'depan. "Saya seronok bersama keluarga. Kalau kami berkahwin awal tahun ini, '
'ia mengambil masa yang lama. Itu impian saya tetapi biarlah, selepas setahun '
'saya berehat, saya akan mula bekerja. "Jadi, apabila sering sesi pertemuan '
'dengan Long Tiger, saya kena tegas mengenai perkara ini. Bukan soal nak '
'memalukan diri sendiri tetapi siapa yang boleh menghentam saya," katanya '
'kepada Bh Online. Dalam sesi pertemuan itu, Neelofa yang juga pengacara '
'acara Top 5, bergaduh dengan Husein, dalam pergaduhan yang berlaku di '
'Kompleks Mahkamah Tinggi Syariah di sini, baru-baru ini. Ditanya mengenai '
'hubungannya dengan wanita itu, Neelofa berkata, mereka masih belum '
'menyelesaikan perkara itu dengan baik. "Saya tidak tahu pasal semua ini, '
'tetapi ia akan diselesaikan menerusi cara baik. Tidak kiralah apa yang kami '
'tidak cakap pun. "Pada mulanya kami hanya mahu membebaskan mereka daripada '
'sebarang isu, namun selepas beberapa hari bergaduh, kami akhirnya mengambil '
'keputusan untuk berkahwin dengan Hadiza Aziz. "Jika mereka mahu, kami akan '
'membendung, namun pada masa yang sama, kami tidak mahu bergaduh dengan '
'lelaki yang digelar Long Tiger," katanya.')
###Markdown
How about a karangan like in high school?
###Code
# http://mieadham86.blogspot.com/2016/09/isi-isi-penting-karangan-bahasa-melayu.html
# KEBAIKAN AMALAN BERGOTONG-ROYONG
isi_penting = ['Dapat memupuk semangat kerjasama',
'Dapat mengeratkan hubungan silaturahim.',
'Kebersihan kawasan persekitaran terpelihara.',
'Terhindar daripada wabak penyakit seperti Denggi',
'Mengisi masa lapang',
'Menerapkan nilai-nilai murni dalam kehidupan']
pprint(model.greedy_decoder(isi_penting))
# http://mieadham86.blogspot.com/2016/09/isi-isi-penting-karangan-bahasa-melayu.html
# CARA MENJADI MURID CEMERLANG
isi_penting = ['Rajin berusaha – tidak mudah putus asa',
'Menghormati orang yang lebih tua – mendapat keberkatan',
'Melibatkan diri secara aktif dalam bidang kokurikulum',
'Memberi tumpuan ketika guru mengajar.',
'Berdisiplin – menepati jadual yang disediakan.',
'Bercita-cita tinggi – mempunyai keazaman yang tinggi untuk berjaya']
pprint(model.greedy_decoder(isi_penting))
###Output
('Sejak akhir-akhir ini, pelbagai isu yang hangat diperkatakan oleh masyarakat '
'yang berkait dengan sambutan Hari Raya Aidilfitri. Pelbagai faktor yang '
'melatari perkara yang berlaku dalam kalangan masyarakat hari ini, khususnya '
'bagi golongan muda. Dikatakan bahawa kehidupan kita hari ini semakin '
'mencabar terutamanya kesibukan dalam menjalankan tugas dan mengajar. '
'Justeru, tidak dinafikan apabila semakin jauh kita, semakin ramai yang '
'memilih untuk lalai atau tidak mematuhi arahan yang telah ditetapkan. '
'Mendepani cabaran ini, golongan muda terpaksa menempuhi segala cabaran untuk '
'menjadi lebih baik dan lebih baik. Minda yang perlu diterapkan, terutama di '
'dalam kelas untuk mempelajari ilmu pengetahuan. Jika tidak, kita akan '
'menjadi lebih mudah untuk menilai dan menyelesaikan masalah yang dihadapi. '
'Oleh itu, kita perlu berfikir untuk menetapkan langkah yang patut atau perlu '
'dilaksanakan bagi mengatasi masalah yang berlaku. Selain itu, guru-guru juga '
'harus mendidik peserta-peserta dalam kelas supaya dapat menjalankan kegiatan '
'dengan lebih serius dan berkesan. Guru-Guru juga seharusnya berusaha untuk '
'meningkatkan kemahiran mereka dalam kalangan pelajar. Seperti peribahasa '
'Melayu, melentur buluh biarlah dari rebungnya. Setiap insan mempunyai '
'peranan masing-masing dan tanggungjawab yang masing-masing. Kesempatan untuk '
'memberikan nasihat dan teguran adalah lebih penting dan membantu secara '
'halus dan bijaksana dalam melakukan sesuatu. Selain itu, guru-guru hendaklah '
'berani untuk melakukan sesuatu perkara yang memberi manfaat kepada para '
'pelajar yang lain. Cara ini adalah dengan melakukan aktiviti-aktiviti yang '
'boleh memberi manfaat kepada para pelajar. Selain itu, guru-guru juga '
'perlulah menjaga disiplin mereka dengan sebaik-baiknya. Dalam menyampaikan '
'nasihat dan teguran secara berterusan, pelajar juga boleh melakukan perkara '
'yang boleh mendatangkan mudarat. Anak-Anak awal pelajar dan rakan-rakan '
'mereka juga boleh melakukan tugas yang bermanfaat. Keadaan ini membolehkan '
'mereka untuk lebih berusaha dan memberikan nasihat yang berguna kepada kaum '
'lain. Oleh itu, mereka perlu sentiasa mengingati dan mendidik pelajar dengan '
'nilai-nilai yang murni. Setiap orang mempunyai impian yang tinggi untuk '
'berjaya. Sama ada kita berjaya atau tidak, pencapaian yang diperoleh setelah '
'tamat belajar akan memberikan kita nilai yang baik dan perlu menjadi contoh '
'yang baik untuk negara kita.')
###Markdown
Load GPT2Malaya provides a pretrained GPT2 model specific to Malay; we call it GPT2-Bahasa. This interface does not allow us to do custom training.GPT2-Bahasa was pretrained on ~0.9 billion words, and below is the list of datasets we trained on,1. [dumping wikipedia (222MB)](https://github.com/huseinzol05/Malaya-Datasetwikipedia-1).2. [local news (257MB)](https://github.com/huseinzol05/Malaya-Datasetpublic-news).3. [local parliament text (45MB)](https://github.com/huseinzol05/Malaya-Datasetparliament).4. [IIUM Confession (74MB)](https://github.com/huseinzol05/Malaya-Datasetiium-confession).5. [Wattpad (74MB)](https://github.com/huseinzol05/Malaya-Datasetwattpad).6. [Academia PDF (42MB)](https://github.com/huseinzol05/Malaya-Datasetacademia-pdf).7. [Common-Crawl (3GB)](https://github.com/huseinzol05/malaya-datasetcommon-crawl). If you want to download the pretrained GPT2-Bahasa model and use it for custom transfer-learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/gpt2, with some notebooks to help you get started.**Here we hope these models are not used to finetune for spreading fake news**. load modelGPT2-Bahasa is only available in `117M` and `345M` sizes.1. `117M` is around 442MB.2. `345M` is around 1.2GB.```pythondef gpt2( model: str = '345M', generate_length: int = 256, temperature: float = 1.0, top_k: int = 40, **kwargs): """ Load GPT2 model to generate a string given a prefix string. Parameters ---------- model : str, optional (default='345M') Model architecture supported. Allowed values: * ``'117M'`` - GPT2 117M parameters. * ``'345M'`` - GPT2 345M parameters. generate_length : int, optional (default=256) length of sentence to generate. temperature : float, optional (default=1.0) temperature value, value should between 0 and 1. top_k : int, optional (default=40) top-k in nucleus sampling selection. Returns ------- result: malaya.transformers.gpt2.Model class """```
###Code
model = malaya.generator.gpt2(model = '117M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
###Output
_____no_output_____
###Markdown
generate`model.generate` accepts a string.```pythondef generate(self, string: str): """ generate a text given an initial string. Parameters ---------- string : str Returns ------- result: str """```
###Code
print(model.generate(string))
model = malaya.generator.gpt2(model = '345M')
string = 'ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, '
print(model.generate(string))
###Output
ceritanya sebegini, aku bangun pagi baca surat khabar berita harian, tetiba aku nampak cerita seram, omputeh-uteh cerita lama-lama, seram tak boleh bayang
Sebelum kejadian, dalam 2 jam aku buat panggilan polis , lepas tu kira la sendiri nak ke lokasi.
Tengok cerita lama..
Sekarang ni, apa yang aku lalui, kita yang jaga diri, kita yang jaga kesihatan dan juga kita yang jaga minda dalam hidup.
Maka, inilah jalan penyelesaian terbaiknya.
Jangan lupakan manusia
Orang yang paling ditakuti untuk berjaya dalam hidup, tidak akan jumpa yang tersayang!
Jangan rosakkan masa depannya, ingatlah apa yang kita nak buat, walaupun pahit untuk ditelan.
Jangan lupakan orang lain - masa depan mereka.
Jangan lupakan orang - masa itulah kita yang lebih dicintai.
Jangan lupakan orang - orang yang kita sayang, mereka bukan orang yang tersayang!
Jangan lupakan orang - orang yang kita cinta, mereka cinta pada kita.
Jangan lupakan diri - diri kita - yang kita punya, yang kita tinggal adalah masa lalu kita.
Jangan lupakan orang lain - orang yang kita cinta, lebih indah dari masa lalu kita.
Jangan lupakan semua orang - orang yang tinggal ataupun hidup.
Jangan cuba lupakan diri kita - kerja keras dan selalu ada masa depan kita.
Jangan pernah putus rasa - kecewa kerana kita telah banyak berubah.
Jangan pernah putus putus asa kerana kita
###Markdown
Using Babble methodWe can also generate text like GPT2 using Transformer-Bahasa. Right now only BERT, ALBERT and ELECTRA are supported.```pythondef babble( string: str, model, generate_length: int = 30, leed_out_len: int = 1, temperature: float = 1.0, top_k: int = 100, burnin: int = 15, batch_size: int = 5,): """ Use pretrained transformer models to generate a string given a prefix string. https://github.com/nyu-dl/bert-gen, https://arxiv.org/abs/1902.04094 Parameters ---------- string: str model: object transformer interface object. Right now only supported BERT, ALBERT. generate_length : int, optional (default=256) length of sentence to generate. leed_out_len : int, optional (default=1) length of extra masks for each iteration. temperature: float, optional (default=1.0) logits * temperature. top_k: int, optional (default=100) k for top-k sampling. burnin: int, optional (default=15) for the first burnin steps, sample from the entire next word distribution, instead of top_k. batch_size: int, optional (default=5) generate sentences size of batch_size. Returns ------- result: List[str] """``` Make sure you already installed `tensorflow-probability`,```bashpip3 install tensorflow-probability==0.7.0```
###Code
# !pip3 install tensorflow-probability==0.7.0
electra = malaya.transformer.load(model = 'electra')
malaya.generator.babble(string, electra)
###Output
_____no_output_____
###Markdown
ngramsYou can generate ngrams pretty easily using this interface,```pythondef ngrams( sequence, n: int, pad_left = False, pad_right = False, left_pad_symbol = None, right_pad_symbol = None,): """ generate ngrams. Parameters ---------- sequence : List[str] list of tokenize words. n : int ngram size Returns ------- ngram: list """```
###Code
string = 'saya suka makan ayam'
list(malaya.generator.ngrams(string.split(), n = 2))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START'))
list(malaya.generator.ngrams(string.split(), n = 2, pad_left = True, pad_right = True,
left_pad_symbol = 'START', right_pad_symbol = 'END'))
###Output
_____no_output_____ |
Array Sequence Interview Questions - Solved/String Compression .ipynb | ###Markdown
String Compression ProblemGiven a string in the form 'AAAABBBBCCCCCDDEEEE' compress it to become 'A4B4C5D2E4'. For this problem, you can falsely "compress" strings of single or double letters. For instance, it is okay for 'AAB' to return 'A2B1' even though this technically takes more space. The function should also be case sensitive, so that a string 'AAAaaa' returns 'A3a3'. SolutionFill out your solution below:
###Code
def compress(s):
    # Run-length encode s, e.g. 'AAAABBBBCCCCCDDEEEE' -> 'A4B4C5D2E4'
    result = ''
    count = 0
    if len(s) == 0:
        return result
    elif len(s) == 1:
        return s + '1'
    else:
        pointer = s[0]  # character of the current run
        for i in range(len(s)):
            if s[i] == pointer:
                count += 1
            else:
                # close out the previous run before starting a new one
                result += pointer + str(count)
                pointer = s[i]
                count = 1
        # close out the final run; using pointer (not s[i-1]) handles a
        # trailing single-character run such as 'AAB' -> 'A2B1' correctly
        result += pointer + str(count)
        return result
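# For comparison, itertools.groupby gives the same run-length encoding in a
# couple of lines; shown here only as a cross-check of compress() above.
from itertools import groupby

def compress_groupby(s):
    return ''.join(ch + str(len(list(run))) for ch, run in groupby(s))

# compress_groupby('AAAAABBBBCCCC')  # -> 'A5B4C4'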
compress('AAAAABBBBCCCC')
###Output
_____no_output_____
###Markdown
Test Your Solution
###Code
"""
RUN THIS CELL TO TEST YOUR SOLUTION
"""
from nose.tools import assert_equal
class TestCompress(object):
def test(self, sol):
assert_equal(sol(''), '')
assert_equal(sol('AABBCC'), 'A2B2C2')
assert_equal(sol('AAABCCDDDDD'), 'A3B1C2D5')
print ('ALL TEST CASES PASSED')
# Run Tests
t = TestCompress()
t.test(compress)
###Output
ALL TEST CASES PASSED
|
paper-materials/M3-river-classification.ipynb | ###Markdown
Visualize river classification system
###Code
import matplotlib.pyplot as plt
import numpy as np
import netCDF4 as nc
import pickle
from matplotlib import colors
%matplotlib inline
###Output
_____no_output_____
###Markdown
Parameters:
###Code
# domain dimensions:
imin, imax = 1479, 2179
jmin, jmax = 159, 799
# runoff period:
rf_year = 2015
rf_month = 8 # september
# colours:
c_continent = '#ce9169'
c_glacier = '#36ab92'
c_other = 'w'
###Output
_____no_output_____
###Markdown
Load files: River runoff forcing
###Code
# Load river runoff used in ANHA12 from Paul Myers' group (http://knossos.eas.ualberta.ca/anha/anhatable.php)
rf_file = nc.Dataset(f'/ocean/brogalla/GEOTRACES/data/runoff/'+\
f'ANHA12_runoff_monthly_combined_Dai_Trenberth_Bamber_y{rf_year}.nc','r')
lon_rf = np.array(rf_file.variables['nav_lon'])
lat_rf = np.array(rf_file.variables['nav_lat'])
rf = np.array(rf_file.variables['runoff'][rf_month])
# Place NaNs where there is no runoff
# (build the mask first: once rf holds NaNs, rf == 0 no longer matches those cells)
no_runoff = (rf == 0)
rf[no_runoff]     = np.nan
lon_rf[no_runoff] = np.nan
lat_rf[no_runoff] = np.nan
###Output
_____no_output_____
###Markdown
River classification file1. Glaciers2. Continental3. Other
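Once `river_class` is loaded in the next cell, it can be worth checking which class codes actually occur in the sub-domain, since the colormap used later assigns one colour per class value. A minimal sketch, assuming `river_class` and the `imin/imax`, `jmin/jmax` bounds defined in this notebook:
```python
# Count how many grid cells fall into each river class within the sub-domain.
sub = river_class[imin:imax, jmin:jmax].astype(float)
values, counts = np.unique(sub[np.isfinite(sub)], return_counts=True)
for v, c in zip(values, counts):
    print(f'class {v:.0f}: {c} grid cells')
```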
###Code
# river classification produced in /forcing/river---create-river-classification
ncd = nc.Dataset('/ocean/brogalla/GEOTRACES/data/river_class-202005.nc')
river_class = np.array(ncd.variables['rclass'])
###Output
_____no_output_____
###Markdown
Meshmask
###Code
# ANHA12 grid
mesh = nc.Dataset('/ocean/brogalla/GEOTRACES/data/ANHA12/ANHA12_mesh1.nc')
lon = np.array(mesh.variables['nav_lon'])
lat = np.array(mesh.variables['nav_lat'])
###Output
_____no_output_____
###Markdown
Map with bathymetry background
###Code
fig, ax1, proj1 = pickle.load(open('/ocean/brogalla/GEOTRACES/pickles/surface-land-map.pickle','rb'))
# Sub-domain map: ---------------------------------------------------------------------------
x_sub, y_sub = proj1(lon, lat)
x_rf, y_rf = proj1(lon_rf, lat_rf)
proj1.plot(x_sub[imin:imax,jmax] , y_sub[imin:imax,jmax], 'w-', lw=1.0, zorder=2)
proj1.plot(x_sub[imin:imax,jmin] , y_sub[imin:imax,jmin], 'w-', lw=1.0, zorder=2)
proj1.plot(x_sub[imin,jmin:jmax] , y_sub[imin,jmin:jmax], 'w-', lw=1.0, zorder=2)
proj1.plot(x_sub[imax,jmin:jmax] , y_sub[imax,jmin:jmax], 'w-', lw=1.0, zorder=2)
colormap = colors.ListedColormap([c_glacier, c_continent, c_other, c_other])
proj1.scatter(x_rf[imin:imax,jmin:jmax], y_rf[imin:imax,jmin:jmax], c=river_class[imin:imax,jmin:jmax],\
s=rf[imin:imax,jmin:jmax]*1e4, alpha=0.8, cmap=colormap, edgecolor='k', linewidths=0.4, zorder=4)
for a in ['0.001', '0.005', '0.010']:
proj1.scatter([], [], c=c_continent, alpha=1, s=float(a)*1e4, label=a + ' kg/m$^2$/s', \
edgecolors='k', linewidths=0.4, zorder=4)
ax1.legend(scatterpoints=1, frameon=False, labelspacing=0.3, fontsize=6, loc=(0.7,0.8))
fig.savefig('/ocean/brogalla/GEOTRACES/figures/paper1-202110/M3-river-classification.png', bbox_inches='tight', dpi=300)
fig.savefig('/ocean/brogalla/GEOTRACES/figures/paper1-202110/M3-river-classification.svg', bbox_inches='tight', dpi=300, \
format='svg')
###Output
_____no_output_____ |
Tutorial-ETK_thorn-FishboneMoncriefID.ipynb | ###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
from outputC import lhrh,outputC # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
import FishboneMoncriefID.FishboneMoncriefID as fmid # Stores closed-form SymPy expressions for F-M initial data.
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1d: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module. This
# sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
# Step 1f: Create directories for the thorn if they don't exist.
Ccodesdir = "FishboneMoncriefID"
cmd.mkdir(Ccodesdir)
cmd.mkdir(os.path.join(Ccodesdir,"src"))
# Step 1g: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"src","KerrSchild.h"), "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_velocities.h"), "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_rho_initial.h"), "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_hm1.h"), "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
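###Markdown
To make the pattern above concrete, here is a minimal, self-contained sketch (not part of the thorn) of how `outputC` converts a SymPy expression into a C assignment string. The expression and the output variable name `toy_radius` are invented purely for illustration:
```python
import sympy as sp
from outputC import outputC  # same NRPy+ routine used for hm1 above

xx, yy, zz = sp.symbols('xcoord ycoord zcoord', real=True)
toy_expr = sp.sqrt(xx**2 + yy**2 + zz**2)   # a toy Kerr-Schild-like radius
print(outputC(toy_expr, "toy_radius", filename="returnstring"))
```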
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$Here we construct `InitialData.c`, which contains C driver functions that pull in the necessary NRPy+ C-code kernels.First we set up driver routines to specify the Kerr-Schild metric and the Fishbone-Moncrief disk velocity at a given gridpoint.
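The `velx`/`vely`/`velz` aliases in the C code below are just fixed offsets into the single flattened `vel` buffer, one contiguous block per velocity component. A small Python sketch of the same flattened-layout idea, purely illustrative and not ETK code:
```python
import numpy as np

# Stand-ins for cctk_lsh[0..2]; the values are arbitrary.
nx, ny, nz = 4, 3, 2
vel = np.zeros(3 * nx * ny * nz)   # one flat buffer holding vx, vy, vz back to back

# Component c lives in the slice [c*nx*ny*nz : (c+1)*nx*ny*nz],
# which is what the velx/vely/velz #define aliases express in C.
velx = vel[0*nx*ny*nz : 1*nx*ny*nz]
vely = vel[1*nx*ny*nz : 2*nx*ny*nz]
velz = vel[2*nx*ny*nz : 3*nx*ny*nz]
print(velx.size, vely.size, velz.size)   # each component has nx*ny*nz entries
```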
###Code
%%writefile $Ccodesdir/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
###Output
Writing FishboneMoncriefID/src/InitialData.c
###Markdown
Next we set up the driver function for setting all metric and hydrodynamical fields $\rho,P,\epsilon,v^i$.**Important**: Suppose the Fishbone-Moncrief initial data yield a density $\rho(r,\theta)$ (which is valid for all Fishbone-Moncrief disks centered at the origin, $r=0$, as F-M disks are axisymmetric). Then the disk will have pressure$$P = \kappa \rho^\Gamma.$$Since the disk is not self-gravitating, we are allowed to rescale the maximum density in the disk to be one in code units; i.e., $\rho_{\rm max}=1$. This may be incompatible with the initial choice of polytropic constant $\kappa$, as rescaling the density results in a rescaling of pressure $P$, as follows.When we rescale $\rho$ so that the maximum density in the disk is one, we make the following transformation:$$\rho \to \rho' = \frac{\rho}{\rho_{\rm max}}.$$Since pressure has units of $\rho c^2$, and we use $G=c=1$ units, pressure must therefore be rescaled by the same factor:\begin{align}P \to P' &= \frac{P}{\rho_{\rm max}} \\&= \frac{\kappa \rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{\rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{(\rho' \rho_{\rm max})^\Gamma}{\rho_{\rm max}} \\&= \kappa \rho_{\rm max}^{\Gamma-1} (\rho')^\Gamma \\&= \kappa' (\rho')^\Gamma\end{align}Thus the polytropic equation of state is still valid, but only if $$\kappa' = \kappa \rho_{\rm max}^{\Gamma-1} = \frac{P_{\rm max}}{\rho_{\rm max}}.$$As e.g., `IllinoisGRMHD` requires that the initial $P'$ be given as a polytropic equation of state, with $P'_{\rm cold} = \kappa' (\rho')^\Gamma$, $\kappa'$ must be input into the `FishboneMoncriefID` (and `IllinoisGRMHD`) thorns instead of $\kappa$. If this does not happen, the code will error out, providing the correct value for $\kappa'$ that must be set in the parameter file.
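As a quick numerical illustration of this rescaling (a sketch with made-up numbers, not values produced by the thorn), one can check that $P' = \kappa' (\rho')^\Gamma$ holds once $\kappa' = \kappa \rho_{\rm max}^{\Gamma-1}$:
```python
Gamma   = 4.0/3.0
kappa   = 1.0e-3   # unrescaled polytropic constant (illustrative)
rho_max = 2.5      # pretend pre-rescaling maximum density (illustrative)

kappa_new = kappa * rho_max**(Gamma - 1.0)   # = P_max / rho_max

rho = 1.7                    # some density inside the disk
P   = kappa * rho**Gamma     # original pressure
# Rescaled quantities: rho' = rho/rho_max, P' = P/rho_max
assert abs(P/rho_max - kappa_new * (rho/rho_max)**Gamma) < 1e-12
print("kappa' =", kappa_new)
```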
###Code
%%writefile -a $Ccodesdir/src/InitialData.c
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum pressure and density
CCTK_REAL P_max, rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
P_max = kappa * pow(rho_max, gamma);
}
// We enforce units such that rho_max = 1.0; if these units are not obeyed, then
// we error out. If we did not error out, then the value of kappa used in all
// EOS routines would need to be changed, and generally these appear as
// read-only parameters.
if(fabs(P_max/rho_max - kappa) > 1e-8) {
printf("Error: To ensure that P = kappa*rho^Gamma, where rho_max = 1.0,\n");
printf(" you must set (in your parfile) the polytropic constant kappa = P_max/rho_max = %.15e\n\n",P_max/rho_max);
printf(" Needed values for kappa, for common values of Gamma:\n");
printf(" For Gamma =4/3, use kappa=K_initial=K_poly = 4.249572342020724e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma =5/3, use kappa=K_initial=K_poly = 6.799315747233158e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma = 2, use kappa=K_initial=K_poly = 8.499144684041449e-03 to ensure rho_max = 1.0\n");
exit(1);
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Appending to FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl}`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile $Ccodesdir/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Writing FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile $Ccodesdir/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Writing FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile $Ccodesdir/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::vel(Everywhere) # Note that vel is a vector gridfunction.
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Writing FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile $Ccodesdir/src/make.code.defn
SRCS = InitialData.c
###Output
Writing FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-FishboneMoncriefID")
###Output
Created Tutorial-ETK_thorn-FishboneMoncriefID.tex, and compiled LaTeX file
to PDF file Tutorial-ETK_thorn-FishboneMoncriefID.pdf
###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module.
import FishboneMoncriefID.FishboneMoncriefID as fmid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1e: Set up the Fishbone-Moncrief initial data. This sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
#KerrSchild_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# KerrSchild_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_velocities_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_rho_initial_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time"))
# Step 1f: Create directories for the thorn if they don't exist.
!mkdir FishboneMoncriefID 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir FishboneMoncriefID/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1g: Write the C code kernel to file.
with open("FishboneMoncriefID/src/KerrSchild.h", "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_velocities.h", "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_rho_initial.h", "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open("FishboneMoncriefID/src/FMdisk_GRHD_hm1.h", "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile FishboneMoncriefID/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Writing FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl}`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile FishboneMoncriefID/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Writing FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile FishboneMoncriefID/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Writing FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile FishboneMoncriefID/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::velx(Everywhere)
WRITES: hydrobase::vely(Everywhere)
WRITES: hydrobase::velz(Everywhere)
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Writing FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile FishboneMoncriefID/src/make.code.defn
SRCS = InitialData.c
###Output
Writing FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
import FishboneMoncriefID.FishboneMoncriefID as fmid # Stores closed-form SymPy expressions for F-M initial data.
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1d: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module. This
# sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
# Step 1f: Create directories for the thorn if they don't exist.
Ccodesdir = "FishboneMoncriefID"
cmd.mkdir(Ccodesdir)
cmd.mkdir(os.path.join(Ccodesdir,"src"))
# Step 1g: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"src","KerrSchild.h"), "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_velocities.h"), "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_rho_initial.h"), "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_hm1.h"), "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile $Ccodesdir/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Overwriting FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those thorns.
###Code
%%writefile $Ccodesdir/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Overwriting FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile $Ccodesdir/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Overwriting FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile $Ccodesdir/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::vel(Everywhere) # Note that vel is a vector gridfunction.
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Overwriting FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile $Ccodesdir/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
from outputC import lhrh,outputC # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
import FishboneMoncriefID.FishboneMoncriefID as fmid # Stores closed-form SymPy expressions for F-M initial data.
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1d: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module. This
# sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
# Step 1f: Create directories for the thorn if they don't exist.
Ccodesdir = "FishboneMoncriefID"
cmd.mkdir(Ccodesdir)
cmd.mkdir(os.path.join(Ccodesdir,"src"))
# Step 1g: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"src","KerrSchild.h"), "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_velocities.h"), "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_rho_initial.h"), "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_hm1.h"), "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$Here we construct `InitialData.c`, which contains C driver functions that pull in the necessary NRPy+ C-code kernels.First we set up driver routines to specify the Kerr-Schild metric and the Fishbone-Moncrief disk velocity at a given gridpoint.
###Code
%%writefile $Ccodesdir/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
###Output
Overwriting FishboneMoncriefID/src/InitialData.c
###Markdown
Next we set up the driver function for setting all metric and hydrodynamical fields $\rho,P,\epsilon,v^i$.**Important**: Suppose the Fishbone-Moncrief initial data yield a density $\rho(r,\theta)$ (which is valid for all Fishbone-Moncrief disks centered at the origin, $r=0$, as F-M disks are axisymmetric). Then the disk will have pressure$$P = \kappa \rho^\Gamma.$$Since the disk is not self-gravitating, we are allowed to rescale the maximum density in the disk to be one in code units; i.e., $\rho_{\rm max}=1$. This may be incompatible with the initial choice of polytropic constant $\kappa$, as rescaling the density results in a rescaling of pressure $P$, as follows.When we rescale $\rho$ so that the maximum density in the disk is one, we make the following transformation:$$\rho \to \rho' = \frac{\rho}{\rho_{\rm max}}.$$Since pressure has units of $\rho c^2$, and we use $G=c=1$ units, pressure must therefore be rescaled by the same factor:\begin{align}P \to P' &= \frac{P}{\rho_{\rm max}} \\&= \frac{\kappa \rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{\rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{(\rho' \rho_{\rm max})^\Gamma}{\rho_{\rm max}} \\&= \kappa \rho_{\rm max}^{\Gamma-1} (\rho')^\Gamma \\&= \kappa' (\rho')^\Gamma\end{align}Thus the polytropic equation of state is still valid, but only if $$\kappa' = \kappa \rho_{\rm max}^{\Gamma-1} = \frac{P_{\rm max}}{\rho_{\rm max}}.$$As e.g., `IllinoisGRMHD` requires that the initial $P'$ be given as a polytropic equation of state, with $P'_{\rm cold} = \kappa' (\rho')^\Gamma$, $\kappa'$ must be input into the `FishboneMoncriefID` (and `IllinoisGRMHD`) thorns instead of $\kappa$. If this does not happen, the code will error out, providing the correct value for $\kappa'$ that must be set in the parameter file.
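To make the requirement concrete, here is a minimal Python sketch (not part of the original thorn) of the same consistency check that the C driver below performs. The value of `hm1_at_rmax` is a placeholder assumption; in the thorn it comes from the generated `FMdisk_GRHD_hm1.h` kernel evaluated at $(x,y,z)=(r_{\rm at\_max\_density},0,0)$.

```python
# Minimal sketch of the kappa consistency check in FishboneMoncrief_ET_GRHD_initial().
# hm1_at_rmax is a made-up placeholder; the thorn computes it from FMdisk_GRHD_hm1.h.
def required_kappa(kappa, gamma, hm1_at_rmax):
    # Same rho(hm1) inversion as in the C kernel:
    rho_max = (hm1_at_rmax * (gamma - 1.0) / (kappa * gamma))**(1.0 / (gamma - 1.0))
    P_max = kappa * rho_max**gamma
    # After rescaling so that rho_max -> 1, the polytropic constant must be
    # kappa' = P_max/rho_max = kappa * rho_max^(gamma-1).
    return P_max / rho_max

kappa, gamma = 1.0e-3, 4.0/3.0                                # defaults from param.ccl
kappa_prime = required_kappa(kappa, gamma, hm1_at_rmax=0.01)  # 0.01 is illustrative only
print("Set kappa = K_initial = K_poly = %.15e in the parfile so that rho_max = 1." % kappa_prime)
```

If the parfile's `kappa` does not already equal this $\kappa'$ (to within the $10^{-8}$ tolerance used below), the C function aborts and prints the value that must be used instead.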
###Code
%%writefile -a $Ccodesdir/src/InitialData.c
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum pressure and density
CCTK_REAL P_max, rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
P_max = kappa * pow(rho_max, gamma);
}
// We enforce units such that rho_max = 1.0; if these units are not obeyed, then
// we error out. If we did not error out, then the value of kappa used in all
// EOS routines would need to be changed, and generally these appear as
// read-only parameters.
if(fabs(P_max/rho_max - kappa) > 1e-8) {
printf("Error: To ensure that P = kappa*rho^Gamma, where rho_max = 1.0,\n");
printf(" you must set (in your parfile) the polytropic constant kappa = P_max/rho_max = %.15e\n\n",P_max/rho_max);
printf(" Needed values for kappa, for common values of Gamma:\n");
printf(" For Gamma =4/3, use kappa=K_initial=K_poly = 4.249572342020724e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma =5/3, use kappa=K_initial=K_poly = 6.799315747233158e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma = 2, use kappa=K_initial=K_poly = 8.499144684041449e-03 to ensure rho_max = 1.0\n");
exit(1);
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Appending to FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those thorns.
###Code
%%writefile $Ccodesdir/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Overwriting FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile $Ccodesdir/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Overwriting FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile $Ccodesdir/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::vel(Everywhere) # Note that vel is a vector gridfunction.
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Overwriting FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile $Ccodesdir/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-FishboneMoncriefID")
###Output
Created Tutorial-ETK_thorn-FishboneMoncriefID.tex, and compiled LaTeX file
to PDF file Tutorial-ETK_thorn-FishboneMoncriefID.pdf
###Markdown
FishboneMoncriefID: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark **While this compiles, it has not been validated against the old version of the code.** NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for Fishbone-Moncrief initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial module, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This module is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set $\text{GridFuncMemAccess}$ to $\text{ETK}$. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module.
import FishboneMoncriefID.FishboneMoncriefID as fmid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1e: Set up the Fishbone-Moncrief initial data. This sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions for the Weyl scalars.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
#KerrSchild_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# KerrSchild_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_velocities_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_rho_initial_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time"))
# Step 1f: Create directories for the thorn if they don't exist.
!mkdir FishboneMoncriefID 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir FishboneMoncriefID/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1g: Write the C code kernel to file.
with open("FishboneMoncriefID/src/KerrSchild.h", "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_velocities.h", "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_rho_initial.h", "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open("FishboneMoncriefID/src/FMdisk_GRHD_hm1.h", "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
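###Markdown
As an optional sanity check (a convenience added here, not part of the original generation step), the cell above should have written four small C-code kernels under `FishboneMoncriefID/src/`. A quick way to confirm they exist and are non-empty:
```python
# Optional check: the four kernels written by Step 1g should exist and be non-empty.
import os
for fname in ["KerrSchild.h", "FMdisk_GRHD_velocities.h",
              "FMdisk_GRHD_rho_initial.h", "FMdisk_GRHD_hm1.h"]:
    path = os.path.join("FishboneMoncriefID", "src", fname)
    print(path, ":", os.path.getsize(path), "bytes")
```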
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
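For reference, here is a sketch of where the density inversion used below comes from, assuming the polytropic $\Gamma$-law equation of state already adopted in this thorn ($P=\kappa\rho^{\Gamma}$, $\epsilon = P/[(\Gamma-1)\rho]$): the specific enthalpy is $$h = 1 + \epsilon + \frac{P}{\rho} = 1 + \frac{\Gamma\,\kappa}{\Gamma-1}\,\rho^{\Gamma-1},$$ so the quantity $h-1$ (the `hm1` computed by `FMdisk_GRHD_hm1.h`) inverts to $$\rho = \left[\frac{(h-1)(\Gamma-1)}{\kappa\,\Gamma}\right]^{1/(\Gamma-1)},$$ which is the `pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) )` expression used below to set both `rho_max` and `rho[idx]`.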
###Code
%%writefile FishboneMoncriefID/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // For drand48()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
printf("Fishbone-Moncrief Disk Initial data.\n");
printf("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e\n",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
printf("===== OUTPUTS =====\n");
printf("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
printf("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e\n",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*drand48();
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Overwriting FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. $\text{interface.ccl}$: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile FishboneMoncriefID/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Overwriting FishboneMoncriefID/interface.ccl
###Markdown
2. $\text{param.ccl}$: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile FishboneMoncriefID/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
 0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Overwriting FishboneMoncriefID/param.ccl
###Markdown
3. $\text{schedule.ccl}$: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile FishboneMoncriefID/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
  READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::velx(Everywhere)
WRITES: hydrobase::vely(Everywhere)
WRITES: hydrobase::velz(Everywhere)
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Overwriting FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need $\text{make.code.defn}$, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile FishboneMoncriefID/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting FishboneMoncriefID/src/make.code.defn
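###Markdown
One practical note (an assumption about the standard Einstein Toolkit workflow, not something this tutorial automates): for the Cactus make system to ever read this `make.code.defn`, the generated `FishboneMoncriefID` directory must be placed under an arrangement in `arrangements/` and listed in the ThornList used when configuring Cactus, e.g.
```
# "MyArrangement" is a placeholder for whichever arrangement directory you copy the thorn into
MyArrangement/FishboneMoncriefID
```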
###Markdown
Step 3: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-FishboneMoncriefID.ipynb to latex
[NbConvertApp] Writing 78853 bytes to Tutorial-ETK_thorn-FishboneMoncriefID.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module.
import FishboneMoncriefID.FishboneMoncriefID as fmid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1e: Set up the Fishbone-Moncrief initial data. This sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions for the Weyl scalars.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
#KerrSchild_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# KerrSchild_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_velocities_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_rho_initial_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time"))
# Step 1f: Create directories for the thorn if they don't exist.
!mkdir FishboneMoncriefID 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir FishboneMoncriefID/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1g: Write the C code kernel to file.
with open("FishboneMoncriefID/src/KerrSchild.h", "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_velocities.h", "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_rho_initial.h", "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open("FishboneMoncriefID/src/FMdisk_GRHD_hm1.h", "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile FishboneMoncriefID/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Writing FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile FishboneMoncriefID/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Writing FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile FishboneMoncriefID/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Writing FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile FishboneMoncriefID/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
  READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::velx(Everywhere)
WRITES: hydrobase::vely(Everywhere)
WRITES: hydrobase::velz(Everywhere)
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Writing FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile FishboneMoncriefID/src/make.code.defn
SRCS = InitialData.c
###Output
Writing FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-FishboneMoncriefID.ipynb to latex
[NbConvertApp] Writing 80115 bytes to Tutorial-ETK_thorn-FishboneMoncriefID.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
`FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module.
import FishboneMoncriefID.FishboneMoncriefID as fmid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1e: Set up the Fishbone-Moncrief initial data. This sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions for the Weyl scalars.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
#KerrSchild_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# KerrSchild_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_velocities_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_rho_initial_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time"))
# Step 1f: Create directories for the thorn if they don't exist.
!mkdir FishboneMoncriefID 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir FishboneMoncriefID/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1g: Write the C code kernel to file.
with open("FishboneMoncriefID/src/KerrSchild.h", "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_velocities.h", "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_rho_initial.h", "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open("FishboneMoncriefID/src/FMdisk_GRHD_hm1.h", "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile FishboneMoncriefID/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Writing FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile FishboneMoncriefID/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Writing FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile FishboneMoncriefID/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Writing FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile FishboneMoncriefID/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
  READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::velx(Everywhere)
WRITES: hydrobase::vely(Everywhere)
WRITES: hydrobase::velz(Everywhere)
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Writing FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile FishboneMoncriefID/src/make.code.defn
SRCS = InitialData.c
###Output
Writing FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); `FishboneMoncriefID`: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Notebook Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for [Fishbone-Moncrief initial data](Tutorial-FishboneMoncriefID.ipynb) Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial notebook, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to `ETK`. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
from outputC import lhrh,outputC # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
import FishboneMoncriefID.FishboneMoncriefID as fmid # Stores closed-form SymPy expressions for F-M initial data.
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1d: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module. This
# sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
# Step 1f: Create directories for the thorn if they don't exist.
Ccodesdir = "FishboneMoncriefID"
cmd.mkdir(Ccodesdir)
cmd.mkdir(os.path.join(Ccodesdir,"src"))
# Step 1g: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"src","KerrSchild.h"), "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_velocities.h"), "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_rho_initial.h"), "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open(os.path.join(Ccodesdir,"src","FMdisk_GRHD_hm1.h"), "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$Here we construct `InitialData.c`, which contains C driver functions that pull in the necessary NRPy+ C-code kernels.First we set up driver routines to specify the Kerr-Schild metric and the Fishbone-Moncrief disk velocity at a given gridpoint.
###Code
%%writefile $Ccodesdir/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
###Output
Overwriting FishboneMoncriefID/src/InitialData.c
###Markdown
Next we set up the driver function for setting all metric and hydrodynamical fields $\rho,P,\epsilon,v^i$.**Important**: Suppose the Fishbone-Moncrief initial data yield a density $\rho(r,\theta)$ (which is valid for all Fishbone-Moncrief disks centered at the origin, $r=0$, as F-M disks are axisymmetric). Then the disk will have pressure$$P = \kappa \rho^\Gamma.$$Since the disk is not self-gravitating, we are allowed to rescale the maximum density in the disk to be one in code units; i.e., $\rho_{\rm max}=1$. This may be incompatible with the initial choice of polytropic constant $\kappa$, as rescaling the density results in a rescaling of pressure $P$, as follows.When we rescale $\rho$ so that the maximum density in the disk is one, we make the following transformation:$$\rho \to \rho' = \frac{\rho}{\rho_{\rm max}}.$$Since pressure has units of $\rho c^2$, and we use $G=c=1$ units, pressure must therefore be rescaled by the same factor:\begin{align}P \to P' &= \frac{P}{\rho_{\rm max}} \\&= \frac{\kappa \rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{\rho^\Gamma}{\rho_{\rm max}} \\&= \kappa \frac{(\rho' \rho_{\rm max})^\Gamma}{\rho_{\rm max}} \\&= \kappa \rho_{\rm max}^{\Gamma-1} (\rho')^\Gamma \\&= \kappa' (\rho')^\Gamma\end{align}Thus the polytropic equation of state is still valid, but only if $$\kappa' = \kappa \rho_{\rm max}^{\Gamma-1} = \frac{P_{\rm max}}{\rho_{\rm max}}.$$As e.g., `IllinoisGRMHD` requires that the initial $P'$ be given as a polytropic equation of state, with $P'_{\rm cold} = \kappa' (\rho')^\Gamma$, $\kappa'$ must be input into the `FishboneMoncriefID` (and `IllinoisGRMHD`) thorns instead of $\kappa$. If this does not happen, the code will error out, providing the correct value for $\kappa'$ that must be set in the parameter file.
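Before writing that check into the thorn, the cell below gives a minimal standalone sketch (added for illustration only; it is not part of the generated thorn, and the numerical values are assumptions rather than outputs of this notebook) confirming that rescaling $\rho$ by $\rho_{\rm max}$ preserves the polytropic form with $\kappa' = \kappa \rho_{\rm max}^{\Gamma-1}$.
###Code
# Illustrative check of kappa' = kappa * rho_max^(Gamma-1); all values below are assumed.
import numpy as np
kappa_check, Gamma_check = 1.0e-3, 4.0/3.0   # hypothetical EOS parameters
rho_check = np.linspace(0.1, 5.0, 50)        # hypothetical unscaled density profile
rho_max_check = rho_check.max()
P_check = kappa_check * rho_check**Gamma_check                     # original pressure
kappa_prime = kappa_check * rho_max_check**(Gamma_check - 1.0)     # rescaled polytropic constant
rho_prime = rho_check / rho_max_check                              # rescaled density, maximum value 1
# The rescaled pressure computed directly and via the rescaled EOS must agree:
print(np.allclose(P_check / rho_max_check, kappa_prime * rho_prime**Gamma_check))
###Output
_____no_output_____
###Markdown
With that consistency condition in mind, the C driver below performs the corresponding runtime check.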
###Code
%%writefile -a $Ccodesdir/src/InitialData.c
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum pressure and density
CCTK_REAL P_max, rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
P_max = kappa * pow(rho_max, gamma);
}
// We enforce units such that rho_max = 1.0; if these units are not obeyed, then
// we error out. If we did not error out, then the value of kappa used in all
// EOS routines would need to be changed, and generally these appear as
// read-only parameters.
if(fabs(P_max/rho_max - kappa) > 1e-8) {
printf("Error: To ensure that P = kappa*rho^Gamma, where rho_max = 1.0,\n");
printf(" you must set (in your parfile) the polytropic constant kappa = P_max/rho_max = %.15e\n\n",P_max/rho_max);
printf(" Needed values for kappa, for common values of Gamma:\n");
printf(" For Gamma =4/3, use kappa=K_initial=K_poly = 4.249572342020724e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma =5/3, use kappa=K_initial=K_poly = 6.799315747233158e-03 to ensure rho_max = 1.0\n");
printf(" For Gamma = 2, use kappa=K_initial=K_poly = 8.499144684041449e-03 to ensure rho_max = 1.0\n");
exit(1);
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Appending to FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile $Ccodesdir/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Overwriting FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile $Ccodesdir/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Overwriting FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile $Ccodesdir/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::vel(Everywhere) # Note that vel is a vector gridfunction.
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Overwriting FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile $Ccodesdir/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-FishboneMoncriefID")
###Output
Created Tutorial-ETK_thorn-FishboneMoncriefID.tex, and compiled LaTeX file
to PDF file Tutorial-ETK_thorn-FishboneMoncriefID.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); FishboneMoncriefID: An Einstein Toolkit Initial Data Thorn for Fishbone-Moncrief initial data Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)**Module Status:** Validated **Validation Notes:** Agrees with trusted Fishbone-Moncrief initial data module in HARM3D. Also generates results in agreement with trusted version sent to Event Horizon Telescope (EHT) GRMHD code comparison project collaborators. This thorn was used for the [IllinoisGRMHD](http://illinoisgrmhd.net) contribution to the [EHT GRMHD code comparison project](https://arxiv.org/abs/1904.04923). NRPy+ Source Code for this module: [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py) [\[tutorial\]](Tutorial-FishboneMoncriefID.ipynb) Constructs SymPy expressions for Fishbone-Moncrief initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up Fishbone-Moncrief initial data. In the [Tutorial-FishboneMoncriefID](Tutorial-FishboneMoncriefID.ipynb) tutorial module, we used NRPy+ to construct the SymPy expressions for Fishbone-Moncrief initial data. We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This module is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Call on NRPy+ to convert the SymPy expression for the Fishbone-Moncrief initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set `GridFuncMemAccess` to $\text{ETK}$. SymPy expressions for Fishbone-Moncrief initial data are written inside [FishboneMoncriefID/FishboneMoncriefID.py](../edit/FishboneMoncriefID/FishboneMoncriefID.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression for the
# Fishbone-Moncrief initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 1c: Call the FishboneMoncriefID() function from within the
# FishboneMoncriefID/FishboneMoncriefID.py module.
import FishboneMoncriefID.FishboneMoncriefID as fmid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
xcoord,ycoord,zcoord = gri.register_gridfunctions("AUX",["xcoord","ycoord","zcoord"])
gri.xx[0] = xcoord
gri.xx[1] = ycoord
gri.xx[2] = zcoord
# Step 1e: Set up the Fishbone-Moncrief initial data. This sets all the ID gridfunctions.
fmid.FishboneMoncriefID()
Valencia3velocityU = ixp.register_gridfunctions_for_single_rank1("EVOL","Valencia3velocityU")
# -={ Spacetime quantities: Generate C code from expressions and output to file }=-
KerrSchild_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","alpha"),rhs=fmid.IDalpha),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU0"),rhs=fmid.IDbetaU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU1"),rhs=fmid.IDbetaU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","betaU2"),rhs=fmid.IDbetaU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD00"),rhs=fmid.IDgammaDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD01"),rhs=fmid.IDgammaDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD02"),rhs=fmid.IDgammaDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD11"),rhs=fmid.IDgammaDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD12"),rhs=fmid.IDgammaDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaDD22"),rhs=fmid.IDgammaDD[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD00"),rhs=fmid.IDKDD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD01"),rhs=fmid.IDKDD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD02"),rhs=fmid.IDKDD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD11"),rhs=fmid.IDKDD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD12"),rhs=fmid.IDKDD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","KDD22"),rhs=fmid.IDKDD[2][2]),\
]
# Force outCverbose=False for this module to avoid gigantic C files
# filled with the non-CSE expressions for the Weyl scalars.
KerrSchild_CcodeKernel = fin.FD_outputC("returnstring",KerrSchild_to_print,params="outCverbose=False")
# -={ GRMHD quantities: Generate C code from expressions and output to file }=-
FMdisk_GRHD_rho_initial_to_print = [lhrh(lhs=gri.gfaccess("out_gfs","rho_initial"),rhs=fmid.rho_initial)]
FMdisk_GRHD_rho_initial_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_rho_initial_to_print)
FMdisk_GRHD_velocities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU0"),rhs=fmid.IDValencia3velocityU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU1"),rhs=fmid.IDValencia3velocityU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","Valencia3velocityU2"),rhs=fmid.IDValencia3velocityU[2]),\
]
FMdisk_GRHD_velocities_CcodeKernel = fin.FD_outputC("returnstring",FMdisk_GRHD_velocities_to_print)
#KerrSchild_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# KerrSchild_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_velocities_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time"))
#FMdisk_GRHD_rho_initial_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time"))
# Step 1f: Create directories for the thorn if they don't exist.
!mkdir FishboneMoncriefID 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir FishboneMoncriefID/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1g: Write the C code kernel to file.
with open("FishboneMoncriefID/src/KerrSchild.h", "w") as file:
file.write(str(KerrSchild_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_velocities.h", "w") as file:
file.write(str(FMdisk_GRHD_velocities_CcodeKernel.replace("time","cctk_time")))
with open("FishboneMoncriefID/src/FMdisk_GRHD_rho_initial.h", "w") as file:
file.write(str(FMdisk_GRHD_rho_initial_CcodeKernel.replace("time","cctk_time")))
hm1string = outputC(fmid.hm1,"hm1",filename="returnstring")
with open("FishboneMoncriefID/src/FMdisk_GRHD_hm1.h", "w") as file:
file.write(str(hm1string))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile FishboneMoncriefID/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h> // Needed for rand()
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
// Alias for "vel" vector gridfunction:
#define velx (&vel[0*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define vely (&vel[1*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
#define velz (&vel[2*cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]])
void FishboneMoncrief_KerrSchild(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *alphaGF,CCTK_REAL *betaU0GF,CCTK_REAL *betaU1GF,CCTK_REAL *betaU2GF,
CCTK_REAL *gammaDD00GF,CCTK_REAL *gammaDD01GF,CCTK_REAL *gammaDD02GF,CCTK_REAL *gammaDD11GF,CCTK_REAL *gammaDD12GF,CCTK_REAL *gammaDD22GF,
CCTK_REAL *KDD00GF,CCTK_REAL *KDD01GF,CCTK_REAL *KDD02GF,CCTK_REAL *KDD11GF,CCTK_REAL *KDD12GF,CCTK_REAL *KDD22GF)
{
DECLARE_CCTK_PARAMETERS
#include "KerrSchild.h"
}
void FishboneMoncrief_FMdisk_GRHD_velocities(const cGH* restrict const cctkGH,const CCTK_INT *cctk_lsh,
const CCTK_INT i0,const CCTK_INT i1,const CCTK_INT i2,
const CCTK_REAL *xcoordGF,const CCTK_REAL *ycoordGF,const CCTK_REAL *zcoordGF,
CCTK_REAL *Valencia3velocityU0GF, CCTK_REAL *Valencia3velocityU1GF, CCTK_REAL *Valencia3velocityU2GF)
{
DECLARE_CCTK_PARAMETERS
#include "FMdisk_GRHD_velocities.h"
}
void FishboneMoncrief_ET_GRHD_initial(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
CCTK_VINFO("Fishbone-Moncrief Disk Initial data.");
CCTK_VINFO("Using input parameters of\n a = %e,\n M = %e,\nr_in = %e,\nr_at_max_density = %e\nkappa = %e\ngamma = %e",a,M,r_in,r_at_max_density,kappa,gamma);
// First compute maximum density
CCTK_REAL rho_max;
{
CCTK_REAL hm1;
CCTK_REAL xcoord = r_at_max_density;
CCTK_REAL ycoord = 0.0;
CCTK_REAL zcoord = 0.0;
{
#include "FMdisk_GRHD_hm1.h"
}
rho_max = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) );
}
#pragma omp parallel for
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
CCTK_REAL xcoord = x[idx];
CCTK_REAL ycoord = y[idx];
CCTK_REAL zcoord = z[idx];
CCTK_REAL rr = r[idx];
FishboneMoncrief_KerrSchild(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
alp,betax,betay,betaz,
gxx,gxy,gxz,gyy,gyz,gzz,
kxx,kxy,kxz,kyy,kyz,kzz);
CCTK_REAL hm1;
bool set_to_atmosphere=false;
if(rr > r_in) {
{
#include "FMdisk_GRHD_hm1.h"
}
if(hm1 > 0) {
rho[idx] = pow( hm1 * (gamma-1.0) / (kappa*gamma), 1.0/(gamma-1.0) ) / rho_max;
press[idx] = kappa*pow(rho[idx], gamma);
// P = (\Gamma - 1) rho epsilon
eps[idx] = press[idx] / (rho[idx] * (gamma - 1.0));
FishboneMoncrief_FMdisk_GRHD_velocities(cctkGH,cctk_lsh,
i,j,k,
x,y,z,
velx,vely,velz);
} else {
set_to_atmosphere=true;
}
} else {
set_to_atmosphere=true;
}
// Outside the disk? Set to atmosphere all hydrodynamic variables!
if(set_to_atmosphere) {
// Choose an atmosphere such that
// rho = 1e-5 * r^(-3/2), and
// P = k rho^gamma
// Add 1e-100 or 1e-300 to rr or rho to avoid divisions by zero.
rho[idx] = 1e-5 * pow(rr + 1e-100,-3.0/2.0);
press[idx] = kappa*pow(rho[idx], gamma);
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
w_lorentz[idx] = 1.0;
velx[idx] = 0.0;
vely[idx] = 0.0;
velz[idx] = 0.0;
}
}
CCTK_INT final_idx = CCTK_GFINDEX3D(cctkGH,cctk_lsh[0]-1,cctk_lsh[1]-1,cctk_lsh[2]-1);
CCTK_VINFO("===== OUTPUTS =====");
CCTK_VINFO("betai: %e %e %e \ngij: %e %e %e %e %e %e \nKij: %e %e %e %e %e %e\nalp: %e\n",betax[final_idx],betay[final_idx],betaz[final_idx],gxx[final_idx],gxy[final_idx],gxz[final_idx],gyy[final_idx],gyz[final_idx],gzz[final_idx],kxx[final_idx],kxy[final_idx],kxz[final_idx],kyy[final_idx],kyz[final_idx],kzz[final_idx],alp[final_idx]);
CCTK_VINFO("rho: %.15e\nPressure: %.15e\nvx: %.15e\nvy: %.15e\nvz: %.15e",rho[final_idx],press[final_idx],velx[final_idx],vely[final_idx],velz[final_idx]);
}
void FishboneMoncrief_ET_GRHD_initial__perturb_pressure(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
for(CCTK_INT k=0;k<cctk_lsh[2];k++) for(CCTK_INT j=0;j<cctk_lsh[1];j++) for(CCTK_INT i=0;i<cctk_lsh[0];i++) {
CCTK_INT idx = CCTK_GFINDEX3D(cctkGH,i,j,k);
// Generate random number in range [0,1),
// snippet courtesy http://daviddeley.com/random/crandom.htm
CCTK_REAL random_number_between_0_and_1 = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) );
CCTK_REAL random_number_between_min_and_max = random_min + (random_max - random_min)*random_number_between_0_and_1;
press[idx] = press[idx]*(1.0 + random_number_between_min_and_max);
// Add 1e-300 to rho to avoid division by zero when density is zero.
eps[idx] = press[idx] / ((rho[idx] + 1e-300) * (gamma - 1.0));
}
}
###Output
Overwriting FishboneMoncriefID/src/InitialData.c
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile FishboneMoncriefID/interface.ccl
implements: FishboneMoncriefID
inherits: admbase grid hydrobase
###Output
Writing FishboneMoncriefID/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile FishboneMoncriefID/param.ccl
shares: grid
shares: ADMBase
USES CCTK_INT lapse_timelevels
USES CCTK_INT shift_timelevels
USES CCTK_INT metric_timelevels
USES KEYWORD metric_type
EXTENDS KEYWORD initial_data
{
"FishboneMoncriefID" :: "Initial data from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_lapse
{
"FishboneMoncriefID" :: "Initial lapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_shift
{
"FishboneMoncriefID" :: "Initial shift from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtlapse
{
"FishboneMoncriefID" :: "Initial dtlapse from FishboneMoncriefID solution"
}
EXTENDS KEYWORD initial_dtshift
{
"FishboneMoncriefID" :: "Initial dtshift from FishboneMoncriefID solution"
}
shares: HydroBase
EXTENDS KEYWORD initial_hydro
{
"FishboneMoncriefID" :: "Initial GRHD data from FishboneMoncriefID solution"
}
#["r_in","r_at_max_density","a","M"] A_b, kappa, gamma
restricted:
CCTK_REAL r_in "Fixes the inner edge of the disk"
{
0.0:* :: "Must be positive"
} 6.0
restricted:
CCTK_REAL r_at_max_density "Radius at maximum disk density. Needs to be > r_in"
{
0.0:* :: "Must be positive"
} 12.0
restricted:
CCTK_REAL a "The spin parameter of the black hole"
{
 0:1.0 :: "Positive values, up to 1. Negative disallowed, as certain roots are chosen in the hydro fields setup. Check those before enabling negative spins!"
} 0.9375
restricted:
CCTK_REAL M "Kerr-Schild BH mass. Probably should always set M=1."
{
0.0:* :: "Must be positive"
} 1.0
restricted:
CCTK_REAL A_b "Scaling factor for the vector potential"
{
*:* :: ""
} 1.0
restricted:
CCTK_REAL kappa "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.0e-3
restricted:
CCTK_REAL gamma "Equation of state: P = kappa * rho^gamma"
{
0.0:* :: "Positive values"
} 1.3333333333333333333333333333
##################################
# PRESSURE PERTURBATION PARAMETERS
private:
CCTK_REAL random_min "Floor value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} -0.02
private:
CCTK_REAL random_max "Ceiling value of random perturbation to initial pressure, where perturbed pressure = pressure*(1.0 + (random_min + (random_max-random_min)*RAND[0,1)))"
{
*:* :: "Any value"
} 0.02
###Output
Writing FishboneMoncriefID/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile FishboneMoncriefID/schedule.ccl
STORAGE: ADMBase::metric[metric_timelevels], ADMBase::curv[metric_timelevels], ADMBase::lapse[lapse_timelevels], ADMBase::shift[shift_timelevels]
schedule FishboneMoncrief_ET_GRHD_initial IN HydroBase_Initial
{
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
  READS: grid::z(Everywhere)
WRITES: admbase::alp(Everywhere)
WRITES: admbase::betax(Everywhere)
WRITES: admbase::betay(Everywhere)
WRITES: admbase::betaz(Everywhere)
WRITES: admbase::kxx(Everywhere)
WRITES: admbase::kxy(Everywhere)
WRITES: admbase::kxz(Everywhere)
WRITES: admbase::kyy(Everywhere)
WRITES: admbase::kyz(Everywhere)
WRITES: admbase::kzz(Everywhere)
WRITES: admbase::gxx(Everywhere)
WRITES: admbase::gxy(Everywhere)
WRITES: admbase::gxz(Everywhere)
WRITES: admbase::gyy(Everywhere)
WRITES: admbase::gyz(Everywhere)
WRITES: admbase::gzz(Everywhere)
WRITES: hydrobase::velx(Everywhere)
WRITES: hydrobase::vely(Everywhere)
WRITES: hydrobase::velz(Everywhere)
WRITES: hydrobase::rho(Everywhere)
WRITES: hydrobase::eps(Everywhere)
WRITES: hydrobase::press(Everywhere)
} "Set up general relativistic hydrodynamic (GRHD) fields for Fishbone-Moncrief disk"
schedule FishboneMoncrief_ET_GRHD_initial__perturb_pressure IN CCTK_INITIAL AFTER Seed_Magnetic_Fields BEFORE IllinoisGRMHD_ID_Converter
{
LANG: C
} "Add random perturbation to initial pressure, after seed magnetic fields have been set up (in case we'd like the seed magnetic fields to depend on the pristine pressures)"
###Output
Writing FishboneMoncriefID/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile FishboneMoncriefID/src/make.code.defn
SRCS = InitialData.c
###Output
Writing FishboneMoncriefID/src/make.code.defn
###Markdown
Step 3: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-FishboneMoncriefID.pdf](Tutorial-ETK_thorn-FishboneMoncriefID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ETK_thorn-FishboneMoncriefID.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-FishboneMoncriefID.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-FishboneMoncriefID.ipynb to latex
[NbConvertApp] Writing 60522 bytes to Tutorial-ETK_thorn-FishboneMoncriefID.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
11 - Introduction to Python/3_Basic Python Syntax/1_Arithmetic Operators (3:23)/Arithmetic Operators - Solution_Py2.ipynb | ###Markdown
Arithmetic operators *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Combine 15 and 23.
###Code
15 + 23
###Output
_____no_output_____
###Markdown
Subtract 50 from 26.
###Code
26 - 50
###Output
_____no_output_____
###Markdown
Divide 20 by 4.
###Code
20 / 4
###Output
_____no_output_____
###Markdown
Divide 22 by 4.
###Code
22 / 4
###Output
_____no_output_____
###Markdown
Obtain the remainder of the division of 22 by 4.
###Code
22 % 4
###Output
_____no_output_____
###Markdown
Divide the float 22 by 4.
###Code
float(22) / 4
###Output
_____no_output_____
###Markdown
or:
###Code
22.0 / 4
###Output
_____no_output_____
###Markdown
Multiply 6 by 8.
###Code
6 * 8
###Output
_____no_output_____
###Markdown
Raise 15 to the power of 2.
###Code
15 ** 2
###Output
_____no_output_____ |
Interview Preparation Kit/3. Dictionaries and Hashmaps/Two Strings.ipynb | ###Markdown
Two Strings
###Code
#!/bin/python3
import math
import os
import random
import re
import sys
# Complete the twoStrings function below.
def twoStrings(s1, s2):
d = dict()
for i in s1:
d[i] = d.get(i, 0) + 1
for j in s2:
if d.get(j): return 'YES'
return 'NO'
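# Illustrative behaviour (added note, not part of the HackerRank harness below, which reads input
# from stdin and writes results to OUTPUT_PATH): twoStrings('hello', 'world') returns 'YES' because
# 'l' and 'o' occur in both strings, while twoStrings('hi', 'world') returns 'NO' since no character is shared.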
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
q = int(input())
for q_itr in range(q):
s1 = input()
s2 = input()
result = twoStrings(s1, s2)
fptr.write(result + '\n')
fptr.close()
###Output
_____no_output_____ |
data_preprocessing/1_data_generation/4_Aggregated_Spatial_Features/.ipynb_checkpoints/accident_spatialattributes-checkpoint.ipynb | ###Markdown
hann city
###Code
#city='new_method/Nurmberg'
city='new_method/hann/clustering/kmeans++'
data_tr1=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/train_acc_munus1assigned.csv")
#data_tr['geohash_five'] = data_tr.apply(lambda row: geohash(row['acc_lat'],row['acc_long']), axis=1)
data_tr1
data_tr1=data_tr1[['acc_id','cluster_id']]
data_tr1
#city='new_method/hann/clustering/dbscan'
data_trOrg=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/train_acc.csv")
#data_tr['geohash_five'] = data_tr.apply(lambda row: geohash(row['acc_lat'],row['acc_long']), axis=1)
data_trOrg
#data_tr=pd.merge(data_trOrg,data_tr1, on='acc_id')
#data_tr.drop(['cluster_id_x'],axis=1,inplace=True)
data_tr=data_trOrg.copy()
data_tr
import geohash as gh
def geohash(lat,long):
geo=gh.encode(float(lat), float(long), precision=5)
return geo
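# Note (added): precision=5 geohashes cover cells of roughly 4.9 km x 4.9 km, so 'geohash_five' is a coarse spatial key.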
data_tr['geohash_five'] = data_tr.apply(lambda row: geohash(row['acc_lat'],row['acc_long']), axis=1)
data_tr
# road conditions:- clustered according to cluster_id/grid
#city='new_method/som_clustering40x40'
import pandas as pd
df_utype1=data_tr[['cluster_id','UTYP1']]
df_zustand = pd.concat([df_utype1,pd.get_dummies(df_utype1['UTYP1'], prefix='UTYP1')],axis=1)
df_zustand=df_zustand.groupby('cluster_id').mean().reset_index()
df_zustand.drop(['UTYP1'],axis=1,inplace=True)
df_zustand.to_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geodata/utypeTrainOnly.csv',index=False)
df_zustand
utype1 = pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/utypeTrainOnly.csv")
display(utype1)
col=utype1[['UTYP1_1','UTYP1_2','UTYP1_3','UTYP1_4','UTYP1_5','UTYP1_6','UTYP1_7']].values
print(col)
NLP_dict_type1={}
for index, row in utype1.iterrows():
NLP_dict_type1[row.cluster_id] = np.array(col[index])
#print( np.array([col.iloc[index]]))
#print(NLP_dict_type1['u1kfry'].shape)
f = open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/utypeTrainOnly.pkl","wb")
pickle.dump(NLP_dict_type1,f)
f.close()
import pandas as pd
df_utype1=data_tr[['cluster_id','UART']]
df_zustand = pd.concat([df_utype1,pd.get_dummies(df_utype1['UART'], prefix='UART')],axis=1)
#df_zustand=df_zustand[['geohash','STRZUSTAND_0','STRZUSTAND_1','STRZUSTAND_2']]
df_zustand=df_zustand.groupby('cluster_id').mean().reset_index()
#df_uart=df.loc[df['geohash']=='u1kfc4']
df_zustand.drop(['UART'],axis=1,inplace=True)
df_zustand.to_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geodata/uartTrainOnly.csv',index=False)
df_zustand
uart = pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/uartTrainOnly.csv")
display(uart)
col=uart[['UART_0','UART_1','UART_2','UART_3','UART_4','UART_5','UART_6','UART_7','UART_8','UART_9']].values
#=NLP_map[['geohash','combine']]
print(col)
#display(NLP_map.dtypes)
NLP_dict_uart={}
for index, row in uart.iterrows():
#print('index',index)
# print('row',str(row.combine))
NLP_dict_uart[row.cluster_id] = np.array(col[index])
#print(NLP_dict_uart['u1kfry'])
f = open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/uartTrainOnly.pkl","wb")
pickle.dump(NLP_dict_uart,f)
f.close()
import pandas as pd
df_utype1=data_tr[['cluster_id','STRZUSTAND']]
df_zustand = pd.concat([df_utype1,pd.get_dummies(df_utype1['STRZUSTAND'], prefix='STRZUSTAND')],axis=1)
#df_zustand=df_zustand[['geohash','STRZUSTAND_0','STRZUSTAND_1','STRZUSTAND_2']]
df_zustand=df_zustand.groupby('cluster_id').mean().reset_index()
#df_uart=df.loc[df['geohash']=='u1kfc4']
df_zustand.drop(['STRZUSTAND'],axis=1,inplace=True)
df_zustand.to_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geodata/zustandTrainOnly.csv',index=False)
df_zustand
import pickle
NLP_map = pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/zustandTrainOnly.csv")
display(NLP_map)
#display(NLP_map)
col=NLP_map[['STRZUSTAND_0','STRZUSTAND_1','STRZUSTAND_2']].values
print(col)
NLP_dict_zu={}
for index, row in NLP_map.iterrows():
NLP_dict_zu[row.cluster_id] = np.array(col[index])
#print( np.array([col.iloc[index]]))
#print(NLP_dict_zu['u1kfry'])
f = open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/zustandTrainOnly.pkl","wb")
pickle.dump(NLP_dict_zu,f)
f.close()
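# Illustrative round-trip check (added sketch, not part of the original workflow): reload the pickle
# written just above and confirm the cluster_id -> feature-vector mapping survived serialization.
with open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/zustandTrainOnly.pkl", "rb") as f_check:
    reloaded_zu = pickle.load(f_check)
assert set(reloaded_zu.keys()) == set(NLP_dict_zu.keys())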
import pandas as pd
df_acccount=data_tr[['cluster_id','acc_id']]
df_gp=df_acccount.groupby('cluster_id').count()
df_gp=df_gp.reset_index()
df_gp.columns=['cluster_id','acc_count']
df_gp.to_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geodata/acc_countTrainOnly.csv',index=False)
print(city)
import pickle
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import pyforest
#NLP_map = pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/Alldata_baveria/geodata/speed.csv")
NLP_map=df_gp.copy()
display(NLP_map)
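# Note (added): MinMaxScaler with feature_range=(0, 1) rescales each column to [0, 1] via
# (x - column_min) / (column_max - column_min); here it normalizes the per-cluster accident counts.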
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(NLP_map.loc[:,'acc_count':])
scaled_values = scaler.transform(NLP_map.loc[:,'acc_count':])
NLP_map.loc[:,'acc_count':] = scaled_values
display(NLP_map)
col=NLP_map[['acc_count']].values
print(col)
NLP_dict_outspeedregion={}
for index, row in NLP_map.iterrows():
NLP_dict_outspeedregion[row.cluster_id] = np.array(col[index])
#print( np.array([col.iloc[index]]))
#print(NLP_dict_outspeedregion)
f = open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/acc_countTrainOnly.pkl","wb")
pickle.dump(NLP_dict_outspeedregion,f)
f.close()
# per cluster geohash 153x153 count
print(city)
geo=pd.read_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geohash_cluster7.csv')
geo6togeo7=geo[['geohash','cluster_id']]
geo6togeo7=geo6togeo7.groupby('cluster_id').count()
geo6togeo7=geo6togeo7.reset_index()
geo6togeo7.columns=['cluster_id','count']
geo6togeo7
geo6togeo7.loc[geo6togeo7['cluster_id']==195]
#city='new_method/braun'
print(city)
import pickle
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import pyforest
#NLP_map = pd.read_csv('/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/geodata/clusterToGeohashCount.csv')
NLP_map=geo6togeo7.copy()
display(NLP_map)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(NLP_map.loc[:,'count':])
scaled_values = scaler.transform(NLP_map.loc[:,'count':])
NLP_map.loc[:,'count':] = scaled_values
display(NLP_map)
col=NLP_map[['count']].values
print(col)
NLP_dict_outspeedregion={}
for index, row in NLP_map.iterrows():
NLP_dict_outspeedregion[row.cluster_id] = np.array(col[index])
#print( np.array([col.iloc[index]]))
#print(NLP_dict_outspeedregion)
f = open("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/geodata/clusterToGeohashCountTrainOnly.pkl","wb")
pickle.dump(NLP_dict_outspeedregion,f)
f.close()
#shifted grid
city='new_method'
method='dbscan'
import pandas as pd
# data_train_acc=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/rawdata/all_acc/processed_data/"+city+"/train_acc_actual.csv",header=0)
# data_train_nonac=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/rawdata/all_acc/processed_data/"+city+"/train_nonaccdata.csv",header=0)
data_train_acc=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city+"/train_acc.csv",header=0)
data_train_acc
city1='new_method/geohash_Shifted'
method='dbscan'
import pandas as pd
# data_train_acc=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/rawdata/all_acc/processed_data/"+city+"/train_acc_actual.csv",header=0)
# data_train_nonac=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/rawdata/all_acc/processed_data/"+city+"/train_nonaccdata.csv",header=0)
data_train_acc_shifted=pd.read_csv("/data/dadwal/data/DAP_data/dataPrepTrainTestCluster/Baveria/"+city1+"/train_acc.csv",header=0)
data_train_acc_shifted.drop(['geohash','acc_longShifted'],axis=1,inplace=True)
data_train_acc_shifted=data_train_acc_shifted[['acc_id', 'geohash_shifted', 'acc_long', 'acc_lat', 'UJAHR', 'UMONAT',
'USTUNDE', 'UWOCHENTAG', 'STRZUSTAND', 'UART', 'UTYP1',
'sessiondivided', 'count', 'geohash7']]
data_train_acc_shifted.columns=['acc_id', 'geohash', 'acc_long', 'acc_lat', 'UJAHR', 'UMONAT',
'USTUNDE', 'UWOCHENTAG', 'STRZUSTAND', 'UART', 'UTYP1',
'sessiondivided', 'count', 'geohash7']
data_train_acc_shifted
data_tr=pd.concat([data_train_acc,data_train_acc_shifted])
data_tr
import matplotlib.pyplot as plt
# line 1 points
x1 = ['1x1_O','OS_1x1_TestO','1x1_S','OS_1x1_TestS']
y1 = [0.583,0.577,0.585,0.52]
# plotting the line 1 points
plt.plot(x1, y1, label = "F1")
# line 2 points
plt.xlabel('Exp')
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('F1 score Accident ')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
# final reults
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','GBC','LR','DAP']
y1 = [0.598,0.57,0.52,0.57,0.499]
# plotting the line 1 points
plt.plot(x1, y1, label = "1x1Grid")
# line 2 points
plt.xlabel('Exp')
y2 = [0.52,0.5,0.16,0.49,0.45]
# plotting the line 1 points
plt.plot(x1, y2, label = "5x5Grid")
y3 = [0.56,0.517,0.52,0.53,0.426]
# plotting the line 1 points
plt.plot(x1, y3, label = "clustering")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('Hannover ')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','GBC','LR','DAP']
y1 = [0.57,0.528,0.51,0.52,0.497]
# plotting the line 1 points
plt.plot(x1, y1, label = "1x1Grid")
plt.xlabel('Exp')
# line 2 points
y2 = [0.51,0.49,0.26,0.4,0.44]
# plotting the line 2 points
plt.plot(x1, y2, label = "5x5Grid")
# line 3 points
y3 = [0.55,0.51,0.51,0.46,0.44]
# plotting the line 3 points
plt.plot(x1, y3, label = "clustering")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('Munich ')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','GBC','LR','DAP']
y1 = [0.60,0.57,0.55,0.57,0.516]
# plotting the line 1 points
plt.plot(x1, y1, label = "1x1Grid")
plt.xlabel('Exp')
# line 2 points
y2 = [0.53,0.51,0.18,0.48,0.45]
# plotting the line 2 points
plt.plot(x1, y2, label = "5x5Grid")
# line 3 points
y3 = [0.573,0.54,0.54,0.535,0.45]
# plotting the line 3 points
plt.plot(x1, y3, label = "clustering")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('Nuremberg ')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
# 5x5 grid region
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','LR','DAP','GBC']
y1 = [0.51,0.49,0.4,0.44,0.26]
# plotting the line 1 points
plt.plot(x1, y1, label = "Munich")
plt.xlabel('Exp')
# line 2 points
y2 = [0.53,0.51,0.48,0.45,0.18]
# plotting the line 2 points
plt.plot(x1, y2, label = "Nuremberg")
# line 3 points
y3 = [0.52,0.5,0.49,0.45,0.16]
# plotting the line 3 points
plt.plot(x1, y3, label = "Hanover")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('5x5 grid ')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
#1x1 grid
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','LR','DAP','GBC']
y1 = [0.57,0.528,0.52,0.497,0.51]
# plotting the line 1 points
plt.plot(x1, y1, label = "Munich")
plt.xlabel('Exp')
# line 2 points
y2 = [0.60,0.57,0.57,0.516,0.55]
# plotting the line 2 points
plt.plot(x1, y2, label = "Nuremberg")
# line 3 points
y3 = [0.598,0.57,0.57,0.499,0.52]
# plotting the line 3 points
plt.plot(x1, y3, label = "Hanover")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('1x1 grid')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
# clustering
import matplotlib.pyplot as plt
# line 1 points
x1 = ['AGAP','DNN','LR','DAP','GBC']
y1 = [0.55,0.51,0.46,0.44,0.51]
# plotting the line 1 points
plt.plot(x1, y1, label = "Munich")
plt.xlabel('Exp')
# line 2 points
y2 = [0.573,0.54,0.535,0.45,0.54]
# plotting the line 2 points
plt.plot(x1, y2, label = "Nuremberg")
# line 3 points
y3 = [0.56,0.517,0.53,0.426,0.52]
# plotting the line 3 points
plt.plot(x1, y3, label = "Hanover")
# Set the y axis label of the current axis.
plt.ylabel('F1-score Acc')
# Set a title of the current axes.
plt.title('clustering')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.show()
from shapely import geometry
from shapely.geometry import shape, Point
import geohash as gh
import numpy as np
import pandas as pd
import json
#import dinet_base as dinet
import tensorflow as tf
import os
import logging
import argparse
import configparser
import numpy as np
import sys
import pandas as pd
import datetime
import random
def compute_geohash_tiles_from_polygon(polygon_1):
"""Computes all hex tile in the given polygon
:param polygon: the polygon
:return: a list of geohashes
"""
polygon = shape(polygon_1)
checked_geohashes = set()
geohash_stack = set()
geohashes = []
# get center of bounding, assuming the earth is flat ;) ,
center_latitude = polygon.centroid.coords[0][1]
center_longitude = polygon.centroid.coords[0][0]
# center_latitude = 52.383260
# center_longitude = 9.758040
print(center_latitude)
center_geohash = gh.encode(center_latitude, center_longitude, precision=7)
print(center_geohash)
geohashes.append(center_geohash)
geohash_stack.add(center_geohash)
checked_geohashes.add(center_geohash)
while len(geohash_stack) > 0:
current_geohash = geohash_stack.pop()
neighbors = gh.neighbors(current_geohash)
for neighbor in neighbors:
point = geometry.Point(gh.decode(neighbor)[::-1])
if neighbor not in checked_geohashes and polygon.contains(point):
geohashes.append(neighbor)
geohash_stack.add(neighbor)
checked_geohashes.add(neighbor)
return geohashes
cities = ['saarland']
# method = 'dbscan'
for city in cities:
with open('/data/dadwal/dir_14oct/accident_prediction/data/geojson/'+city+'.geojson') as f:
data = json.load(f)
#print(data)
ge = compute_geohash_tiles_from_polygon(data)
print(len(ge))
listofzeros = [1] * len(ge)
region_grid = pd.DataFrame(
{'geohash': ge,
'count': listofzeros
})
region_grid.to_csv('/data/dadwal//data/DAP_data/dataPrepTrainTestCluster/Baveria/'+city+'/numberofGridRegionGeo7.csv', index=False)
###Output
49.384433342058735
u0ubynp
168866
|
Exploratory Data Analysis - Sports/Exploratory Data Analysis - Sports.ipynb | ###Markdown
Author - Kushal Das Exploratory Data Analysis - Sports- Problem Statement: Perform Exploratory Data Analysis on 'Indian Premier League' - As a sports analyst, find out the most successful teams, players and factors contributing to the win or loss of a team.- Suggest teams or players a company should endorse for its products. Importing LIBRARIES:
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("matches.csv")
df.head()
###Output
_____no_output_____
###Markdown
Data information:
###Code
df.info()
df.shape
df.describe()
###Output
_____no_output_____
###Markdown
Number of matches in the dataset.
###Code
df['id'].max()
###Output
_____no_output_____
###Markdown
Seasons covered in the dataset.
###Code
df['season'].unique()
len(df['season'].unique())
###Output
_____no_output_____
###Markdown
Team won by Maximum Runs.
###Code
df.iloc[df['win_by_runs'].idxmax()]
df.iloc[df['win_by_runs'].idxmax()]['winner']
###Output
_____no_output_____
###Markdown
Team won by Maximum Wickets.
###Code
df.iloc[df['win_by_wickets'].idxmax()]['winner']
###Output
_____no_output_____
###Markdown
Team won by Minimum Runs.
###Code
df.iloc[df[df['win_by_runs'].ge(1)].win_by_runs.idxmin()]['winner']
###Output
_____no_output_____
###Markdown
Team won by Minimum Wickets.
###Code
df.iloc[df[df['win_by_wickets'].ge(1)].win_by_wickets.idxmin()]
df.iloc[df[df['win_by_wickets'].ge(1)].win_by_wickets.idxmin()]['winner']
###Output
_____no_output_____
###Markdown
Observation:- Mumbai Indians is the team that won by both the maximum and the minimum runs- Kolkata Knight Riders is the team that won by both the maximum and the minimum wickets Season with the most matches
###Code
sns.countplot(x='season', data=df)
plt.show()
###Output
_____no_output_____
###Markdown
> 2013 is the season with the most matches
###Code
data = df.winner.value_counts()
sns.barplot(y = data.index, x = data, orient='h')
###Output
_____no_output_____
###Markdown
> Mumbai Indians have won the most matches Top Player of the match winners
###Code
top_players = df.player_of_match.value_counts()[:10]
fig, ax = plt.subplots()
ax.set_ylim([0,20])
ax.set_ylabel("Count")
ax.set_title("Top player of the match Winners")
# bar chart of the ten most frequent player-of-the-match winners
sns.barplot(x=top_players.index, y=top_players, orient='v', palette="Blues", ax=ax)
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____ |
sagemaker-spark/pyspark_mnist/pyspark_mnist_xgboost.ipynb | ###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
cn_regions = ['cn-north-1', 'cn-northwest-1']
region = boto3.Session().region_name
endpoint_domain = 'com.cn' if region in cn_regions else 'com'
spark._jsc.hadoopConfiguration().set('fs.s3a.endpoint', 's3.{}.amazonaws.{}'.format(region, endpoint_domain))
trainingData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/train/'.format(region))
testData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/test/'.format(region))
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType='ml.m4.xlarge',
trainingInstanceCount=1,
endpointInstanceType='ml.m4.xlarge',
endpointInitialInstanceCount=1)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption='', xlabel='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap='gray')
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print('\n\n\nCluster {}:'.format(int(cluster)))
digits=[ img for l, img in zip(clusters, images) if int(l.prediction) == cluster ]
height=((len(digits) - 1) // 5) + 1
width=5
plt.rcParams["figure.figsize"] = (width,height)
_, subplots = plt.subplots(height, width)
subplots=np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits):]:
subplot.axis('off')
plt.show()
###Output
_____no_output_____
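###Markdown
Beyond the visual inspection above, a quick quantitative check is possible. The cell below is a minimal sketch (assuming the `label` and `prediction` columns produced by the transform above) that uses Spark's MulticlassClassificationEvaluator to report overall accuracy.
###Code
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Compare the endpoint's "prediction" column against the ground-truth "label" column.
evaluator = MulticlassClassificationEvaluator(
    labelCol="label", predictionCol="prediction", metricName="accuracy"
)
print("Test accuracy: {:.3f}".format(evaluator.evaluate(transformedData)))
###Output
_____no_output_____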
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
cn_regions = ['cn-north-1', 'cn-northwest-1']
region = boto3.Session().region_name
endpoint_domain = 'com.cn' if region in cn_regions else 'com'
spark._jsc.hadoopConfiguration().set('fs.s3a.endpoint', 's3.{}.amazonaws.{}'.format(region, endpoint_domain))
trainingData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/train/'.format(region))
testData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/test/'.format(region))
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType='ml.m4.xlarge',
trainingInstanceCount=1,
endpointInstanceType='ml.m4.xlarge',
endpointInitialInstanceCount=1)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption='', xlabel='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap='gray')
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print('\n\n\nCluster {}:'.format(int(cluster)))
digits=[ img for l, img in zip(clusters, images) if int(l.prediction) == cluster ]
height=((len(digits) - 1) // 5) + 1
width=5
plt.rcParams["figure.figsize"] = (width,height)
_, subplots = plt.subplots(height, width)
subplots=np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits):]:
subplot.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = (
SparkSession.builder.config("spark.driver.extraClassPath", classpath)
.master("local[*]")
.getOrCreate()
)
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
cn_regions = ["cn-north-1", "cn-northwest-1"]
region = boto3.Session().region_name
endpoint_domain = "com.cn" if region in cn_regions else "com"
spark._jsc.hadoopConfiguration().set(
"fs.s3a.endpoint", "s3.{}.amazonaws.{}".format(region, endpoint_domain)
)
trainingData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/train/".format(region))
)
testData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/test/".format(region))
)
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType="ml.m4.xlarge",
trainingInstanceCount=1,
endpointInstanceType="ml.m4.xlarge",
endpointInitialInstanceCount=1,
)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption="", xlabel="", subplot=None):
if subplot == None:
_, (subplot) = plt.subplots(1, 1)
imgr = img.reshape((28, 28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap="gray")
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print("\n\n\nCluster {}:".format(int(cluster)))
digits = [img for l, img in zip(clusters, images) if int(l.prediction) == cluster]
height = ((len(digits) - 1) // 5) + 1
width = 5
plt.rcParams["figure.figsize"] = (width, height)
_, subplots = plt.subplots(height, width)
subplots = np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits) :]:
subplot.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
region = boto3.Session().region_name
spark._jsc.hadoopConfiguration().set('fs.s3a.endpoint', 's3.{}.amazonaws.com'.format(region))
trainingData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/train/'.format(region))
testData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/test/'.format(region))
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType='ml.m4.xlarge',
trainingInstanceCount=1,
endpointInstanceType='ml.m4.xlarge',
endpointInitialInstanceCount=1)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(trainingData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption='', xlabel='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap='gray')
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print('\n\n\nCluster {}:'.format(int(cluster)))
digits=[ img for l, img in zip(clusters, images) if int(l.prediction) == cluster ]
height=((len(digits) - 1) // 5) + 1
width=5
plt.rcParams["figure.figsize"] = (width,height)
_, subplots = plt.subplots(height, width)
subplots=np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits):]:
subplot.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = (
SparkSession.builder.config("spark.driver.extraClassPath", classpath)
.master("local[*]")
.getOrCreate()
)
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
cn_regions = ["cn-north-1", "cn-northwest-1"]
region = boto3.Session().region_name
endpoint_domain = "com.cn" if region in cn_regions else "com"
spark._jsc.hadoopConfiguration().set(
"fs.s3a.endpoint", "s3.{}.amazonaws.{}".format(region, endpoint_domain)
)
trainingData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/train/".format(region))
)
testData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/test/".format(region))
)
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType="ml.m4.xlarge",
trainingInstanceCount=1,
endpointInstanceType="ml.m4.xlarge",
endpointInitialInstanceCount=1,
)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption="", xlabel="", subplot=None):
if subplot == None:
_, (subplot) = plt.subplots(1, 1)
imgr = img.reshape((28, 28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap="gray")
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print("\n\n\nCluster {}:".format(int(cluster)))
digits = [img for l, img in zip(clusters, images) if int(l.prediction) == cluster]
height = ((len(digits) - 1) // 5) + 1
width = 5
plt.rcParams["figure.figsize"] = (width, height)
_, subplots = plt.subplots(height, width)
subplots = np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits) :]:
subplot.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
region = boto3.Session().region_name
spark._jsc.hadoopConfiguration().set('fs.s3a.endpoint', 's3.{}.amazonaws.com'.format(region))
trainingData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/train/'.format(region))
testData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/test/'.format(region))
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType='ml.m4.xlarge',
trainingInstanceCount=1,
endpointInstanceType='ml.m4.xlarge',
endpointInitialInstanceCount=1)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption='', xlabel='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap='gray')
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print('\n\n\nCluster {}:'.format(int(cluster)))
digits=[ img for l, img in zip(clusters, images) if int(l.prediction) == cluster ]
height=((len(digits) - 1) // 5) + 1
width=5
plt.rcParams["figure.figsize"] = (width,height)
_, subplots = plt.subplots(height, width)
subplots=np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits):]:
subplot.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = (
SparkSession.builder.config("spark.driver.extraClassPath", classpath)
.master("local[*]")
.getOrCreate()
)
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
###Code
import boto3
cn_regions = ["cn-north-1", "cn-northwest-1"]
region = boto3.Session().region_name
endpoint_domain = "com.cn" if region in cn_regions else "com"
spark._jsc.hadoopConfiguration().set(
"fs.s3a.endpoint", "s3.{}.amazonaws.{}".format(region, endpoint_domain)
)
trainingData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/train/".format(region))
)
testData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.option("vectorType", "dense")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/test/".format(region))
)
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType="ml.m4.xlarge",
trainingInstanceCount=1,
endpointInstanceType="ml.m4.xlarge",
endpointInitialInstanceCount=1,
)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption="", xlabel="", subplot=None):
if subplot == None:
_, (subplot) = plt.subplots(1, 1)
imgr = img.reshape((28, 28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap="gray")
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print("\n\n\nCluster {}:".format(int(cluster)))
digits = [img for l, img in zip(clusters, images) if int(l.prediction) == cluster]
height = ((len(digits) - 1) // 5) + 1
width = 5
plt.rcParams["figure.figsize"] = (width, height)
_, subplots = plt.subplots(height, width)
subplots = np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits) :]:
subplot.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____
###Markdown
SageMaker PySpark XGBoost MNIST Example1. [Introduction](Introduction)2. [Setup](Setup)3. [Loading the Data](Loading-the-Data)4. [Training and Hosting a Model](Training-and-Hosting-a-Model)5. [Inference](Inference)6. [More on SageMaker Spark](More-on-SageMaker-Spark) IntroductionThis notebook will show how to classify handwritten digits using the XGBoost algorithm on Amazon SageMaker through the SageMaker PySpark library. We will train on Amazon SageMaker using XGBoost on the MNIST dataset, host the trained model on Amazon SageMaker, and then make predictions against that hosted model.Unlike the other notebooks that demonstrate XGBoost on Amazon SageMaker, this notebook uses a SparkSession to manipulate data, and uses the SageMaker Spark library to interact with SageMaker with Spark Estimators and Transformers.You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.You can visit XGBoost's GitHub repository at https://github.com/dmlc/xgboost to learn more about XGBoostThis notebook was created and tested on an ml.m4.xlarge notebook instance. SetupFirst, we import the necessary modules and create the SparkSession with the SageMaker Spark dependencies.
###Code
import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github repo under sagemaker-pyspark-sdk
# to learn how to connect to a remote EMR cluster running Spark from a Notebook Instance.
spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
.master("local[*]").getOrCreate()
###Output
_____no_output_____
###Markdown
Loading the DataNow, we load the MNIST dataset into a Spark Dataframe, which dataset is available in LibSVM format at`s3://sagemaker-sample-data-[region]/spark/mnist/train/`where `[region]` is replaced with a supported AWS region, such as us-east-1.In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
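For reference, here is a minimal hand-built DataFrame with the same schema shape (a sketch for illustration only; the real MNIST data is loaded from S3 in the next cell):
```python
# Sketch: the expected schema -- a Double "label" column and a
# Vector-of-Doubles "features" column.
from pyspark.ml.linalg import Vectors

toy = spark.createDataFrame(
    [(1.0, Vectors.dense([0.0, 0.5, 1.0])),
     (0.0, Vectors.dense([1.0, 0.2, 0.3]))],
    ["label", "features"])
toy.printSchema()
```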
###Code
import boto3
region = boto3.Session().region_name
trainingData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/train/'.format(region))
testData = spark.read.format('libsvm')\
.option('numFeatures', '784')\
.option('vectorType', 'dense')\
.load('s3a://sagemaker-sample-data-{}/spark/mnist/test/'.format(region))
trainingData.show()
###Output
_____no_output_____
###Markdown
Training and Hosting a ModelNow we create an XGBoostSageMakerEstimator, which uses the XGBoost Amazon SageMaker Algorithm to train on our input data, and uses the XGBoost Amazon SageMaker model image to host our model.Calling fit() on this estimator will train our model on Amazon SageMaker, and then create an Amazon SageMaker Endpoint to host our model.We can then use the SageMakerModel returned by this call to fit() to transform Dataframes using our hosted model.The following cell runs a training job and creates an endpoint to host the resulting model, so this cell can take up to twenty minutes to complete.
###Code
import random
from sagemaker_pyspark import IAMRole, S3DataPath
from sagemaker_pyspark.algorithms import XGBoostSageMakerEstimator
xgboost_estimator = XGBoostSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType='ml.m4.xlarge',
trainingInstanceCount=1,
endpointInstanceType='ml.m4.xlarge',
endpointInitialInstanceCount=1)
xgboost_estimator.setEta(0.2)
xgboost_estimator.setGamma(4)
xgboost_estimator.setMinChildWeight(6)
xgboost_estimator.setSilent(0)
xgboost_estimator.setObjective("multi:softmax")
xgboost_estimator.setNumClasses(10)
xgboost_estimator.setNumRound(10)
# train
model = xgboost_estimator.fit(trainingData)
###Output
_____no_output_____
###Markdown
InferenceNow we transform our DataFrame.To do this, we serialize each row's "features" Vector of Doubles into LibSVM format for inference against the Amazon SageMaker Endpoint. We deserialize the CSV responses from the XGBoost model back into our DataFrame. This serialization and deserialization is handled automatically by the `transform()` method:
###Code
transformedData = model.transform(testData)
transformedData.show()
###Output
_____no_output_____
###Markdown
How well did the algorithm perform? Let us display the digits corresponding to each of the labels and manually inspect the results:
###Code
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
# helper function to display a digit
def show_digit(img, caption='', xlabel='', subplot=None):
if subplot==None:
_,(subplot)=plt.subplots(1,1)
imgr=img.reshape((28,28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap='gray')
images = np.array(transformedData.select("features").cache().take(250))
clusters = transformedData.select("prediction").cache().take(250)
for cluster in range(10):
print('\n\n\nCluster {}:'.format(int(cluster)))
digits=[ img for l, img in zip(clusters, images) if int(l.prediction) == cluster ]
height=((len(digits) - 1) // 5) + 1
width=5
plt.rcParams["figure.figsize"] = (width,height)
_, subplots = plt.subplots(height, width)
subplots=np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
show_digit(image, subplot=subplot)
for subplot in subplots[len(digits):]:
subplot.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Since we don't need to make any more inferences, now we delete the endpoint:
###Code
# Delete the endpoint
from sagemaker_pyspark import SageMakerResourceCleanup
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
###Output
_____no_output_____ |
digital-image-processing/notebooks/transform/plot_matching.ipynb | ###Markdown
Robust matching using RANSACIn this simplified example we first generate two synthetic images as if theywere taken from different view points.In the next step we find interest points in both images and findcorrespondences based on a weighted sum of squared differences of a smallneighborhood around them. Note, that this measure is only robust towardslinear radiometric and not geometric distortions and is thus only usable withslight view point changes.After finding the correspondences we end up having a set of source anddestination coordinates which can be used to estimate the geometrictransformation between both images. However, many of the correspondences arefaulty and simply estimating the parameter set with all coordinates is notsufficient. Therefore, the RANSAC algorithm is used on top of the normal modelto robustly estimate the parameter set by detecting outliers.
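As a minimal illustration of the `ransac` helper used at the end of this example, the following sketch (with made-up data) fits a straight line to noisy points containing a few gross outliers:
```python
# Sketch: skimage.measure.ransac on a simple line-fitting problem.
import numpy as np
from skimage.measure import ransac, LineModelND

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(scale=0.3, size=x.size)
y[::10] += 20                       # inject gross outliers
points = np.column_stack([x, y])

line, keep = ransac(points, LineModelND, min_samples=2,
                    residual_threshold=1.0, max_trials=200)
print(keep.sum(), "of", len(points), "points kept as inliers")
```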
###Code
import numpy as np
from matplotlib import pyplot as plt
from skimage import data
from skimage.util import img_as_float
from skimage.feature import (corner_harris, corner_subpix, corner_peaks,
plot_matches)
from skimage.transform import warp, AffineTransform
from skimage.exposure import rescale_intensity
from skimage.color import rgb2gray
from skimage.measure import ransac
# generate synthetic checkerboard image and add gradient for the later matching
checkerboard = img_as_float(data.checkerboard())
img_orig = np.zeros(list(checkerboard.shape) + [3])
img_orig[..., 0] = checkerboard
gradient_r, gradient_c = (np.mgrid[0:img_orig.shape[0],
0:img_orig.shape[1]]
/ float(img_orig.shape[0]))
img_orig[..., 1] = gradient_r
img_orig[..., 2] = gradient_c
img_orig = rescale_intensity(img_orig)
img_orig_gray = rgb2gray(img_orig)
# warp synthetic image
tform = AffineTransform(scale=(0.9, 0.9), rotation=0.2, translation=(20, -10))
img_warped = warp(img_orig, tform.inverse, output_shape=(200, 200))
img_warped_gray = rgb2gray(img_warped)
# extract corners using Harris' corner measure
coords_orig = corner_peaks(corner_harris(img_orig_gray), threshold_rel=0.001,
min_distance=5)
coords_warped = corner_peaks(corner_harris(img_warped_gray),
threshold_rel=0.001, min_distance=5)
# determine sub-pixel corner position
coords_orig_subpix = corner_subpix(img_orig_gray, coords_orig, window_size=9)
coords_warped_subpix = corner_subpix(img_warped_gray, coords_warped,
window_size=9)
def gaussian_weights(window_ext, sigma=1):
y, x = np.mgrid[-window_ext:window_ext+1, -window_ext:window_ext+1]
g = np.zeros(y.shape, dtype=np.double)
g[:] = np.exp(-0.5 * (x**2 / sigma**2 + y**2 / sigma**2))
g /= 2 * np.pi * sigma * sigma
return g
def match_corner(coord, window_ext=5):
r, c = np.round(coord).astype(np.intp)
window_orig = img_orig[r-window_ext:r+window_ext+1,
c-window_ext:c+window_ext+1, :]
# weight pixels depending on distance to center pixel
weights = gaussian_weights(window_ext, 3)
weights = np.dstack((weights, weights, weights))
# compute sum of squared differences to all corners in warped image
SSDs = []
for cr, cc in coords_warped:
window_warped = img_warped[cr-window_ext:cr+window_ext+1,
cc-window_ext:cc+window_ext+1, :]
SSD = np.sum(weights * (window_orig - window_warped)**2)
SSDs.append(SSD)
# use corner with minimum SSD as correspondence
min_idx = np.argmin(SSDs)
return coords_warped_subpix[min_idx]
# find correspondences using simple weighted sum of squared differences
src = []
dst = []
for coord in coords_orig_subpix:
src.append(coord)
dst.append(match_corner(coord))
src = np.array(src)
dst = np.array(dst)
# estimate affine transform model using all coordinates
model = AffineTransform()
model.estimate(src, dst)
# robustly estimate affine transform model with RANSAC
model_robust, inliers = ransac((src, dst), AffineTransform, min_samples=3,
residual_threshold=2, max_trials=100)
outliers = ~inliers
# compare "true" and estimated transform parameters
print("Ground truth:")
print(f"Scale: ({tform.scale[1]:.4f}, {tform.scale[0]:.4f}), "
f"Translation: ({tform.translation[1]:.4f}, "
f"{tform.translation[0]:.4f}), "
f"Rotation: {-tform.rotation:.4f}")
print("Affine transform:")
print(f"Scale: ({model.scale[0]:.4f}, {model.scale[1]:.4f}), "
f"Translation: ({model.translation[0]:.4f}, "
f"{model.translation[1]:.4f}), "
f"Rotation: {model.rotation:.4f}")
print("RANSAC:")
print(f"Scale: ({model_robust.scale[0]:.4f}, {model_robust.scale[1]:.4f}), "
f"Translation: ({model_robust.translation[0]:.4f}, "
f"{model_robust.translation[1]:.4f}), "
f"Rotation: {model_robust.rotation:.4f}")
# visualize correspondence
fig, ax = plt.subplots(nrows=2, ncols=1)
plt.gray()
inlier_idxs = np.nonzero(inliers)[0]
plot_matches(ax[0], img_orig_gray, img_warped_gray, src, dst,
np.column_stack((inlier_idxs, inlier_idxs)), matches_color='b')
ax[0].axis('off')
ax[0].set_title('Correct correspondences')
outlier_idxs = np.nonzero(outliers)[0]
plot_matches(ax[1], img_orig_gray, img_warped_gray, src, dst,
np.column_stack((outlier_idxs, outlier_idxs)), matches_color='r')
ax[1].axis('off')
ax[1].set_title('Faulty correspondences')
plt.show()
###Output
_____no_output_____ |
Data Science Academy/Python Fundamentos/Cap05/Notebooks/DSA-Python-Cap05-03-Metodos.ipynb | ###Markdown
Data Science Academy - Python Fundamentos - Chapter 5 Download: http://github.com/dsacademybr
###Code
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
###Output
Versão da Linguagem Python Usada Neste Jupyter Notebook: 3.8.8
###Markdown
Methods
###Code
# Creating a class called Circulo
class Circulo():
# The value of pi is a constant
pi = 3.14
# When an object of this class is created, this method runs and the default value of the radius is 5.
def __init__(self, raio = 5):
self.raio = raio
# This method computes the area. self accesses the attributes of this same object
def area(self):
return (self.raio * self.raio) * Circulo.pi
# Method to set a new radius
def setRaio(self, novo_raio):
self.raio = novo_raio
# Method to get the radius of the circle
def getRaio(self):
return self.raio
# Creating the object circ, an instance of the Circulo() class
circ = Circulo()
# Calling a method of the Circulo class
circ.getRaio()
# Creating another object called circ1, an instance of the Circulo() class,
# this time overriding the default value of the attribute
circ1 = Circulo(7)
# Calling a method of the Circulo class
circ1.getRaio()
# Printing the radius
print ('O raio é: ', circ.getRaio())
# Printing the area
print('Area igual a: ', circ.area())
# Setting a new value for the circle's radius
circ.setRaio(3)
# Printing the new radius
print ('Novo raio igual a: ', circ.getRaio())
###Output
Novo raio igual a: 3
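###Markdown
A side note on the design: the `Circulo` class above exposes the radius through explicit getter/setter methods. Idiomatic modern Python often expresses the same interface with a property; a brief sketch of that alternative (an editorial illustration, not part of the original lesson):
```python
# Sketch: the same radius interface written with a property.
class Circle:
    pi = 3.14

    def __init__(self, radius=5):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        self._radius = value

    def area(self):
        return self._radius ** 2 * Circle.pi
```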
|
01_O2O/code/O2O-05_Summary.ipynb | ###Markdown
05 Summary and Practice Building the code Overall framework and code structure: the code uses a layered architecture consisting of utility functions and algorithm wrappers, plus integration functions; it supports convenient feature loading, learning-curve plotting, grid-search tuning, result validation and output. Result file names follow the pattern: feature version + algorithm keyword + offline score + timestamp. The advantages of this structure are: 1) it is easy to adjust flexibly; 2) it makes scores easy to reproduce; 3) it reduces duplicated logic, keeping the overall amount of code small; 4) it makes it easy to later write scripts that run many experiments automatically. Algorithm packages and global variables
###Code
import numpy as np
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import lightgbm as lgb
import xgboost as xgb
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
######### Some classifiers wrapped from scikit-learn ###############
from sklearn import metrics
from sklearn import tree
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import learning_curve
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings("ignore")
pd.set_option('display.max_columns',10)
pd.set_option('display.max_rows',20)
%matplotlib inline
############ Global parameters #################################
id_col_names = ['user_id', 'coupon_id', 'date_received']
target_col_name = 'label'
id_target_cols = ['user_id', 'coupon_id', 'date_received', 'label']
myeval = 'roc_auc'
pred_result_col = 'predicted'
#cvscore=0
############ Directory definitions #################################
datapath = '../data/'
featurepath = '../feature/'
resultpath = '../result/'
tmppath = '../tmp/'
scorepath = '../score/'
###Output
/opt/anaconda3/lib/python3.7/site-packages/lightgbm/__init__.py:48: UserWarning: Starting from version 2.2.1, the library file in distribution wheels for macOS is built by the Apple Clang (Xcode_8.3.3) compiler.
This means that in case of installing LightGBM from PyPI via the ``pip install lightgbm`` command, you don't need to install the gcc compiler anymore.
Instead of that, you need to install the OpenMP library, which is required for running LightGBM on the system with the Apple Clang compiler.
You can install the OpenMP library by the following command: ``brew install libomp``.
"You can install the OpenMP library by the following command: ``brew install libomp``.", UserWarning)
###Markdown
Utility functions
###Code
########### Utility functions #############################################
# Return the ID columns
def get_id_df(df):
return df[id_col_names]
# Return the target column
def get_target_df(df):
return df[target_col_name]
# Return the feature columns
def get_predictors_df(df):
predictors = [f for f in df.columns if f not in id_target_cols]
return df[predictors]
# Read a training feature file by feature-set name
def read_featurefile_train(featurename):
df=pd.read_csv(featurepath+'train_'+featurename+'.csv', sep=',' , encoding = "utf-8")
df.fillna(0,inplace=True)
return df
# Read a test feature file by feature-set name
def read_featurefile_test(featurename):
df=pd.read_csv(featurepath+'test_'+featurename+'.csv', sep=',' , encoding = "utf-8")
df.fillna(0,inplace=True)
return df
# Min-max normalize the features
def standize_df(train_data,test_data):
from sklearn import preprocessing
features_columns = [f for f in test_data.columns if f not in id_target_cols]
min_max_scaler = preprocessing.MinMaxScaler()
min_max_scaler = min_max_scaler.fit(train_data[features_columns])
train_data_scaler = min_max_scaler.transform(train_data[features_columns])
test_data_scaler = min_max_scaler.transform(test_data[features_columns])
train_data_scaler = pd.DataFrame(train_data_scaler)
train_data_scaler.columns = features_columns
test_data_scaler = pd.DataFrame(test_data_scaler)
test_data_scaler.columns = features_columns
train_data_scaler['label'] = train_data['label']
train_data_scaler[id_col_names] = train_data[id_col_names]
test_data_scaler[id_col_names] = test_data[id_col_names]
return train_data_scaler,test_data_scaler
# Read train/test data by feature-set name (and min-max normalize)
def read_data(featurename):
traindf = read_featurefile_train(featurename)
testdf = read_featurefile_test(featurename)
#return traindf,testdf
return standize_df(traindf,testdf)
###Output
_____no_output_____
###Markdown
Training and result output
###Code
#################### Unified sklearn-based code framework ##########################
# Provided functions include:
# classifier_single(featurename, classifier, cvnum)
# Predict separately by "full-reduction" (满减) coupon type:
# classifier_single_sep_fd(featurename, classifier, cvnum):
#################### Classifiers wrapped from sklearn ###############
def get_sklearn_model(model_name, param=None):
#朴素贝叶斯
if model_name == 'NB':
model = MultinomialNB(alpha=0.01)
#逻辑回归
elif model_name == 'LR':
model = LogisticRegression(penalty='l2')
# KNN
elif model_name == 'KNN':
model = KNeighborsClassifier()
#随机森林
elif model_name == 'RF':
model = RandomForestClassifier()
#决策树
elif model_name == 'DT':
model = tree.DecisionTreeClassifier()
#向量机
elif model_name == 'SVC':
model = SVC(kernel='rbf')
#GBDT
elif model_name == 'GBDT':
model = GradientBoostingClassifier()
#XGBoost
elif model_name == 'XGB':
model = XGBClassifier()
#lightGBM
elif model_name == 'LGB':
model = LGBMClassifier()
else:
print("wrong model name!")
return
if param is not None:
model.set_params(**param)
return model
# Performance evaluation
# The goal of this competition is to predict whether a delivered coupon will be redeemed.
# The metric is the coupon-averaged AUC (area under the ROC curve):
# the redemption-prediction AUC is computed per coupon_id, and the final score is the average over all coupons.
# Coupon-averaged AUC
def myauc(test):
testgroup = test.groupby(['coupon_id'])
aucs = []
for i in testgroup:
coupon_df = i[1]
# AUC can only be computed when more than one class is present
if len(coupon_df['label'].unique()) < 2:
continue
auc = metrics.roc_auc_score(coupon_df['label'], coupon_df['predicted'])
aucs.append(auc)
return np.average(aucs)
# Prediction method: score by the predicted probability of the positive class
def proba_predict(model, df):
pred = model.predict_proba(df)
return pred[:, 1]
#预测
def classifier_pred(traindf, classifier, param=None):
model = get_sklearn_model(classifier, param)
if classifier in ['LGB']:
model.fit(get_predictors_df(traindf), get_target_df(traindf), eval_metric=myeval)
if classifier in ['XGB']:
model.fit(get_predictors_df(traindf), get_target_df(traindf), eval_metric='auc')
else:
model.fit(get_predictors_df(traindf), get_target_df(traindf))
return model
# Fit once on the full training set (no CV folds)
def fit_once(train_feat, test_feat, classifier, param=None):
model = classifier_pred(train_feat, classifier, param)
predicted = pd.DataFrame(proba_predict(model, get_predictors_df(test_feat)))
return predicted, get_target_df(train_feat)
# Fit with k-fold cross-validation
def fit_cv(train_feat, test_feat, classifier, cvnum, param=None):
print('开始CV ' + str(cvnum) + '折训练...')
train_preds = np.zeros(train_feat.shape[0])
test_preds = np.zeros((test_feat.shape[0], cvnum))
i = 0
kf = StratifiedKFold(n_splits=cvnum, shuffle=True, random_state=520)
for train_index, test_index in kf.split(get_predictors_df(train_feat), get_target_df(train_feat)):
print('第{}次训练...'.format(i + 1))
train_feat1 = train_feat.iloc[train_index]
train_feat2 = train_feat.iloc[test_index]
model = classifier_pred(train_feat1, classifier, param)
train_preds[test_index] += proba_predict(model, get_predictors_df(train_feat2))
test_preds[:, i] = proba_predict(model, get_predictors_df(test_feat))
i = i + 1
# print('CV训练用时{}秒'.format(time.time() - t0))
test_y = test_preds.mean(axis=1)
#test_y_1 = pd.Series(test_y).apply(lambda x : 1 if x>0.5 else 0)
#submission = pd.DataFrame({'pred':test_preds.mean(axis=1)})
return pd.DataFrame(test_y), pd.DataFrame(train_preds)
def classifier_df(train_feat, test_feat, classifier, cvnum, param=None):
if cvnum <= 1:
predicted, train_preds = fit_once(train_feat, test_feat, classifier,
param)
else:
predicted, train_preds = fit_cv(train_feat, test_feat, classifier,
cvnum, param)
print('output')
#predicted=predicted.round(3)
return predicted, train_preds
# Write the predictions to a result file
def output_predicted(predicted, resultfile, test_feat):
predicted = round(predicted, 3)
resultdf = get_id_df(test_feat).copy()
resultdf['Probability'] = predicted
resultdf.to_csv(resultfile, header=False, index=False, sep=',')
# Simple prediction helper (single fit, no CV)
def classifier_df_simple(train_feat, test_feat, classifier, param=None):
model = get_sklearn_model(classifier, param)
model.fit(get_predictors_df(train_feat), get_target_df(train_feat))
predicted = pd.DataFrame(
model.predict_proba(get_predictors_df(test_feat))[:, 1])
return predicted
###Output
_____no_output_____
###Markdown
Algorithm analysis
###Code
######################## Learning curves ################################################
def plot_learning_curve(estimator,
title,
X,
y,
ylim=None,
cv=None,
n_jobs=1,
train_sizes=[0.005, 0.01, 0.02, 0.04, 0.1, 0.2, 0.5]):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator,
X,
y,
cv=cv,
scoring=myeval,
n_jobs=n_jobs,
train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.1,
color="r")
plt.fill_between(train_sizes,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.1,
color="g")
plt.plot(train_sizes,
train_scores_mean,
'o-',
color="r",
label="Training score")
plt.plot(train_sizes,
test_scores_mean,
'o-',
color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# Plot an algorithm's learning curve; to keep plotting fast, at most 30% of the data is used
def plot_curve_single(
featurename,
classifier,
cvnum,
train_sizes=[0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3]):
traindf, testdf = read_data(featurename)
X = get_predictors_df(traindf)
y = get_target_df(traindf)
title = "learning curve of feature:" + featurename + ", model " + classifier + ", cv:" + str(
cvnum)
estimator = get_sklearn_model(classifier) #建模
plot_learning_curve(estimator,
title,
X,
y,
ylim=(0, 1.01),
cv=cvnum,
train_sizes=train_sizes)
########################## Algorithm analysis ###############################################
# Evaluate an algorithm by overall AUC
def classifier_df_score_auc(train_x, train_y, classifier, cvnum, param=None):
from sklearn.model_selection import cross_val_score
model = get_sklearn_model(classifier, param)
#loss = make_scorer(my_custom_loss_func, greater_is_better=False)
#score = make_scorer(my_custom_loss_func, greater_is_better=True)
scores = cross_val_score(model, train_x, train_y, scoring=myeval, cv=cvnum)
return scores
def classifier_single_score(featurename, classifier, cvnum, param=None):
print('reading training data...')
traindf, testdf = read_data(featurename)
train_x = get_predictors_df(traindf)
train_y = get_target_df(traindf)
scores = classifier_df_score(train_x, train_y, classifier, cvnum, param)
print(scores)
print(classifier + ":")
print(scores.mean())
# Evaluate an algorithm by coupon-averaged AUC
def classifier_df_score(train_feat, classifier, cvnum, param=None):
clf = get_sklearn_model(classifier, param)
train = train_feat.copy()
target = get_target_df(train_feat).copy()
kf = StratifiedKFold(n_splits=cvnum)
scores = []
score_coupons = []
for k, (train_index, test_index) in enumerate(kf.split(train, target)):
train_data, test_data, train_target, test_target = train.iloc[
train_index], train.iloc[test_index], target[train_index], target[
test_index]
clf.fit(get_predictors_df(train_data), train_target)
test_pred = clf.predict_proba(get_predictors_df(test_data))[:, 1]
score_test = metrics.roc_auc_score(test_target, test_pred)
test_data[pred_result_col] = test_pred
score_coupon_test = myauc(test_data)
scores.append(score_test)
score_coupons.append(score_coupon_test)
print(classifier + "总体AUC:", scores)
print(classifier + "Coupon AUC:", score_coupons)
###Output
_____no_output_____
###Markdown
Parameter tuning (grid search and validation curves)
###Code
########################## Grid search ###############################################
# Run a grid search over the given parameter scope
def grid_search(train_feat,
test_feat,
classifier,
cvnum,
search_scope,
param=None):
parameters = search_scope
clf = GridSearchCV(get_sklearn_model(classifier, param),
param_grid=parameters,
scoring='roc_auc',
verbose=2)
clf.fit(get_predictors_df(train_feat), get_target_df(train_feat))
print("最优参数")
print(clf.best_params_)
test_df = get_predictors_df(test_feat)
predicted = pd.DataFrame(clf.predict(test_df))
return predicted
def grid_search_single(featurename,
classifier,
cvnum,
search_scope,
param=None):
train_feat, test_feat = read_data(featurename)
predicted = grid_search(train_feat, test_feat, classifier, cvnum,
search_scope, param)
resultfile = resultpath + featurename + '_' + str(
cvnum) + '_' + classifier + '_grid_' + format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S')) + '.csv'
output_predicted(predicted, resultfile, test_feat)
# Plot a validation curve over a range of one parameter
def grid_plot(train_feat,
classifier,
cvnum,
param_range,
param_name,
param=None):
from sklearn.model_selection import validation_curve
train_scores, test_scores = validation_curve(get_sklearn_model(
classifier, param),
get_predictors_df(train_feat),
get_target_df(train_feat),
param_name=param_name,
param_range=param_range,
cv=cvnum,
scoring='roc_auc',
n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with " + param_name)
plt.xlabel(param_name)
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.semilogx(param_range,
train_scores_mean,
label="Training score",
color="r")
plt.fill_between(param_range,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.2,
color="r")
plt.semilogx(param_range,
test_scores_mean,
label="Cross-validation score",
color="g")
plt.fill_between(param_range,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.2,
color="g")
plt.legend(loc="best")
plt.show()
def grid_plot_single(featurename,
classifier,
cvnum,
param_range,
param_name,
param=None):
train_feat, test_feat = read_data(featurename)
grid_plot(train_feat,
classifier,
cvnum,
param_range,
param_name,
param=None)
###Output
_____no_output_____
###Markdown
Integration and final output
###Code
###################### Final output functions ##########################################
# Run a single algorithm
def classifier_single(featurename, classifier, cvnum, param=None):
traindf, testdf = read_data(featurename)
predicted, train_preds = classifier_df(traindf, testdf, classifier, cvnum,
param)
if cvnum > 1:
traindf[pred_result_col] = train_preds
score = myauc(traindf)
print('线下成绩: {}'.format(score))
resultfile = resultpath + featurename + '_' + str(
cvnum) + '_' + classifier + '_' + format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S')) + '_' + str(
round(score, 3)) + '.csv'
else:
resultfile = resultpath + featurename + '_' + str(
cvnum) + '_' + classifier + '_' + format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S')) + '.csv'
output_predicted(predicted, resultfile, testdf)
# Mean blending: one blending strategy for classifier_multi; similar functions can be written and passed in as its sum_func argument
def multi_mean(train_multi, test_multi, pred_names):
i = 0
for pred in pred_names:
i = i + 1
if i == 1:
train_multi[pred_result_col] = train_multi[pred]
test_multi[pred_result_col] = test_multi[pred]
else:
train_multi[pred_result_col] = train_multi[pred_result_col] + train_multi[pred]
test_multi[pred_result_col] = train_multi[pred_result_col] + test_multi[pred]
train_multi[pred_result_col] = train_multi[pred_result_col] / i
test_multi[pred_result_col] = test_multi[pred_result_col] / i
return train_multi, test_multi
# Run several algorithms and blend their results
# sum_func combines the results of the algorithms; its final output column must be named 'predicted'
def classifier_multi(featurename, classifiers, cvnum, sum_func, param=None):
traindf, testdf = read_data(featurename)
train_multi = traindf.copy()
test_multi = testdf.copy()
notes = ''
pred_names = []
for classifier in classifiers:
print('开始' + classifier + '训练')
notes = notes + '-' + classifier
pred_names.append(classifier + '_pred')
predicted, train_preds = classifier_df(traindf, testdf, classifier,
cvnum, param)
train_multi[classifier + '_pred'] = train_preds
test_multi[classifier + '_pred'] = predicted
train_result, test_result = sum_func(train_multi, test_multi, pred_names)
#score = metrics.roc_auc_score(get_target_df(train_result),train_result['predicted'])
#print('线下得分: {}'.format(score))
score = myauc(train_result)
print('线下成绩: {}'.format(score))
if cvnum > 1:
resultfile = resultpath + featurename + '_' + str(
cvnum) + '_' + notes + '_' + format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S')) + '_' + str(
round(score, 3)) + '.csv'
else:
resultfile = resultpath + featurename + '_' + str(
cvnum) + '_' + notes + '_' + format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S')) + '.csv'
output_predicted(test_result['predicted'], resultfile, test_result)
# Predict separately by "full-reduction" coupon type
def classifier_single_sep_fd(featurename, classifier, cvnum, param=None):
trainalldf, testalldf = read_data(featurename)
test_result = pd.DataFrame()
train_result = pd.DataFrame()
# Split by full-reduction type
for fd in range(0, 2):
traindf = trainalldf[trainalldf.if_fd == fd].copy()
testdf = testalldf[testalldf.if_fd == fd].copy()
predicted, train_preds = classifier_df(traindf, testdf, classifier, cvnum, param)
predicted = round(predicted, 3)
if fd == 0:
test_result = get_id_df(testdf).copy().reset_index(drop=True)
test_result['predicted'] = predicted
train_result = traindf.copy().reset_index(drop=True)
train_result['predicted'] = train_preds
else:
dft1 = get_id_df(testdf).copy().reset_index(drop=True)
dft1['predicted'] = predicted
test_result = pd.concat([test_result, dft1], axis=0).reset_index(drop=True)
dfv1 = traindf.copy().reset_index(drop=True)
dfv1['predicted'] = train_preds
train_result = pd.concat([train_result, dfv1], axis=0).reset_index(drop=True)
if cvnum > 1:
#score = metrics.roc_auc_score(get_target_df(train_result),train_result['predicted'])
score = round(myauc(train_result), 3)
print('线下得分: {}'.format(score))
resultfile = resultpath + featurename + '_sepfd_' + str(
cvnum) + '_' + classifier + '_' + str(score) + '.csv'
else:
resultfile = resultpath + featurename + '_sepfd_' + str(
cvnum) + '_' + classifier + '.csv'
test_result.to_csv(resultfile, header=False, index=False, sep=',')
###Output
_____no_output_____
###Markdown
Applying it to the competition task
###Code
# Feature set version f2, LightGBM, 5-fold CV, default parameters
classifier_single('sf2', 'LGB', 5)
# Feature set version f3, LightGBM, 5-fold CV, default parameters
classifier_single('sf3', 'LGB', 5)
# Feature set version f3, LightGBM, 5-fold CV, tuned parameters
params = {
'boosting_type': 'gbdt',
'objective': 'binary',
'eval_metric': 'auc',
'n_estimators': 200,
'max_depth': 7,
'num_leaves': 100,
'max_bin': 300,
'min_data_in_leaf': 100,
'learning_rate': 0.01,
'lambda_l1': 0.0,
'lambda_l2': 1e-05,
'min_split_gain': 0.0,
'bagging_freq': 8,
'bagging_fraction': 0.9,
'feature_fraction': 0.9,
'seed': 42,
'n_thread': 12
}
classifier_single('sf3', 'LGB', 5, params)
# Feature set version f3, LightGBM + XGBoost blend, 5-fold CV, default parameters
classifier_multi('sf3', ['XGB', 'LGB'], 5, multi_mean)
# Feature set version f3, LightGBM, 5-fold CV, default parameters, trained separately by full-reduction type
classifier_single_sep_fd('sf3', 'LGB', 5)
# Feature set version f3, LightGBM, 5-fold CV, tuned parameters, trained separately by full-reduction type
params = {
'boosting_type': 'gbdt',
'objective': 'binary',
'eval_metric': 'auc',
'n_estimators': 200,
'max_depth': 5,
'num_leaves': 40,
'max_bin': 400,
'min_data_in_leaf': 120,
'learning_rate': 0.05,
'lambda_l1': 1e-05,
'lambda_l2': 1e-05,
'min_split_gain': 0.0,
'bagging_freq': 4,
'bagging_fraction': 0.9,
'feature_fraction': 0.6,
'seed': 1024,
'n_thread': 12
}
classifier_single_sep_fd('sf3', 'LGB', 5, params)
###Output
开始CV 5折训练...
第1次训练...
第2次训练...
第3次训练...
第4次训练...
第5次训练...
output
开始CV 5折训练...
第1次训练...
第2次训练...
第3次训练...
第4次训练...
第5次训练...
output
线下得分: 0.733
###Markdown
Plotting the learning curve
###Code
plot_curve_single('f1', 'LR', 5, [0.01, 0.02, 0.05, 0.1, 0.2, 0.3])
###Output
_____no_output_____
###Markdown
Parameter tuning in practice
###Code
grid_plot_single('sf3', 'LGB', 3, [0.1, 0.2, 0.5, 0.7, 0.8],'colsample_bytree')
grid_search_single('f1', 'LGB', 5, [{'gamma': [0, 0.3, 0.5, 0.7, 0.9]}])
###Output
Fitting 5 folds for each of 5 candidates, totalling 25 fits
[CV] gamma=0 .........................................................
|
content/lessons/03/Now-You-Code/NYC2-Paint-Matching.ipynb | ###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs: everyday, select and premiumOutputs:total price Algorithm (Steps in Program):
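For reference, one compact way to express the pricing logic is sketched below (an editorial addition using a price lookup table; the code cells that follow are the original attempts):
```python
# Sketch: dictionary-based paint pricing.
prices = {'everyday': 19.99, 'select': 24.99, 'premium': 32.99}
quality = input("Which paint quality do you require ['everyday', 'select', 'premium'] ?")
if quality not in prices:
    print("that is not a paint quality")
else:
    match = input("Do you require color matching [y/n] ?")
    if match not in ('y', 'n'):
        print("you must enter y or n")
    else:
        total = prices[quality] + (4.99 if match == 'y' else 0)
        wording = 'with' if match == 'y' else 'without'
        print("Total price of %s paint %s color matching is $%.2f" % (quality, wording, total))
```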
###Code
choices = ['everyday','select','premium']
paint = input("which paint do you want? everyday, select or premium: ")
try:
colormatch = input("do you want color matching?: ")
if paint in choices:
if paint == 'everyday':
basecost = 19.99
elif paint == "select":
basecost = 24.99
elif paint == "premium":
basecost = 32.99
if colormatch == 'yes':
cost = basecost+4.99
else:
cost=basecost
print("the total is $%.2f" %(cost))
except:
print("you didn't select a color")
###Output
which paint do you want? everyday, select or premium: premium
do you want color mathcing?: no
the total is $32.99
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
# Step 2: Write code here
choices= ['everyday','select','premium']
choice=input("Select a choice of paint: everyday, select or premium : " )
matching=input("Do you require color matching? [choose lowercase y/n] : " )
if choice in choices:
if choice=='everyday' and matching=='y' :
print("Total price of everyday paint with color matching is $24.98")
elif choice=='everyday' and matching=='n':
print("Total price of everyday paint without color matching is $19.99")
elif choice=='select' and matching=='y' :
print("Total price of select paint with color matching is $29.98")
elif choice=='select' and matching=='n':
print("Total price of select paint without color matching is $24.99")
elif choice=='premium' and matching=='y' :
print("Total price of premium paint with color matching is $37.98")
elif choice=='premium' and matching=='n':
print("Total price of premium paint without color matching is $32.99")
else:
print("That's not a paint quality")
###Output
Select a choice of paint: everyday, select or premium : everyday
Do you require color matching? [choose lowercase y/n] : y
Total price of everyday paint with color matching is $24.98
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs:paint qualitycolor matchingOutputs:price of premium w and w/o color matching regular w and w/o color matching everyday w and w/o color matchingAlgorithm (Steps in Program):
###Code
# Step 2: Write code here
e= float(19.99)
s=float(24.99)
p=float(32.99)
cc=float (4.99)
pq= input("Which paint quality do you require ['everyday','select','premium']? ")
cm= input("Do you require color matching ['y/n'] ?")
if 'everyday' in pq and 'y' in cm:
cost=e+cc
print("Your total cost is %.2f" %(cost))
elif 'everyday' in pq and 'n' in cm:
    cost=e
    print("Your total cost is %.2f" %(cost))
elif 'select' in pq and 'y' in cm:
cost= s +cc
print("Your total cost is %.2f" %(cost))
elif 'select' in pq and 'n' in cm:
cost = s
print("Your total cost is %.2f" %(cost))
elif 'premium' in pq and 'y' in cm:
cost=p + cc
print("Your total cost is %.2f" %(cost))
elif 'premium' in pq and 'n' in cm:
cost=p
print("Your total cost is %.2f" %(cost))
elif pq not in ('everyday', 'select', 'premium'):
    print ("That is not a paint quality")
elif cm not in ('y', 'n'):
    print ("You must enter y or n")
###Output
Which paint quality do you require ['everyday','select','premium']? everyday
Do you require color matching ['y/n'] ?y
Your total cost is 24.98
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
# Step 2: Write code here
###Output
_____no_output_____
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs:paint quality, color matching.Outputstotal price of paint and color matching.Algorithm (Steps in Program):input quality and color matching option, output total price of purchase.
###Code
# Step 2: Write code here
quality=['everyday','select', 'premium']
try:
choice=input("What type of paint?(everyday,select, premium)")
color=input("Do you want color matching for $4.99(yes/no)")
if(choice in quality):
if (choice=="everyday"):
price=19.99
elif (choice=='select'):
price=24.99
else:
price=32.99
if (color=='yes'):
total=price+4.99
match="with"
else:
total=price
match="without"
print("The total of %s paint %s color matching is %.2f" %(choice,match,total))
except:
print("Im really embarassed, but something went horribly wrong!")
###Output
What type of paint?(everyday,select, premium)select
Do you want color matching for $4.99(yes/no)yes
The total of select paint with color matching is 29.98
###Markdown
Step 3: Questions1. When you enter something other than `'everyday', 'select',` or `'premium'` what happens? Modify the program to print `that is not a paint quality` and then exit in those cases.Answer: it gives the crash code stating what went went wrong.2. What happens when you enter something other than `'y'` or `'n'` for color matching? Re-write the program to print `you must enter y or n` whenever you enter something other than those two values.Answer: it gives the crash code showing what went wrong. now, it tells you im really embarrassed but something went horribly wrong.3. Why can't we use Python's `try...except` in this example?Answer: You can use try and except in this example to stop the program from crashing. Instead it just gives the message to the user saying that something went wrong.4. How many times (at minimum) must we execute this program and check the results before we can be reasonably assured it is correct?Answer: for this one, I would say at least 7 times to test every line of code. Step 4: ReflectionReflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.Keep your response to between 100 and 250 words.`--== Write Your Reflection Below Here ==--`
###Code
This was the hardest Now You Code for me. It takes such precision to get the code completely correct, and the directions on the syntax error are not always correct. However, I have a better understanding of this topic now. I will still need to practice this much more in order to be able to do it though. Mr Faitakes helped me with this assignment.
###Output
_____no_output_____
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99```
###Code
Quality = ('everyday','select','premium')
Choice = ('yes', 'no')
Paint = input("Which paint quality do you require?everyday,select, or premium?")
if Paint in Quality:
CM = input("Do you require color matching?yes or no?")
if CM in Choice:
print("Good choice.")
if (Paint == 'everyday' and CM == 'no'):
print("Total price of everyday paint without color matching is $19.99")
elif (Paint == 'everyday' and CM == 'yes'):
print("Total price of everyday paint with color matching is $24.98")
elif (Paint == 'select' and CM == 'no'):
print("Total price of select paint without color matching is $24.99")
elif (Paint == 'select' and CM == 'yes'):
print("Total price of select paint with color matching is $29.98")
elif (Paint == 'premium' and CM == 'no'):
print("Total price of premium paint without color matching is $32.99")
else:
print("Total price of premium paint with color matching is $37.98")
else:
print("You must enter yes or no.")
else:
print("That is not a quality")
###Output
Which paint quality do you require?everyday,select, or premium?premium
Do you require color matching?yes or no?yes
Good choice.
Total price of premium paint with color matching is $37.98
###Markdown
Now You Code 2: Paint PricingHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. The price of paint is determined by the following factors:- Everyday quality paint is `$19.99` per gallon.- Select quality paint is `$24.99` per gallon.- Premium quality paint is `$32.99` per gallon.In addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. Write a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium', prompt for color matching, and then outputs the price per gallon of the paint.Example Run 1:```Which paint quality do you require ['everyday', 'select', 'premium'] ?selectDo you require color matching [y/n] ?yTotal price of select paint with color matching is $29.98```Example Run 2:```Which paint quality do you require ['everyday', 'select', 'premium'] ?premiumDo you require color matching [y/n] ?nTotal price of premium paint without color matching is $32.99``` Step 1: Problem AnalysisInputs: everydau, select, premium, yes, noOutputs: paimt pricingAlgorithm (Steps in Program): select paint qualitycolor matchingoutput price
###Code
try:
choices = ["everyday","select","premium"]
choice2 = ["yes","no"]
paint=input("What type of paint?(esp)")
colormatch=input("color matching?(yes/no)")
if paint in choices:
if paint =="everyday":
base=19.99
elif paint =="select":
base=24.99
elif paint =="premium":
base=32.99
if colormatch=='yes':
cost=base+4.99
else:
cost=base
print(cost)
except:
print("Invalid Input.")
###Output
What type of paint?(esp)yes
color matching?(yes/no)yes
24.979999999999997
|
My_code/day_2.ipynb | ###Markdown
Day2 简单线性回归
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
dataset = pd.read_csv('../datasets/studentscores.csv')
X = dataset.iloc[ : , : 1 ].values
Y = dataset.iloc[ : , 1 ].values
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split( X, Y, test_size = 1/4, random_state = 0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor = regressor.fit(X_train, Y_train)
Y_pred = regressor.predict(X_test)
plt.scatter(X_train , Y_train, color = 'red')
plt.plot(X_train , regressor.predict(X_train), color ='blue')
plt.show()
plt.scatter(X_test , Y_test, color = 'red')
plt.plot(X_test , regressor.predict(X_test), color ='blue')
plt.show()
###Output
_____no_output_____ |
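###Markdown
Beyond the two plots, the quality of the fit can be quantified with standard regression metrics (a hedged sketch reusing the `Y_test` and `Y_pred` arrays computed above):
```python
# Sketch: basic error metrics for the fitted simple linear regression.
from sklearn.metrics import mean_absolute_error, r2_score

print("MAE: %.3f" % mean_absolute_error(Y_test, Y_pred))
print("R^2: %.3f" % r2_score(Y_test, Y_pred))
```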
matrichn-metod.ipynb | ###Markdown
Matrix method: creating the matrix
###Code
import numpy as np
K = np.array([[-4., 8., -3.], [2., 7., 9.], [2., 1., -7.]])
K
###Output
_____no_output_____
###Markdown
The right-hand-side vector of the system
###Code
L = np.array([1., 4., 8.])
L
###Output
_____no_output_____
###Markdown
To solve the system we use the numpy.linalg.solve function
###Code
np.linalg.solve(K, L)
###Output
_____no_output_____ |
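###Markdown
The classical "matrix method" solves $Kx = L$ as $x = K^{-1}L$; `numpy.linalg.solve` computes the same solution more stably without forming the inverse explicitly. A quick check of the result (a sketch reusing `K` and `L` from the cells above):
```python
# Sketch: verify the solution and compare with the explicit-inverse form.
x = np.linalg.solve(K, L)
print(np.allclose(K @ x, L))                  # residual check
print(np.allclose(np.linalg.inv(K) @ L, x))   # explicit inverse gives the same x
```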
08_Model_Development/Model_Selection/GridSearchCV/LightGBM with GridSearchCV.ipynb | ###Markdown
Let's show what LightGBM can do
###Code
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import lightgbm as lgb
from lightgbm import LGBMClassifier
#from bayes_opt import BayesianOptimization
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Function for Measure Performance#
from sklearn import metrics
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True, show_roc_auc = True):
y_pred = clf.predict(X)
y_predprob = clf.predict_proba(X)[:,1]
if show_accuracy:
print ("Accuracy:{0:.3f}".format(metrics.accuracy_score(y,y_pred))),"\n"
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred)),"\n"
if show_confusion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred)),"\n"
if show_roc_auc:
print("ROC AUC Score")
print(metrics.roc_auc_score(y,y_predprob)),"\n"
# Load the data
train=pd.read_csv('train.csv',encoding='utf-8')
test=pd.read_csv('test_public.csv',encoding='utf-8')
submit=pd.read_csv('sampleSubmission.csv',encoding='utf-8')
#Data Preparation for LightGBM
import os
# use LabelEncoder to convert categorical features to int type before construct Dataset
from sklearn.preprocessing import LabelEncoder
def label_encoder(input_df, encoder_dict=None):
""" Process a dataframe into a form useable by LightGBM """
# Label encode categoricals
categorical_feats = input_df.columns[input_df.dtypes == 'object']
for feat in categorical_feats:
encoder = LabelEncoder()
input_df[feat] = encoder.fit_transform(input_df[feat].fillna('NULL'))
return input_df, categorical_feats.tolist(), encoder_dict
application_train, categorical_feats, encoder_dict = label_encoder(train)
X = train.drop(['Class'], axis=1)
y = train.Class
# Prepare dataset
seed = 7
test_size = 0.3
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
# Grid Search
print('Start training...')
estimator = lgb.LGBMClassifier(objective = 'binary', learning_rate = 0.05, n_estimators = 100, random_state=0)
param_grid = {
    'num_leaves': [30,35,40,45], # tried [35,40,45]; 40 worked best
    'feature_fraction': [0.2,0.3,0.4],# tried [0.2,0.3,0.4,0.5]; 0.4 worked best
    #'bagging_fraction': [0.6,0.7,0.8],
    'max_depth':[6,7,8],
    'max_bin':[20],
    #'lambda_l1':[0.3,0.6],# lambda_l1 added starting from V4
    'lambda_l2':[0.08,0.09,0.10],
    'min_split_gain':[0.04,0.05,0.06],# tried [0.04,0.05,0.06]; 0.06 worked best
    'min_child_weight':[7]
}
%time LGBM_grid = GridSearchCV(estimator, param_grid)
import warnings
warnings.filterwarnings("ignore")
%time LGBM_grid.fit(X_train, y_train)
print('Best parameters found by grid search are:', LGBM_grid.best_params_)
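# Side note (editorial): GridSearchCV refits the best parameter combination on
# the full training data by default (refit=True), so the tuned model is also
# available directly without re-instantiating it by hand, e.g.:
#     best_model = LGBM_grid.best_estimator_
#     best_model.predict_proba(X_test)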
# Final Model
evals_result = {}
print('Start predicting...')
LGBM= lgb.LGBMClassifier(objective = 'binary',
learning_rate = 0.05,
n_estimators = 100,
random_state=0,
num_leaves = LGBM_grid.best_params_['num_leaves'],
feature_fraction = LGBM_grid.best_params_['feature_fraction'],
#bagging_fraction = LGBM_grid.best_params_['bagging_fraction'],
max_depth = LGBM_grid.best_params_['max_depth'],
max_bin = LGBM_grid.best_params_['max_bin'],
#lambda_l1 = LGBM_grid.best_params_['lambda_l1'],
lambda_l2 = LGBM_grid.best_params_['lambda_l2'],
min_split_gain = LGBM_grid.best_params_['min_split_gain'],
min_child_weight = LGBM_grid.best_params_['min_child_weight'])
%time LGBM_fit = LGBM.fit(X_train, y_train)
print('Predicting is over')
#v2
LGBM_grid_measure = measure_performance(X = X_test, y = y_test, clf = LGBM, show_classification_report=True, show_confusion_matrix=True)
# feature importances
print('Feature importances:', list(LGBM.feature_importances_))
# visualization
print('Plot feature importances...')
ax = lgb.plot_importance(LGBM_fit, max_num_features=len(train))
plt.show()
#v1
LGBM_grid_measure = measure_performance(X = X_test, y = y_test, clf = LGBM, show_classification_report=True, show_confusion_matrix=True)
# feature importances
print('Feature importances:', list(LGBM.feature_importances_))
# visualization
print('Plot feature importances...')
ax = lgb.plot_importance(LGBM_fit, max_num_features=10)
plt.show()
LGBM_pred =LGBM.predict(test)
submit['Class'] = LGBM_pred
submit['Class'] = submit['Class'].astype(int)
submit.to_csv('submit_002.csv', index= False)
###Output
_____no_output_____ |
DataSets Python Scripts/Scripts/Extra.ipynb | ###Markdown
melt: Gather columns into rows.
###Code
print(pd.melt(data))
print(pd.melt(data1))
###Output
variable value
0 a 1
1 a 2
2 a 3
3 a 4
4 b 5
5 b 6
6 b 7
7 b 8
8 c 9
9 c 3
10 c 0
11 c 6
12 d 6
13 d 5
14 d 4
15 d 4
variable value
0 a 1
1 a 2
2 b 5
3 b 6
4 c 9
5 c 3
###Markdown
Append rows of DataFrames
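Note in the output below that the original row labels are kept, so the index values 0 and 1 repeat. Passing `ignore_index=True` gives the result a fresh RangeIndex instead (a hedged sketch reusing `data` and `data1`):
```python
# Sketch: row-wise concatenation with a renumbered index.
x_reset = pd.concat([data, data1], ignore_index=True, sort=True)
print(x_reset)
```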
###Code
x = pd.concat([data,data1], sort = 'True')
print(x)
###Output
a b c d
0 1 5 9 6.0
1 2 6 3 5.0
2 3 7 0 4.0
3 4 8 6 4.0
0 1 5 9 NaN
1 2 6 3 NaN
###Markdown
Append columns of DataFrames
###Code
x1 = pd.concat([data,data1], axis=1)
print(x1)
###Output
_____no_output_____ |
scripts/remote/.ipynb_checkpoints/ImportanceSampling Tests-Copy1-checkpoint.ipynb | ###Markdown
Importance Sampling Test Parameters
###Code
D = 100
mu = 1.2
mu_vec = np.full((D), mu)
samples = 30000
nu = 4 # Degrees of freedom
Sigma = 1.5*np.eye(D); Sigma[0,D-1] = 0.8; Sigma[D-1,0] = 0.8; Sigma[int(D/2),2] = 1.2
Sigma = Sigma*Sigma.T; Sigma = Sigma + D*np.eye(D)
L = np.linalg.cholesky(Sigma)
def f(x):
return np.mean(x)
###Output
_____no_output_____
###Markdown
Importance sampling test for $\mathcal{N}(\mu, \Sigma)$ from $\mathcal{N}(0, 1)$
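For orientation, the estimator exercised in these cells is, in one dimension: draw $x_i \sim q$, weight by $w_i = p(x_i)/q(x_i)$, and average $w_i f(x_i)$ to estimate $\mathbb{E}_p[f]$. A minimal hedged sketch:
```python
# Sketch: 1-D importance sampling of E_p[x] with a wider normal proposal q.
import numpy as np
from scipy.stats import norm

p, q = norm(1.2, 1.0), norm(0.0, 2.0)   # target and proposal
xs = q.rvs(size=50_000, random_state=0)
w = p.pdf(xs) / q.pdf(xs)
print(np.mean(w * xs), "~ E_p[x] = 1.2")
```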
###Code
mvn0 = multivariate_normal(mean=np.full((D), mu+0.1), cov=1.2*Sigma, allow_singular=False)
mvn = multivariate_normal(mean=mu_vec, cov=Sigma, allow_singular=False)
Z0 = mvn0.rvs(samples)#draw_mn(np.full((D), 0.), np.eye(D), samples)
X = mvn0.rvs(samples)#mu_vec + np.array([L.dot(Z0i) for Z0i in Z0])
#Linv = np.linalg.inv(L)
prob_factor = 1.#np.log(np.linalg.det(L))
q = np.array([ mvn0.pdf(Xi) for Xi in X])
p = np.array([ mvn.pdf(Xi) for Xi in X])
weight = prob_factor*p/q#np.exp(log_p+log_prob_factor-log_q)#
###Output
_____no_output_____
###Markdown
Importance sampling test for $t_{\nu}(\mu, \Sigma)$ from $\mathcal{N}(0, 1)$ and $\chi^2(\nu)$ Loop version
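(The draws below use the standard scale-mixture construction: if $z \sim \mathcal{N}(0, I)$, $c \sim \chi^2_{\nu}$ independently, and $LL^\top = \Sigma$, then $x = \mu + \sqrt{\nu/c}\, L z$ is distributed as $t_{\nu}(\mu, \Sigma)$.)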
###Code
mvt = multivariate_t_distribution(mu_vec,Sigma, nu) #
mvn0 = multivariate_normal(mean=np.full((D), 0.0), cov=np.eye(D), allow_singular=False)
mvn = multivariate_normal(mean=mu_vec, cov=Sigma, allow_singular=False)
mvl = multivariate_normal(mean=mu_vec, cov=Sigma, allow_singular=False)
chi2_dist = chi2(nu)
Z0 = mvn0.rvs(samples) # draw N(0,1) samples
c = chi2_dist.rvs(size=(samples,1,1)) # draw chi2 samples
weight = np.zeros(samples)
X = np.zeros((samples, D))
Lchi = np.sqrt(float(nu)/c)*L
for i in range(samples):
#Sigmap = Lchi.dot(Lchi.conj().T)
#Lchi_inv = np.linalg.inv(Lchi)
#prob_factor = 1.#np.log(np.linalg.det(Lchi))#np.log(np.abs(np.linalg.det(np.linalg.inv(Lchi))))
#mvn2 = multivariate_normal(mean=mu_vec, cov=Sigmap, allow_singular=False)
X[i] = mu_vec + Lchi[i,:,:].dot(Z0[i])
log_q = np.log(mvt.pdf(X[i]), dtype=np.longdouble)
log_p = mvn.logpdf(X[i]) #- chi2_dist.logpdf(c[i])#mvn2.logpdf(X[i]) + chi2_dist.logpdf(c[i])#
if log_q < -1e+20:
#print("Overflow")
weight[i] = 0.
else:
weight[i] = np.exp(log_p-log_q, dtype=np.longdouble)#prob_factor*p/q#
###Output
/home/jstobbe/ipcluster_1/ipcluster_1/lib/python3.6/site-packages/ipykernel_launcher.py:12: RuntimeWarning: overflow encountered in double_scalars
if sys.path[0] == '':
/home/jstobbe/ipcluster_1/ipcluster_1/lib/python3.6/site-packages/ipykernel_launcher.py:19: RuntimeWarning: divide by zero encountered in log
###Markdown
Vectorized version
###Code
print( np.average(weight) )
print( np.mean( np.apply_along_axis(f, 1, Z0) ) )
print( np.mean( np.apply_along_axis(f, 1, Z0)*weight))
print( np.mean( np.apply_along_axis(f, 1, X) ))
print( np.mean( np.apply_along_axis(f, 1, X)*weight))
np.min(np.abs(weight))
print("effective sample size: " + str(np.power(np.sum(weight),2)/np.sum(np.power(weight,2))) + " of " + str(samples))
print(str((len(weight[weight < 0.01]))/samples) + "% close to zero")
###Output
0.497% close to zero
###Markdown
Plots
###Code
r = sns.distplot(weight, bins=20, hist=True)
###Output
_____no_output_____
###Markdown
2D stuff
###Code
# note: this leftover cell assumes a 2-D setup (D == 2) with 2-column proposal draws `Z` and target draws `X`
df_Z = pd.DataFrame(Z, columns=["x", "y"])
df_X = pd.DataFrame(X, columns=["x", "y"])
g = sns.jointplot(x="x", y="y", data=df_Z, kind="kde", color="b")
g.plot_joint(plt.scatter, c="w", s=30, linewidth=1, marker="+")
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels("$X$", "$Y$");
g = sns.jointplot(x="x", y="y", data=df_X, kind="kde", color="b")
g.plot_joint(plt.scatter, c="w", s=30, linewidth=1, marker="+")
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels("$X$", "$Y$");
###Output
_____no_output_____ |
Task3_test.ipynb | ###Markdown
Task 3 test data
###Code
# import os
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"]="4"
import pickle
def save_obj(obj, name):
with open(name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name):
with open(name + '.pkl', 'rb') as f:
return pickle.load(f, encoding='latin1')
#load data
data1 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert1_normtext')
data2 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert2_normtext')
data3 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert_adrclass_normtext')
data4 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert1_realtext')
data5 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert1_adrclass_realtext')
data6 = load_obj('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert1_normtext')
data7 = load_obj ('/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/df_out_bert2_realtext')
sents7 = list(data7['text'])
sents3 = list(data3['text'])
sents4 = list(data4['text'])
sents5 = list(data5['text'])
sents6 = list(data6['text'])
sents1 = list(data1['text'])
sents2 = list(data2['text'])
# sents1_nw = []
# [sents1_nw.append(i) for i in sents1 if i != '-']
# sents1
###Output
_____no_output_____
###Markdown
Getting MedDRA IDs
###Code
from flair.models import TextClassifier
from flair.data import Sentence
path = '/data/dirksonar/Project3_sharedtasks_SMM4H/Task3/flair/final-model.pt'
path2 = '/data/dirksonar/Project3_sharedtasks_SMM4H/Task3/flair/with_alias2/final-model.pt'
classifier = TextClassifier.load_from_file(path)
# create example sentence
sentence = Sentence('getting no sleep')
# predict tags and print
classifier.predict(sentence)
z = sentence.labels
def predict_meddra (sent):
if sent == '-':
x = '-'
else:
sent2 = Sentence(sent)
classifier.predict(sent2)
x = sent2.labels
print(x)
return x
# test = ['head feels all funny', '-', 'gastrointestinal complaints', 'no sleep', 'addiction']
# out = [predict_meddra(i) for i in test]
# print(out)
predicted_bert1 = [predict_meddra(i) for i in sents1]
save_obj (predicted_bert1, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1')
predicted_bert2 = [predict_meddra(i) for i in sents2]
save_obj (predicted_bert2, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert2_norm')
classifier = TextClassifier.load_from_file(path2)
predicted_bert1_alias = [predict_meddra(i) for i in sents1]
save_obj (predicted_bert1_alias, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1_alias')
predicted_bert1_alias_realtxt = [predict_meddra(i) for i in sents4]
save_obj (predicted_bert1_alias_realtxt, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1_alias_realtxt')
classifier = TextClassifier.load_from_file(path)
predicted_bert1_realtxt = [predict_meddra(i) for i in sents4]
save_obj (predicted_bert1_realtxt, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1_realtxt')
# predicted_bert2_realtxt = [predict_meddra(i) for i in sents6]
# save_obj (predicted_bert2_realtxt, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert2_realtxt')
classifier = TextClassifier.load_from_file(path)
predicted_bert1_realtxt_adr = [predict_meddra(i) for i in sents5]
save_obj (predicted_bert1_realtxt_adr, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1_realtxt_adr')
predicted_bert1_adr = [predict_meddra(i) for i in sents3]
save_obj (predicted_bert1_adr, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert1_adr')
predicted_bert2 = [predict_meddra(i) for i in sents7]
save_obj (predicted_bert2, '/data/dirksonar/Project3_sharedtasks_SMM4H/testdata/meddra_predictions_bert2')
###Output
[10019198 (1.0)]
-
[10002368 (1.0)]
[10024919 (1.0)]
[10043890 (1.0)]
[10040528 (1.0)]
[10033371 (1.0)]
[10012174 (1.0)]
-
[10047896 (1.0)]
-
-
[10021654 (1.0)]
[10037844 (1.0)]
-
[10016759 (1.0)]
-
[10012217 (1.0)]
[10047700 (1.0)]
[10001125 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
-
-
[10040528 (1.0)]
[10067568 (1.0)]
-
[10049278 (1.0)]
-
-
[10008479 (1.0)]
[10073281 (1.0)]
[10041014 (1.0)]
[10001718 (1.0)]
[10033775 (1.0)]
[10013781 (1.0)]
[10042112 (1.0)]
[10017565 (1.0)]
[10043132 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10016370 (1.0)]
[10073281 (1.0)]
[10047896 (1.0)]
[10073281 (1.0)]
-
-
[10019211 (1.0)]
-
[10041018 (1.0)]
-
[10019045 (1.0)]
[10049901 (1.0)]
-
-
[10071175 (1.0)]
[10013767 (1.0)]
-
[10011906 (1.0)]
[10000125 (1.0)]
[10011865 (1.0)]
[10019063 (1.0)]
[10041349 (1.0)]
[10033775 (1.0)]
[10033775 (1.0)]
-
[10048010 (1.0)]
-
[10047896 (1.0)]
[10041349 (1.0)]
[10000447 (1.0)]
[10047896 (1.0)]
[10047862 (1.0)]
[10012727 (1.0)]
-
[10041349 (1.0)]
-
[10019045 (1.0)]
[10073281 (1.0)]
-
[10028813 (1.0)]
[10016558 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10041349 (1.0)]
[10040528 (1.0)]
[10014559 (1.0)]
[10034100 (1.0)]
[10004969 (1.0)]
[10016370 (1.0)]
[10001718 (1.0)]
-
-
[10073281 (1.0)]
[10073281 (1.0)]
-
[10022437 (1.0)]
[10016370 (1.0)]
-
[10043890 (1.0)]
-
[10001125 (1.0)]
[10045198 (1.0)]
[10047896 (1.0)]
[10045198 (1.0)]
[10036661 (1.0)]
[10071175 (1.0)]
-
-
-
-
[10024855 (1.0)]
[10026749 (1.0)]
[10065032 (1.0)]
[10006120 (1.0)]
[10073281 (1.0)]
-
[10019158 (1.0)]
-
[10012336 (1.0)]
[10073281 (1.0)]
[10043890 (1.0)]
[10001718 (1.0)]
[10016365 (1.0)]
[10016370 (1.0)]
-
-
-
[10016384 (1.0)]
[10011865 (1.0)]
[10013649 (1.0)]
-
[10013781 (1.0)]
[10067843 (1.0)]
[10013710 (1.0)]
[10001718 (1.0)]
[10041000 (1.0)]
[10024668 (1.0)]
[10043087 (1.0)]
-
[10027951 (1.0)]
[10012174 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
[10041017 (1.0)]
[10043890 (1.0)]
[10036596 (1.0)]
[10073281 (1.0)]
[10024919 (1.0)]
[10048010 (1.0)]
[10048010 (1.0)]
[10047896 (1.0)]
[10009851 (1.0)]
[10016256 (1.0)]
[10073281 (1.0)]
-
[10074314 (1.0)]
-
[10016370 (1.0)]
[10033775 (1.0)]
[10001125 (1.0)]
[10017565 (1.0)]
[10001718 (1.0)]
[10013746 (1.0)]
-
[10016256 (1.0)]
-
[10027374 (1.0)]
[10021654 (1.0)]
[10073281 (1.0)]
[10044126 (1.0)]
[10027374 (1.0)]
[10073281 (1.0)]
[10013767 (1.0)]
[10007517 (1.0)]
-
[10001718 (1.0)]
-
[10011293 (1.0)]
[10028836 (1.0)]
-
[10011906 (1.0)]
-
-
-
-
[10002368 (1.0)]
-
-
-
-
[10033371 (1.0)]
[10003591 (1.0)]
[10019045 (1.0)]
-
[10011224 (1.0)]
[10011224 (1.0)]
[10011906 (1.0)]
[10033775 (1.0)]
[10073281 (1.0)]
-
-
-
-
-
[10073281 (1.0)]
[10073281 (1.0)]
[10047896 (1.0)]
[10012727 (1.0)]
[10013746 (1.0)]
[10011293 (1.0)]
[10016365 (1.0)]
[10016333 (1.0)]
-
[10043890 (1.0)]
[10012217 (1.0)]
[10001495 (1.0)]
-
-
[10033775 (1.0)]
[10027599 (1.0)]
[10003988 (1.0)]
[10018028 (1.0)]
[10016365 (1.0)]
[10013649 (1.0)]
[10033371 (1.0)]
-
[10016336 (1.0)]
[10004716 (1.0)]
[10047986 (1.0)]
[10019211 (1.0)]
[10028836 (1.0)]
-
[10073281 (1.0)]
-
[10022876 (1.0)]
[10047862 (1.0)]
[10016370 (1.0)]
-
-
[10019304 (1.0)]
[10019300 (1.0)]
-
[10063006 (1.0)]
-
-
[10002855 (1.0)]
-
-
[10047862 (1.0)]
[10017949 (1.0)]
[10019211 (1.0)]
[10003028 (1.0)]
[10011001 (1.0)]
[10073281 (1.0)]
[10024855 (1.0)]
[10016336 (1.0)]
[10073281 (1.0)]
[10019136 (1.0)]
[10005755 (1.0)]
[10073281 (1.0)]
[10027374 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10016326 (1.0)]
[10041349 (1.0)]
[10016333 (1.0)]
-
[10073281 (1.0)]
-
[10016333 (1.0)]
[10023089 (1.0)]
[10061230 (1.0)]
[10040999 (1.0)]
[10019045 (1.0)]
-
-
[10073281 (1.0)]
[10042112 (1.0)]
[10016370 (1.0)]
[10021574 (1.0)]
[10019300 (1.0)]
-
[10027175 (1.0)]
[10027374 (1.0)]
[10045198 (1.0)]
[10073281 (1.0)]
[10012336 (1.0)]
-
-
[10033371 (1.0)]
-
[10004969 (1.0)]
[10012398 (1.0)]
[10073281 (1.0)]
-
[10006804 (1.0)]
[10012594 (1.0)]
[10029177 (1.0)]
[10001125 (1.0)]
-
[10040528 (1.0)]
[10012336 (1.0)]
[10016365 (1.0)]
[10037844 (1.0)]
[10033775 (1.0)]
[10073281 (1.0)]
[10022437 (1.0)]
[10041001 (1.0)]
[10029855 (1.0)]
-
[10033371 (1.0)]
[10043087 (1.0)]
[10047896 (1.0)]
[10043890 (1.0)]
[10013580 (1.0)]
-
-
[10042661 (1.0)]
-
-
-
-
-
[10013649 (1.0)]
[10071175 (1.0)]
-
-
-
-
-
[10048010 (1.0)]
[10042213 (1.0)]
-
-
-
[10016558 (1.0)]
[10013661 (1.0)]
-
[10024870 (1.0)]
[10043890 (1.0)]
[10011001 (1.0)]
[10043890 (1.0)]
-
[10016365 (1.0)]
-
[10001125 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10070679 (1.0)]
-
-
[10027374 (1.0)]
-
-
[10071175 (1.0)]
[10047896 (1.0)]
[10011001 (1.0)]
[10041000 (1.0)]
[10004969 (1.0)]
[10033664 (1.0)]
[10024855 (1.0)]
[10040528 (1.0)]
[10013580 (1.0)]
-
-
[10002855 (1.0)]
-
[10027175 (1.0)]
[10073281 (1.0)]
[10023477 (1.0)]
[10071175 (1.0)]
[10043087 (1.0)]
[10046571 (1.0)]
[10043248 (1.0)]
[10033432 (1.0)]
-
-
[10043087 (1.0)]
-
-
-
-
[10015598 (1.0)]
[10042661 (1.0)]
[10016791 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
-
[10033434 (1.0)]
[10005889 (1.0)]
-
[10040617 (1.0)]
[10041014 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10013710 (1.0)]
[10001718 (1.0)]
[10073281 (1.0)]
-
[10015667 (1.0)]
-
-
[10047896 (1.0)]
[10033864 (1.0)]
-
-
[10073281 (1.0)]
-
[10011001 (1.0)]
-
-
[10033371 (1.0)]
[10015667 (1.0)]
[10011001 (1.0)]
[10041014 (1.0)]
[10019300 (1.0)]
[10001125 (1.0)]
[10012336 (1.0)]
-
-
-
[10028823 (1.0)]
[10042661 (1.0)]
[10042661 (1.0)]
[10073281 (1.0)]
-
[10033645 (1.0)]
[10073281 (1.0)]
[10022086 (1.0)]
[10022437 (1.0)]
[10073281 (1.0)]
-
[10012336 (1.0)]
-
[10042076 (1.0)]
[10019211 (1.0)]
-
[10041018 (1.0)]
[10020554 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10040831 (1.0)]
[10001718 (1.0)]
[10056484 (1.0)]
[10013649 (1.0)]
[10013573 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10027374 (1.0)]
-
-
-
[10027176 (1.0)]
[10027175 (1.0)]
[10073281 (1.0)]
-
[10047896 (1.0)]
[10033371 (1.0)]
-
-
-
[10073281 (1.0)]
-
-
[10001718 (1.0)]
-
-
[10000424 (1.0)]
[10000424 (1.0)]
[10071175 (1.0)]
[10016365 (1.0)]
[10013781 (1.0)]
-
-
[10033371 (1.0)]
-
[10040528 (1.0)]
[10040528 (1.0)]
-
-
-
[10040528 (1.0)]
[10073281 (1.0)]
[10043132 (1.0)]
[10043132 (1.0)]
[10073281 (1.0)]
-
[10011906 (1.0)]
[10019063 (1.0)]
[10033371 (1.0)]
[10070679 (1.0)]
-
[10019158 (1.0)]
[10073281 (1.0)]
-
[10012336 (1.0)]
[10016365 (1.0)]
[10022437 (1.0)]
[10013781 (1.0)]
[10047896 (1.0)]
[10041017 (1.0)]
[10020197 (1.0)]
[10016370 (1.0)]
[10009846 (1.0)]
[10027374 (1.0)]
-
[10011293 (1.0)]
[10054248 (1.0)]
[10019300 (1.0)]
[10019211 (1.0)]
-
-
-
-
[10019045 (1.0)]
[10033371 (1.0)]
-
[10027374 (1.0)]
[10012727 (1.0)]
[10022876 (1.0)]
-
[10020197 (1.0)]
[10041014 (1.0)]
[10047862 (1.0)]
-
[10014559 (1.0)]
[10073281 (1.0)]
[10012378 (1.0)]
[10047896 (1.0)]
[10033775 (1.0)]
[10033371 (1.0)]
[10033371 (1.0)]
[10043132 (1.0)]
-
-
[10047896 (1.0)]
-
[10047896 (1.0)]
[10047986 (1.0)]
-
-
[10036828 (1.0)]
[10073281 (1.0)]
[10042494 (1.0)]
[10073281 (1.0)]
-
[10047896 (1.0)]
[10000125 (1.0)]
[n (1.0)]
-
[10027599 (1.0)]
[10016370 (1.0)]
[10040831 (1.0)]
[10033532 (1.0)]
[10038194 (1.0)]
[10001540 (1.0)]
[10028813 (1.0)]
-
[10040528 (1.0)]
-
-
[10041349 (1.0)]
[10011469 (1.0)]
[10041349 (1.0)]
[10062375 (1.0)]
[10043890 (1.0)]
[10061230 (1.0)]
[10023211 (1.0)]
-
[10047896 (1.0)]
[10047896 (1.0)]
[10002368 (1.0)]
-
-
-
[10012174 (1.0)]
-
[10012378 (1.0)]
[10012336 (1.0)]
[10023089 (1.0)]
[10001718 (1.0)]
[10047700 (1.0)]
[10022437 (1.0)]
-
-
-
[10073281 (1.0)]
[10012378 (1.0)]
[10037804 (1.0)]
-
[10042494 (1.0)]
[10041014 (1.0)]
[10041014 (1.0)]
[10048013 (1.0)]
-
[10011293 (1.0)]
[10016333 (1.0)]
-
-
[10036661 (1.0)]
[10022437 (1.0)]
[10049278 (1.0)]
-
-
-
[10028823 (1.0)]
-
-
[10043890 (1.0)]
[10033371 (1.0)]
[10015667 (1.0)]
-
-
[10047896 (1.0)]
[10073281 (1.0)]
-
[10012594 (1.0)]
[10043087 (1.0)]
-
[10013767 (1.0)]
[10073281 (1.0)]
-
-
-
[10001639 (1.0)]
[10019045 (1.0)]
[10027175 (1.0)]
-
-
-
[10073281 (1.0)]
[10073281 (1.0)]
[10001718 (1.0)]
-
-
[10040831 (1.0)]
[10041018 (1.0)]
-
-
[10041349 (1.0)]
-
-
[10033775 (1.0)]
[10039379 (1.0)]
-
-
-
[10047896 (1.0)]
[10071175 (1.0)]
[10065032 (1.0)]
[10033775 (1.0)]
[10033775 (1.0)]
[10079614 (1.0)]
[10033775 (1.0)]
[10017565 (1.0)]
-
[10073281 (1.0)]
[10013754 (1.0)]
[10037234 (1.0)]
[10006784 (1.0)]
[10073281 (1.0)]
-
-
-
-
-
[10059933 (1.0)]
[10019158 (1.0)]
-
[10043132 (1.0)]
[10019211 (1.0)]
[10043132 (1.0)]
[10024668 (1.0)]
[10001718 (1.0)]
-
[10038683 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10011906 (1.0)]
-
[10001718 (1.0)]
-
-
[10019158 (1.0)]
[10073281 (1.0)]
[10016256 (1.0)]
-
[10045148 (1.0)]
[10033371 (1.0)]
-
-
-
[10019045 (1.0)]
[10047896 (1.0)]
-
-
[10017565 (1.0)]
[10073281 (1.0)]
-
-
-
[10073281 (1.0)]
[10037234 (1.0)]
[10043890 (1.0)]
-
[10073281 (1.0)]
[10003028 (1.0)]
-
[10019045 (1.0)]
[10021402 (1.0)]
[10073281 (1.0)]
[10042112 (1.0)]
[10016333 (1.0)]
-
[10073281 (1.0)]
[10016370 (1.0)]
-
[10033775 (1.0)]
[10016384 (1.0)]
[10027374 (1.0)]
[10047896 (1.0)]
[10039379 (1.0)]
[10019063 (1.0)]
[10041014 (1.0)]
[10011906 (1.0)]
[10021574 (1.0)]
[10047896 (1.0)]
[10041014 (1.0)]
[10022437 (1.0)]
[10020772 (1.0)]
[10041014 (1.0)]
-
[10047896 (1.0)]
[10033557 (1.0)]
[10033775 (1.0)]
-
-
-
[10036828 (1.0)]
[10016256 (1.0)]
[10041349 (1.0)]
-
-
-
-
[10004969 (1.0)]
[10043087 (1.0)]
[10047862 (1.0)]
[10042112 (1.0)]
[10073281 (1.0)]
[10029898 (1.0)]
-
-
[10016370 (1.0)]
[10016365 (1.0)]
[10073281 (1.0)]
[10058726 (1.0)]
[10073281 (1.0)]
[10040528 (1.0)]
[10040528 (1.0)]
[10012727 (1.0)]
-
[10033775 (1.0)]
-
[10019211 (1.0)]
[10016256 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10033645 (1.0)]
[10047896 (1.0)]
[10019045 (1.0)]
-
[10043132 (1.0)]
[10071175 (1.0)]
[10012174 (1.0)]
-
[10001718 (1.0)]
-
[10033371 (1.0)]
-
[10027372 (1.0)]
[10011293 (1.0)]
-
[10019211 (1.0)]
-
[10041374 (1.0)]
[10012336 (1.0)]
[10016256 (1.0)]
-
-
[10019300 (1.0)]
-
-
[10073281 (1.0)]
[10029412 (1.0)]
-
[10043087 (1.0)]
[10016365 (1.0)]
-
[10043890 (1.0)]
[10016323 (1.0)]
[10040995 (1.0)]
-
[10041014 (1.0)]
[10016275 (1.0)]
-
[10073281 (1.0)]
[10034100 (1.0)]
[10016370 (1.0)]
-
-
-
[10041018 (1.0)]
-
[10027374 (1.0)]
[10047896 (1.0)]
[10028813 (1.0)]
[10033371 (1.0)]
[10016365 (1.0)]
[10056465 (1.0)]
-
[10013767 (1.0)]
-
[10073281 (1.0)]
[10040528 (1.0)]
-
[10040831 (1.0)]
-
[10043890 (1.0)]
[10041014 (1.0)]
[10040528 (1.0)]
[10027374 (1.0)]
-
-
[10004969 (1.0)]
[10016384 (1.0)]
-
[10006784 (1.0)]
[10037804 (1.0)]
[10073281 (1.0)]
[10043890 (1.0)]
[10019203 (1.0)]
[10047986 (1.0)]
[10073281 (1.0)]
[10013767 (1.0)]
-
[10045148 (1.0)]
-
-
-
-
[10016876 (1.0)]
[10028822 (1.0)]
[10043132 (1.0)]
[10042494 (1.0)]
[10073281 (1.0)]
-
[10041000 (1.0)]
-
-
-
[10073281 (1.0)]
[10037844 (1.0)]
[10007517 (1.0)]
-
-
[10033775 (1.0)]
-
[10071175 (1.0)]
[10047896 (1.0)]
[10041018 (1.0)]
[10009851 (1.0)]
-
-
-
[10047900 (1.0)]
-
[10011001 (1.0)]
-
[10073281 (1.0)]
[10049119 (1.0)]
[10016365 (1.0)]
[10027374 (1.0)]
[10041014 (1.0)]
[10019158 (1.0)]
[10033371 (1.0)]
[10013746 (1.0)]
-
[10012378 (1.0)]
[10027374 (1.0)]
-
[10073281 (1.0)]
[10027599 (1.0)]
-
[10001718 (1.0)]
[10012336 (1.0)]
-
[10019158 (1.0)]
-
-
[10047896 (1.0)]
[10042112 (1.0)]
[10038740 (1.0)]
[10019297 (1.0)]
[10040528 (1.0)]
[10016323 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10012398 (1.0)]
[10033371 (1.0)]
[10049119 (1.0)]
[10016876 (1.0)]
[10016334 (1.0)]
[10045148 (1.0)]
[10047986 (1.0)]
[10003988 (1.0)]
[10011469 (1.0)]
-
[10073281 (1.0)]
-
[10001718 (1.0)]
-
[10002368 (1.0)]
-
-
-
[10012594 (1.0)]
-
-
[10042076 (1.0)]
[10012727 (1.0)]
[10011001 (1.0)]
[10041349 (1.0)]
-
-
-
-
[10043087 (1.0)]
-
-
[10029412 (1.0)]
-
[10041001 (1.0)]
-
[10016336 (1.0)]
[10047900 (1.0)]
[10041014 (1.0)]
[10073281 (1.0)]
[10003988 (1.0)]
[10047896 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10033775 (1.0)]
-
[10000496 (1.0)]
-
[10021030 (1.0)]
-
[10013663 (1.0)]
[10000496 (1.0)]
[10027372 (1.0)]
[10012536 (1.0)]
[10016365 (1.0)]
[10047896 (1.0)]
[10047986 (1.0)]
[10041374 (1.0)]
[10048010 (1.0)]
[10042661 (1.0)]
[10023000 (1.0)]
[10003591 (1.0)]
-
[10043087 (1.0)]
[10041014 (1.0)]
-
-
-
[10048010 (1.0)]
[10029414 (1.0)]
[10006784 (1.0)]
-
-
-
-
[10041014 (1.0)]
[10047986 (1.0)]
[10073281 (1.0)]
[10011906 (1.0)]
[10033775 (1.0)]
-
[10043132 (1.0)]
[10047896 (1.0)]
[10040831 (1.0)]
-
[10020772 (1.0)]
[10016340 (1.0)]
-
-
[10016364 (1.0)]
[10042458 (1.0)]
[10040604 (1.0)]
[10025082 (1.0)]
-
-
-
[10040528 (1.0)]
[10061758 (1.0)]
[10001125 (1.0)]
[10040558 (1.0)]
[10033557 (1.0)]
[10040831 (1.0)]
[10028822 (1.0)]
-
-
[10073281 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10071175 (1.0)]
[10019300 (1.0)]
[10047896 (1.0)]
[10013710 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10041018 (1.0)]
[10011293 (1.0)]
[10062375 (1.0)]
[10021402 (1.0)]
[10003988 (1.0)]
[10017565 (1.0)]
[10033371 (1.0)]
[10011001 (1.0)]
[10040528 (1.0)]
[10073281 (1.0)]
[10047896 (1.0)]
[10027951 (1.0)]
-
[10073281 (1.0)]
-
-
-
[10040559 (1.0)]
[10073281 (1.0)]
[10013580 (1.0)]
-
[10006774 (1.0)]
[10027175 (1.0)]
-
-
-
-
[10073281 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
[10001125 (1.0)]
[10073281 (1.0)]
-
[10047862 (1.0)]
[10073281 (1.0)]
[10049183 (1.0)]
[10002368 (1.0)]
-
-
[10000424 (1.0)]
-
[10041349 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
-
-
[10073281 (1.0)]
-
-
-
[10004969 (1.0)]
-
[10047896 (1.0)]
-
[10019300 (1.0)]
-
[10041017 (1.0)]
[10047904 (1.0)]
-
[10013767 (1.0)]
[10047862 (1.0)]
-
-
-
[10041374 (1.0)]
[10016365 (1.0)]
[10073281 (1.0)]
[10015667 (1.0)]
[10047896 (1.0)]
-
[10019300 (1.0)]
[10006774 (1.0)]
[10005889 (1.0)]
[10004969 (1.0)]
-
[10016256 (1.0)]
[10073281 (1.0)]
[10043132 (1.0)]
-
-
[10033665 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
-
-
[10003030 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10043132 (1.0)]
[10014555 (1.0)]
[10041349 (1.0)]
[10022437 (1.0)]
-
[10019045 (1.0)]
[10042494 (1.0)]
-
[10011469 (1.0)]
[10016370 (1.0)]
[10058672 (1.0)]
[10033371 (1.0)]
[10033371 (1.0)]
-
-
-
[10041018 (1.0)]
-
[10033371 (1.0)]
-
[10016256 (1.0)]
[10041349 (1.0)]
[10016256 (1.0)]
[10039906 (1.0)]
[10041349 (1.0)]
[10014559 (1.0)]
[10027599 (1.0)]
[10027599 (1.0)]
[10001718 (1.0)]
[10039379 (1.0)]
[10001718 (1.0)]
[10073281 (1.0)]
-
[10013932 (1.0)]
[10061230 (1.0)]
-
-
-
[10047896 (1.0)]
[10042458 (1.0)]
[10042458 (1.0)]
-
-
-
[10073281 (1.0)]
[10073281 (1.0)]
[10006784 (1.0)]
[10027374 (1.0)]
[10027374 (1.0)]
-
-
-
[10001718 (1.0)]
[10013781 (1.0)]
[10019211 (1.0)]
[10047896 (1.0)]
-
-
[10016365 (1.0)]
[10073281 (1.0)]
[10012336 (1.0)]
[10073281 (1.0)]
[10027374 (1.0)]
[10073281 (1.0)]
[10000125 (1.0)]
[10073281 (1.0)]
[10033864 (1.0)]
-
-
-
-
-
-
-
[10024130 (1.0)]
[10016876 (1.0)]
-
[10016336 (1.0)]
-
[10013746 (1.0)]
[10011293 (1.0)]
[10073281 (1.0)]
-
[10047896 (1.0)]
-
-
[10023000 (1.0)]
[10033371 (1.0)]
-
[10040831 (1.0)]
[10070679 (1.0)]
-
-
-
[10016256 (1.0)]
[10001718 (1.0)]
-
-
[10041349 (1.0)]
[10019136 (1.0)]
-
[10070679 (1.0)]
[10042112 (1.0)]
-
[10047810 (1.0)]
[10073281 (1.0)]
-
-
[10047896 (1.0)]
[10048010 (1.0)]
[10033434 (1.0)]
[10033775 (1.0)]
[10009696 (1.0)]
[10047896 (1.0)]
[10003988 (1.0)]
-
-
[10033775 (1.0)]
-
[10013754 (1.0)]
[10016333 (1.0)]
[10013781 (1.0)]
-
[10073281 (1.0)]
-
[10047896 (1.0)]
-
[10040558 (1.0)]
[10019158 (1.0)]
-
-
-
-
[10073281 (1.0)]
[10022437 (1.0)]
[10073281 (1.0)]
-
[10073281 (1.0)]
[10022437 (1.0)]
[10001718 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10019045 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
-
-
[10073281 (1.0)]
[10038683 (1.0)]
[10001718 (1.0)]
-
[10045198 (1.0)]
[10016370 (1.0)]
[10073281 (1.0)]
[10012378 (1.0)]
[10073281 (1.0)]
[10048013 (1.0)]
[10011001 (1.0)]
[10043087 (1.0)]
[10015667 (1.0)]
[10043087 (1.0)]
[10011001 (1.0)]
[10073281 (1.0)]
[10043087 (1.0)]
-
[10033775 (1.0)]
[10001718 (1.0)]
[10037234 (1.0)]
[10041349 (1.0)]
[10033775 (1.0)]
[10049901 (1.0)]
[10027175 (1.0)]
-
-
-
[10015667 (1.0)]
-
-
-
[10011001 (1.0)]
[10016256 (1.0)]
[10043087 (1.0)]
[10040528 (1.0)]
[10045198 (1.0)]
[10024668 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
-
[10028823 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
[10016365 (1.0)]
[10073281 (1.0)]
[10029177 (1.0)]
[10073281 (1.0)]
-
[10073281 (1.0)]
[10035067 (1.0)]
[10006804 (1.0)]
[10040617 (1.0)]
[10016791 (1.0)]
-
[10047896 (1.0)]
[10073281 (1.0)]
-
[10016370 (1.0)]
-
-
-
[10000447 (1.0)]
-
[10011906 (1.0)]
[10041000 (1.0)]
[10002368 (1.0)]
[10073281 (1.0)]
[10027374 (1.0)]
[10073281 (1.0)]
[10012727 (1.0)]
[10006804 (1.0)]
[10033371 (1.0)]
-
[10006804 (1.0)]
[10045148 (1.0)]
-
[10043132 (1.0)]
[10048010 (1.0)]
-
[10011906 (1.0)]
-
[10016365 (1.0)]
[10041017 (1.0)]
-
-
[10047896 (1.0)]
[10012398 (1.0)]
-
-
-
[10012336 (1.0)]
-
-
[10073281 (1.0)]
[10019211 (1.0)]
[10073281 (1.0)]
[10020772 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10041349 (1.0)]
[10028836 (1.0)]
[10028836 (1.0)]
[10001125 (1.0)]
[10011865 (1.0)]
-
-
[10019045 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10013573 (1.0)]
[10073281 (1.0)]
[10028823 (1.0)]
[10034100 (1.0)]
-
-
-
-
-
[10047896 (1.0)]
[10047896 (1.0)]
[10073281 (1.0)]
[10043087 (1.0)]
[10073281 (1.0)]
-
[10001125 (1.0)]
-
-
[10073281 (1.0)]
[10013746 (1.0)]
[10001718 (1.0)]
[10043087 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
[10018304 (1.0)]
[10033775 (1.0)]
-
[10019136 (1.0)]
[10002368 (1.0)]
[10029177 (1.0)]
[10017565 (1.0)]
[10011293 (1.0)]
-
[10073281 (1.0)]
-
-
-
[10015667 (1.0)]
[10006774 (1.0)]
[10016365 (1.0)]
-
-
[10021654 (1.0)]
[10013781 (1.0)]
[10016365 (1.0)]
[10048010 (1.0)]
[10037234 (1.0)]
-
[10005755 (1.0)]
[10028823 (1.0)]
-
-
[10019136 (1.0)]
[10019300 (1.0)]
[10043709 (1.0)]
-
[10033371 (1.0)]
[10048013 (1.0)]
[10071175 (1.0)]
[10013746 (1.0)]
[10027374 (1.0)]
[10033432 (1.0)]
[10062519 (1.0)]
[10019045 (1.0)]
[10016370 (1.0)]
-
[10012174 (1.0)]
[10073281 (1.0)]
[10012378 (1.0)]
[10019158 (1.0)]
[10043087 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
[10011906 (1.0)]
[10013580 (1.0)]
[10073281 (1.0)]
[10045198 (1.0)]
[10041014 (1.0)]
[10011865 (1.0)]
-
[10041017 (1.0)]
[10008479 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
[10028823 (1.0)]
[10024490 (1.0)]
[10062375 (1.0)]
[10028823 (1.0)]
[10027374 (1.0)]
[10006804 (1.0)]
[10016821 (1.0)]
[10001125 (1.0)]
-
-
[10016370 (1.0)]
[10036661 (1.0)]
[10047896 (1.0)]
[10045148 (1.0)]
[10073281 (1.0)]
-
[10001718 (1.0)]
-
[10019158 (1.0)]
[10027352 (1.0)]
[10041017 (1.0)]
[10024130 (1.0)]
-
-
-
-
[10037844 (1.0)]
-
[10073281 (1.0)]
[10027599 (1.0)]
-
[10001718 (1.0)]
[10006774 (1.0)]
-
[10073281 (1.0)]
[10017565 (1.0)]
[10028823 (1.0)]
-
[10003028 (1.0)]
-
-
[10042458 (1.0)]
-
-
[10040558 (1.0)]
-
[10011001 (1.0)]
[10016275 (1.0)]
[10024490 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10047896 (1.0)]
[10016807 (1.0)]
-
-
[10033371 (1.0)]
[10041349 (1.0)]
[10047896 (1.0)]
[10047896 (1.0)]
[10012336 (1.0)]
[10001125 (1.0)]
-
-
-
[10047896 (1.0)]
[10016365 (1.0)]
[10047700 (1.0)]
[10044126 (1.0)]
-
[10040528 (1.0)]
-
[10047896 (1.0)]
[10016256 (1.0)]
[10073281 (1.0)]
[10016370 (1.0)]
[10016384 (1.0)]
[10019297 (1.0)]
-
-
-
-
[10033371 (1.0)]
-
-
[10011293 (1.0)]
[10047896 (1.0)]
[10040528 (1.0)]
[10041014 (1.0)]
[10033371 (1.0)]
[10013781 (1.0)]
[10021574 (1.0)]
-
-
[10040831 (1.0)]
[10013778 (1.0)]
[10013781 (1.0)]
-
[10019045 (1.0)]
[10049278 (1.0)]
[10045148 (1.0)]
[10041349 (1.0)]
[10011001 (1.0)]
[10041014 (1.0)]
[10073281 (1.0)]
[10019211 (1.0)]
[10005124 (1.0)]
[10000496 (1.0)]
[10073281 (1.0)]
[10043087 (1.0)]
[10012378 (1.0)]
-
-
[10001718 (1.0)]
[10073281 (1.0)]
[10016256 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10041349 (1.0)]
-
-
[10073281 (1.0)]
-
[10001718 (1.0)]
[10073281 (1.0)]
[10019045 (1.0)]
-
[10040558 (1.0)]
-
[10038743 (1.0)]
[10019300 (1.0)]
[10058726 (1.0)]
[10000447 (1.0)]
-
-
-
[10073281 (1.0)]
[10001718 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10033371 (1.0)]
[10043890 (1.0)]
[10062519 (1.0)]
-
-
-
-
-
-
-
[10041349 (1.0)]
-
[10073281 (1.0)]
[10033371 (1.0)]
-
-
-
-
-
[10041014 (1.0)]
-
-
[10042112 (1.0)]
-
-
-
[10029414 (1.0)]
[10042209 (1.0)]
[10073281 (1.0)]
[10012727 (1.0)]
[10039906 (1.0)]
[10016323 (1.0)]
[10041049 (1.0)]
-
[10042112 (1.0)]
-
[10047862 (1.0)]
[10073281 (1.0)]
-
[10041349 (1.0)]
[10041052 (1.0)]
[10040558 (1.0)]
[10042661 (1.0)]
[10047862 (1.0)]
-
[10027951 (1.0)]
[10029414 (1.0)]
[10073281 (1.0)]
-
-
[10001718 (1.0)]
-
[10006804 (1.0)]
-
[10027374 (1.0)]
-
[10012336 (1.0)]
-
[10015667 (1.0)]
[10073281 (1.0)]
[10020197 (1.0)]
[10073281 (1.0)]
-
-
[10024262 (1.0)]
-
-
[10027599 (1.0)]
-
-
[10033775 (1.0)]
[10056484 (1.0)]
-
-
-
-
[10019211 (1.0)]
[10038743 (1.0)]
[10021654 (1.0)]
[10041018 (1.0)]
[10073281 (1.0)]
[10011469 (1.0)]
[10047700 (1.0)]
-
[10045148 (1.0)]
[10027372 (1.0)]
[10016370 (1.0)]
[10043890 (1.0)]
-
[10023191 (1.0)]
[10038001 (1.0)]
-
-
[10073281 (1.0)]
-
-
[10043087 (1.0)]
[10010300 (1.0)]
[10073281 (1.0)]
[10013767 (1.0)]
-
-
[10027372 (1.0)]
-
[10073281 (1.0)]
[10070679 (1.0)]
-
-
-
[10041349 (1.0)]
-
[10021428 (1.0)]
[10042112 (1.0)]
-
[10043890 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10012378 (1.0)]
[10014843 (1.0)]
[10019045 (1.0)]
-
[10033665 (1.0)]
[10073281 (1.0)]
[10041018 (1.0)]
[10043890 (1.0)]
-
[10040831 (1.0)]
-
[10017565 (1.0)]
[10027175 (1.0)]
[10043087 (1.0)]
[10073281 (1.0)]
[10016370 (1.0)]
-
-
-
[10070679 (1.0)]
[10013781 (1.0)]
-
-
-
-
[10062375 (1.0)]
[10073281 (1.0)]
[10016365 (1.0)]
-
-
[10012398 (1.0)]
[10073281 (1.0)]
[10047896 (1.0)]
-
[10011001 (1.0)]
-
[10016337 (1.0)]
[10043087 (1.0)]
[10016323 (1.0)]
[10033371 (1.0)]
[10040528 (1.0)]
[10028339 (1.0)]
[10027175 (1.0)]
[10033775 (1.0)]
-
[10033371 (1.0)]
[10043890 (1.0)]
-
[10073281 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10073281 (1.0)]
-
[10004969 (1.0)]
[10070679 (1.0)]
-
[10047896 (1.0)]
[10033371 (1.0)]
[10036402 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10016256 (1.0)]
-
[10047986 (1.0)]
[10041349 (1.0)]
[10033371 (1.0)]
[10027599 (1.0)]
-
-
[10073281 (1.0)]
[10040528 (1.0)]
-
-
[10041014 (1.0)]
[10026749 (1.0)]
[10001125 (1.0)]
[10001125 (1.0)]
[10013932 (1.0)]
[10074314 (1.0)]
-
-
[10001718 (1.0)]
[10006804 (1.0)]
-
[10073281 (1.0)]
-
[10073281 (1.0)]
-
-
[10028823 (1.0)]
[10048010 (1.0)]
[10002855 (1.0)]
-
[10016336 (1.0)]
-
[10027374 (1.0)]
[10033371 (1.0)]
[10027374 (1.0)]
-
[10025082 (1.0)]
[10019304 (1.0)]
[10017565 (1.0)]
[10071175 (1.0)]
[10041001 (1.0)]
-
[10011001 (1.0)]
[10073281 (1.0)]
-
[10073281 (1.0)]
[10043890 (1.0)]
[10033775 (1.0)]
-
-
-
[10019211 (1.0)]
[10028813 (1.0)]
-
[10012378 (1.0)]
-
[10043890 (1.0)]
-
[10041018 (1.0)]
[10016333 (1.0)]
[10027374 (1.0)]
[10024668 (1.0)]
[10073281 (1.0)]
[10013580 (1.0)]
[10019045 (1.0)]
-
-
[10006774 (1.0)]
-
-
-
[10033371 (1.0)]
-
[10043087 (1.0)]
[10019158 (1.0)]
-
-
-
-
-
[10073281 (1.0)]
[10012336 (1.0)]
[10073281 (1.0)]
[10033371 (1.0)]
[10073281 (1.0)]
[10021428 (1.0)]
[10003028 (1.0)]
-
-
[10013746 (1.0)]
[10019045 (1.0)]
[10073281 (1.0)]
-
[10073281 (1.0)]
[10043890 (1.0)]
[10010774 (1.0)]
-
-
[10040528 (1.0)]
[10019203 (1.0)]
-
-
[10073281 (1.0)]
|
docs/source/user_guide/clean/clean_ch_esr.ipynb | ###Markdown
Swiss Einzahlungsschein mit Referenznummer (ESR)

Introduction

The function `clean_ch_esr()` cleans a column containing Swiss Einzahlungsschein mit Referenznummer (ESR) strings, and standardizes them in a given format. The function `validate_ch_esr()` validates either a single ESR string, a column of ESR strings or a DataFrame of ESR strings, returning `True` if the value is valid, and `False` otherwise.

ESR strings can be converted to the following formats via the `output_format` parameter:

* `compact`: only number strings without any separators or whitespace, like "1878583"
* `standard`: ESR strings with proper whitespace in the proper places, like "00 00000 00000 00000 00018 78583"

Invalid parsing is handled with the `errors` parameter:

* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception

The following sections demonstrate the functionality of `clean_ch_esr()` and `validate_ch_esr()`.

An example dataset containing ESR strings
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"esr": [
"18 78583",
"210000000003139471430009016",
"51824753556",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
###Output
_____no_output_____
###Markdown
1. Default `clean_ch_esr`

By default, `clean_ch_esr` will clean ESR strings and output them in the standard format with proper separators.
###Code
from dataprep.clean import clean_ch_esr
clean_ch_esr(df, column = "esr")
###Output
_____no_output_____
###Markdown
2. Output formats

This section demonstrates the `output_format` parameter.

`standard` (default)
###Code
clean_ch_esr(df, column = "esr", output_format="standard")
###Output
_____no_output_____
###Markdown
`compact`
###Code
clean_ch_esr(df, column = "esr", output_format="compact")
###Output
_____no_output_____
###Markdown
3. `inplace` parameter

This deletes the given column from the returned DataFrame. A new column containing cleaned ESR strings is added with a title in the format `"{original title}_clean"`.
###Code
clean_ch_esr(df, column="esr", inplace=True)
###Output
_____no_output_____
###Markdown
4. `errors` parameter

`coerce` (default)
###Code
clean_ch_esr(df, "esr", errors="coerce")
###Output
_____no_output_____
###Markdown
`ignore`
###Code
clean_ch_esr(df, "esr", errors="ignore")
###Output
_____no_output_____
###Markdown
5. `validate_ch_esr()`

`validate_ch_esr()` returns `True` when the input is a valid ESR. Otherwise it returns `False`.

The input of `validate_ch_esr()` can be a string, a Pandas Series, a Dask Series, a Pandas DataFrame or a Dask DataFrame.

When the input is a string, a Pandas Series or a Dask Series, the user does not need to specify a column name to be validated. When the input is a Pandas DataFrame or a Dask DataFrame, the user may or may not specify a column name to be validated. If the user specifies a column name, `validate_ch_esr()` only returns the validation result for that column. If the user does not specify a column name, `validate_ch_esr()` returns the validation result for the whole DataFrame.
###Code
from dataprep.clean import validate_ch_esr
print(validate_ch_esr("18 78583"))
print(validate_ch_esr("210000000003139471430009016"))
print(validate_ch_esr("51824753556"))
print(validate_ch_esr("51 824 753 556"))
print(validate_ch_esr("hello"))
print(validate_ch_esr(np.nan))
print(validate_ch_esr("NULL"))
###Output
_____no_output_____
###Markdown
Series
###Code
validate_ch_esr(df["esr"])
###Output
_____no_output_____
###Markdown
DataFrame + Specify Column
###Code
validate_ch_esr(df, column="esr")
###Output
_____no_output_____
###Markdown
Only DataFrame
###Code
validate_ch_esr(df)
###Output
_____no_output_____ |
PIA._Red_Neuronal_Convulosional/Modelo_3.ipynb | ###Markdown
**Producto Integrador de Aprendizaje** (Integrative Learning Project)

---

Topic: **Training a CNN using CIFAR-100**\
Artificial Intelligence; IB

> 1877436, Jesús Emmanuel Guerrero Cortez \
> 1943543, Haziel Jair Sánchez Barrón \
> 1378118, Víctor Manuel Castañeda de León \
> 1841262, Bryan David Garrido \
> 1799686, Jesús Quezada Oviedo
###Code
import tensorflow as tf
# Import the base framework and libraries
from tensorflow.keras import datasets, layers, models
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Activation
from tensorflow.keras.applications.inception_v3 import InceptionV3
# VGG16 with fine-tuning, although any model could be used
from keras.callbacks import ModelCheckpoint, EarlyStopping
from sklearn.metrics import accuracy_score
#____________________________________________________________________#
from datetime import datetime
import pandas as pd
import numpy as np
import pytz
import random
import matplotlib.pyplot as plt
# This array will be used as a log, although the values could just as well be written to a csv file.
historyarray = [0]
(train_images, train_labels), (test_images, test_labels) = datasets.cifar100.load_data(label_mode='coarse')
# Normalization (below) and import of the CIFAR-100 dataset (above), with test and training sets.
train_images, test_images = train_images / 255.0, test_images / 255.0
# Initially, we chose to drop the array of label strings, since it would amount to 100 classes.
# To avoid the tedium of copy-pasting, the attributes were processed from a csv file (comma-separated, str type) created for that purpose.
# In the end, to increase accuracy, the final dense layer was reduced to 20 nodes, i.e. the superclasses.
# The shape values of the training set will be used when working with the transfer learning model.
'''
CIFAR100_attributes = pd.read_csv('https://pastebin.com/raw/qgDaNggt', sep=',', header=None).astype(str).values.tolist()[0]
'''
Coarse_labels = ['aquatic_mammals', 'fish', 'flowers', 'food_containers', 'fruit_and_vegetables',
'household_electrical_devices', 'household_furniture', 'insects', 'large_carnivores',
'large_man-made_outdoor_things', 'large_natural_outdoor_scenes', 'large_omnivores_and_herbivores',
'medium_mammals', 'non-insect_invertebrates', 'people', 'reptiles', 'small_mammals', 'trees', 'vehicles_1', 'vehicles_2']
data_size, img_rows, img_cols, img_channels = train_images.shape
plt.figure(figsize=(10,10))
for i in range(16):
plt.subplot(4,4,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
# The CIFAR labels happen to be arrays,
# which is why you need the extra index
plt.xlabel(Coarse_labels[train_labels[i][0]])
plt.show()
# RGB images, 32x32x(3 channels); shallow (convolutional) layers
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.20))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.20))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
# Deep (dense) layers of the model
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(20))
model.summary()
# This cell is meant to save the best model based on its weights; it is only useful for fine-tuning
# ModelCheckpoint callback - save the best weights.
tl_checkpoint_1 = ModelCheckpoint(filepath='tl_model_v1.weights.best.hdf5',
save_best_only=True,
verbose=1)
# If the validation loss stops improving, training is stopped.
early_stop = EarlyStopping(monitor='val_loss',
patience=10,
restore_best_weights=True,
mode='min')
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# GPU acceleration (extremely useful). Background available at: https://colab.research.google.com/notebooks/gpu.ipynb#scrollTo=sXnDmXR7RDr2
# note: the optimizer instances below are constructed but never assigned or used;
# the model is compiled with the default 'adam' optimizer further down
tf.keras.optimizers.Adagrad(
learning_rate=0.00005, initial_accumulator_value=0.1, epsilon=1e-07, name="Adagrad"
)
tf.keras.optimizers.Adadelta(
learning_rate=0.001, rho=0.95, epsilon=1e-07, name="Adadelta"
)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
def train_model(model, epochs = 10, steps_per_epoch = 2, validation_steps = 1):
history = model.fit(train_images, train_labels, epochs = epochs, steps_per_epoch = steps_per_epoch, validation_data = (test_images, test_labels), validation_steps = validation_steps, callbacks=[tl_checkpoint_1, early_stop])
return(history)
SIMPLE_MODEL_history = train_model(model, 30, 150, 20)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
# Transfer learning: VGG16 trained on ImageNet; used without its top layer.
def init_VGG16_model(summary):
VGG16_MODEL=tf.keras.applications.VGG16(input_shape=(img_rows, img_cols, img_channels), include_top=False, weights='imagenet')
    # Make the conv layers trainable
VGG16_MODEL.trainable=True
dropout_layer = tf.keras.layers.Dropout(rate = 0.25)
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    # The CIFAR-100 top layer (a dense layer with 20 neurons and an activation for class determination) is stacked together with the sequential model of the transfer network
prediction_layer = tf.keras.layers.Dense(len(Coarse_labels),activation='sigmoid')
model = tf.keras.Sequential([VGG16_MODEL, dropout_layer, global_average_layer, prediction_layer])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.000001), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=["accuracy"])
if summary:
model.summary()
return model
VGG16_MODEL = init_VGG16_model(summary = True)
VGG16_MODEL_history = train_model(VGG16_MODEL, 15, 400, 40)
test_loss_VGG16, test_acc_VGG16 = model.evaluate(test_images, test_labels, verbose=2)
# Transfer learning: VGG16 trained on ImageNet; used without its top layer, but with fine-tuning added (see the report)
def init_VGG16_model_finetuning(summary, fine_tune=0):
VGG16_MODEL_FT=tf.keras.applications.VGG16(input_shape=(img_rows, img_cols, img_channels), include_top=False, weights='imagenet')
    # Defines how many layers are frozen during training. The layers of the convolutional
    # base switch between trainable and non-trainable depending on the fine_tune value.
if fine_tune > 0:
for layer in VGG16_MODEL_FT.layers[:-fine_tune]:
layer.trainable = False
else:
for layer in VGG16_MODEL_FT.layers:
layer.trainable = False
top_model = VGG16_MODEL_FT.output
top_model = Flatten(name="flatten")(top_model)
top_model = Dense(4096, activation='relu')(top_model)
top_model = Dense(1072, activation='relu')(top_model)
top_model = Dropout(0.2)(top_model)
    # The CIFAR-100 top layer (dense with 20 neurons, softmax activation so that
    # class probabilities sum to 1.0) is stacked on top of the transfer network
prediction_layer = Dense(len(Coarse_labels), activation='softmax')(top_model)
model = tf.keras.Model(inputs=VGG16_MODEL_FT.input, outputs=prediction_layer)
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.0001), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=["accuracy"])
if summary:
model.summary()
return model
VGG16_MODEL_FT = init_VGG16_model_finetuning(summary = True, fine_tune=4)
VGG16_MODEL_FT_history = train_model(VGG16_MODEL_FT, 15, 100, 10)
test_loss_VGG16_FT, test_acc_VGG16_FT = model.evaluate(test_images, test_labels, verbose=1)
# This cell was disabled because of how computationally demanding it is; FOLLOW THE STEPS BELOW IF YOU WANT TO WORK WITH THIS PRE-TRAINED MODEL.
# Working with Inception (a more modern CNN architecture than VGG16) would require adjusting the input shape of the CIFAR-100 data to at least 75x75x3.
# For that, we suggest using the following dataset, which has rescaled the images to 256x256 without causing excessive resource consumption on our machine.
# https://www.kaggle.com/ibraheemmoosa/cifar100-256x256
# If you keep using Colab, you need to import that dataset into the virtual console. We recommend the following link, which explains the upload step by step.
# https://www.kaggle.com/ibraheemmoosa/cifar100-256x256
# Once that is done, the loading point, the model's input_shape (256x256x3) and the hyperparameters (if necessary) need to be adjusted.
# Then this cell can be run. Note that the input dimensions have been predefined.
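# --- Illustrative sketch (not part of the original notebook): instead of the external
# --- 256x256 dataset, the in-memory 32x32 CIFAR-100 images could be upsampled on the fly
# --- inside a tf.data pipeline; the helper name and the 75x75 target (InceptionV3's
# --- minimum input size) are our own illustrative choices.
def make_inception_dataset(images, labels, target_size=(75, 75), batch_size=64):
    ds = tf.data.Dataset.from_tensor_slices((images.astype('float32'), labels))
    ds = ds.batch(batch_size)
    # resize each batch lazily to keep memory usage low
    ds = ds.map(lambda x, y: (tf.image.resize(x, target_size), y),
                num_parallel_calls=tf.data.AUTOTUNE)
    return ds.prefetch(tf.data.AUTOTUNE)
# Example usage (init_Inception_model would then need input_shape=(75, 75, 3)):
# train_ds = make_inception_dataset(train_images, train_labels)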
# Transfer learning: Inception (version 3) trained on ImageNet; used without its top layer, just like the previous pre-trained model.
def init_Inception_model(summary):
Inception_MODEL = InceptionV3(input_shape=(256, 256, 3), include_top=False, weights='imagenet')
    # Make the conv layers trainable
Inception_MODEL.trainable=True
dropout_layer = tf.keras.layers.Dropout(rate = 0.2)
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    # The CIFAR-100 top layer (dense with 20 neurons, softmax activation so that class probabilities sum to 1.0) is stacked together with the sequential model of the transfer network
prediction_layer = tf.keras.layers.Dense(len(Coarse_labels),activation='softmax')
model = tf.keras.Sequential([Inception_MODEL, dropout_layer, global_average_layer, prediction_layer])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.00005), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=["accuracy"])
if summary:
model.summary()
return model
Inception_MODEL = init_Inception_model(summary = True)
plt.plot(SIMPLE_MODEL_history.history['accuracy'], label='accuracy')
plt.plot(SIMPLE_MODEL_history.history['val_accuracy'], label = 'val_accuracy')
plt.plot(VGG16_MODEL_history.history['accuracy'], label='accuracy_(with VGG16)')
plt.plot(VGG16_MODEL_history.history['val_accuracy'], label = 'val_accuracy_(with VGG16)')
plt.plot(VGG16_MODEL_FT_history.history['accuracy'], label='accuracy_(with VGG16)/Fine-tuning')
plt.plot(VGG16_MODEL_FT_history.history['val_accuracy'], label = 'val_accuracy_(with VGG16)/Fine-tuning')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')
# Log of accuracy values for validation/testing
tz_MX = pytz.timezone('America/Mexico_City')
now = datetime.now(tz_MX)
current_time = now.strftime("%d/%m/%Y a las %H:%M:%S")
historyarray.append(f'{current_time}: {round((test_acc * 100), 2)}%')
print(test_acc, historyarray)
###Output
0.19259999692440033 [0, '18/11/2021 a las 13:14:47: 19.26%']
|
demo_monai_multimodal_ai4covid.ipynb | ###Markdown
Get Dataset

Download data
###Code
import os, shutil
# download
!echo "wget ... (TrainSet URL to be published by challenge organizer)"
!echo "wget ... (TestSet URL to be published by challenge organizer)"
# unzip
if not os.path.exists('./TrainSet'):
!unzip -q -o TrainSet.zip
else:
print('TrainSet already unzipped.')
if not os.path.exists('./TestSet'):
!unzip -q -o TestSet.zip
else:
print('TestSet already unzipped.')
print('Data ready.')
###Output
wget ... (TrainSet URL to be published by challenge organizer)
wget ... (TestSet URL to be published by challenge organizer)
TrainSet already unzipped.
TestSet already unzipped.
Data ready.
###Markdown
Pre-process images

Pre-processing includes resizing to (224,224) and image intensity normalization to a percentile range (with prior percentile-based outlier removal).
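Concretely, each image $I$ is normalized as

$$I' \;=\; \mathrm{clip}\!\left(\frac{I - P_{1}}{P_{99} - P_{1}},\; 0,\; 1\right),$$

where $P_{1}$ and $P_{99}$ are the 1st/99th intensity percentiles computed after discarding pixels outside the image's [0.1, 99.9] percentile range, followed by histogram equalization (see `normalize_image_robust` below).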
###Code
import pandas as pd
from glob import glob
from tqdm import tqdm
import numpy as np
import pandas as pd
import skimage
import skimage.transform
from skimage import io, exposure
def normalize_image_robust(img, pctl_low=0.1, pctl_high=99.9, clip=True,
mask=None, th_low=-np.Inf, th_high=np.Inf):
# mask and threshold pixel intensities before extracting a lower/upper bound
if mask is not None:
# mask (binary) might come from a lung segmentation algorithm, but is not used in this example
vec = img[mask]
else:
vec = img.ravel()
if isinstance(th_low, str):
th_low = np.percentile(img, float(th_low))
if isinstance(th_high, str):
th_high = np.percentile(img, float(th_high))
vec = vec[np.logical_and(vec>th_low, vec<th_high)]
# contrast stretching to lower/upper intensity bound
img_low = np.percentile(vec, pctl_low)
img_high = np.percentile(vec, pctl_high)
img = img-img_low
img = img/(img_high-img_low)
if clip:
img[img<0.0] = 0.0
img[img>1.0] = 1.0
# histogram equalization
img = exposure.equalize_hist(img)
return img
# filepath list of images
fl_train = glob('./TrainSet/*.png')
fl_test = glob('./TestSet/*.png')
to_png = False
to_npy = True
for fl, subpath in zip([fl_train, fl_test],
['TrainSet','TestSet']):
pn_src = os.path.join('.',subpath)
pn_dst_img = os.path.join('.',subpath+'_224_img')
pn_dst_npy = os.path.join('.',subpath+'_224_np')
if not os.path.exists(pn_dst_img):
os.makedirs(pn_dst_img)
if not os.path.exists(pn_dst_npy):
os.makedirs(pn_dst_npy)
for idx, filepath in enumerate(tqdm(fl)):
pn, fn = os.path.split(filepath)
ff_dst_img = os.path.join(pn_dst_img,fn)
ff_dst_npy = os.path.join(pn_dst_npy,fn.replace('.png','.npy'))
if (to_png and not os.path.exists(ff_dst_img)) or \
(to_npy and not os.path.exists(ff_dst_npy)):
ff_src = os.path.join(pn_src,fn)
img = io.imread(ff_src)
img_norm = normalize_image_robust(img, 1.0, 99.0, th_low='0.1', th_high='99.9')
img_out = skimage.transform.resize(img_norm, (224,224))
# train set has 4 90deg-ccw rotated images, test has none - unrotate 4 outliers in train set
if np.any([s in ff_src for s in ['P_1_60', 'P_1_163', 'P_694', 'P_829']]):
img_out = img_out.T
# write to disk, either as png and/or as npy
if to_png:
io.imsave(ff_dst_img, (img_out*255.0).astype('uint8'))
if to_npy:
np.save(ff_dst_npy, img_out.astype('float32'))
print('Image preprocessing (resize/normalize) done.')
from time import time
from datetime import datetime
from matplotlib import pylab as plt
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import RobustScaler, OneHotEncoder, LabelEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
df = pd.read_excel('./trainClinData.xls')
pd.set_option('display.max_columns', None)
df.describe(include='all')
df.head()
###Output
_____no_output_____
###Markdown
EHR pre-processing with Sklearn ColumnTransformer

Combines separate treatment of continuous random variables and categorical variables.

* **Continuous variables** get median-imputed and scaled robustly from the [1,99] percentile range to the [0,1] range.
* **Categorical variables** get one-hot encoded; missing values are treated as a category of their own.
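As a side note, here is a minimal sketch (assuming a recent scikit-learn, roughly ≥ 1.1, where `get_feature_names_out` is available for all the steps used here) of how the columns produced by the fitted `preprocessor` defined in the next cell could be inspected:

```python
# hypothetical inspection snippet, to be run after preprocessor.fit(df_train) below
feature_names = preprocessor.get_feature_names_out()
print(len(feature_names), "features after pre-processing")
print(feature_names[:5])  # e.g. one-hot columns such as "cat__Sex_F", "cat__Sex_M", ...
```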
###Code
# make a data split and train/test a linear classifier as a baseline, then a PyTorch MLP model for comparison
# define preprocessor for data columns
# normalize with numeric random variables
numeric_features = ["Age", "Temp_C", "DaysFever", "WBC", "RBC", "CRP", "Fibrinogen", "Glucose", "PCT", "LDH", "INR", "D_dimer", "Ox_percentage", "PaO2", "SaO2", "PaCO2", "pH"]
numeric_transformer = Pipeline(
steps=[("imputer", SimpleImputer(strategy="median")), ("scaler", RobustScaler(quantile_range=(1.0, 99.0)))]
)
# deal with categorical random variables - we encode missing values (NaNs) as a category on its own
categorical_features = ["Sex", 'PositivityAtAdmission'] + ['Cough','DifficultyInBreathing','CardiovascularDisease','IschemicHeartDisease','AtrialFibrillation','HeartFailure','Ictus','HighBloodPressure','Diabetes','Dementia','BPCO','Cancer','ChronicKidneyDisease','RespiratoryFailure','Obesity','Position']
categorical_transformer = Pipeline(steps=[("onehot", OneHotEncoder(handle_unknown="ignore"))])
# combine into ColumnTransformer
preprocessor = ColumnTransformer(
transformers=[
("cat", categorical_transformer, categorical_features),
("num", numeric_transformer, numeric_features),
("pass", "passthrough", []), # no columns suitable for passthrough in this dataset
]
)
# define preprocessor for label column (LabelEncoder)
le = LabelEncoder()
le.fit(['MILD', 'SEVERE'])
# train/test data via split (incl. dataframe re-indexing)
df_train, df_val = train_test_split(df, test_size=0.15, random_state=1)
df_train['index_orig'] = df_train.index.tolist()
df_train = df_train.reset_index(drop=True)
df_val['index_orig'] = df_val.index.tolist()
df_val = df_val.reset_index(drop=True)
# fit pre-processor only (!) to train data
preprocessor.fit(df_train)
# append pre-processed features to dataframes
df_train_proc = df_train.copy().assign(EHR=[x for x in preprocessor.transform(df_train)])
df_val_proc = df_val.copy().assign(EHR=[x for x in preprocessor.transform(df_val)])
# load test dataset and apply fitted preprocessor
df_test = pd.read_excel('./completeTestClinData.xls')
df_test['index_orig'] = df_test.index.tolist()
df_test = df_test.reset_index(drop=True)
df_test_proc = df_test.copy().assign(EHR=[x for x in preprocessor.transform(df_test)])
# fit classifers to train data (we ignore validation data here)
naive = np.argmax([np.sum(df_train.Prognosis=='MILD'), np.sum(df_train.Prognosis=='SEVERE')]) # majority class in training set
lr = LogisticRegression(C=1, max_iter=5000)
gbc = GradientBoostingClassifier(random_state=0)
lr.fit(preprocessor.transform(df_train), le.transform(df_train.Prognosis))
gbc.fit(preprocessor.transform(df_train), le.transform(df_train.Prognosis))
# predict on validation and test data
naive_val_preds = np.array([naive]*df_val.shape[0])
lr_val_preds = lr.predict(preprocessor.transform(df_val))
gbc_val_preds = gbc.predict(preprocessor.transform(df_val))
naive_test_preds = np.array([naive]*df_test.shape[0])
lr_test_preds = lr.predict(preprocessor.transform(df_test))
gbc_test_preds = gbc.predict(preprocessor.transform(df_test))
# evaluate/report baseline accuracies on validation and test set
print(f'\nBaseline accuracies for validation set:')
print(f'Validation accuracy naive (majority class vote): {accuracy_score(le.transform(df_val.Prognosis), naive_val_preds)}')
print(f'Validation accuracy for LogisticRegression: {accuracy_score(le.transform(df_val.Prognosis), lr_val_preds)}')
print(f'Validation accuracy for GradBoostedClassifier: {accuracy_score(le.transform(df_val.Prognosis), gbc_val_preds)}')
print(f'\nBaseline accuracies for test set:')
print(f'Test accuracy naive (majority class vote): {accuracy_score(le.transform(df_test.Prognosis), naive_test_preds)}')
print(f'Test accuracy for LogisticRegression: {accuracy_score(le.transform(df_test.Prognosis), lr_test_preds)}')
print(f'Test accuracy for GradBoostedClassifier: {accuracy_score(le.transform(df_test.Prognosis), gbc_test_preds)}')
# torch imports
import torch
from torch import nn
from torch.nn import Sequential, Linear, ReLU, Dropout, Softmax
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
# Datasets and DataLoader
import monai
from monai.data import PILReader
from monai.data import Dataset, CSVDataset
from monai.data import DataLoader
# ready-made MONAI transforms
from monai.transforms import (
AddChanneld,
AsChannelFirstd,
Compose,
Lambdad,
LoadImaged,
NormalizeIntensityd,
RandAdjustContrastd,
RandAffined,
RandFlipd,
Resized,
RepeatChanneld,
ScaleIntensityRanged,
ScaleIntensityRange,
ToTensord,
Transform
)
# imports for custom MONAI transforms
from monai.utils.enums import TransformBackends
# imports for custom MONAI dictionary transforms
from typing import Any, Callable, Dict, Hashable, List, Mapping, Optional, Sequence, Tuple, Union
from monai.config import DtypeLike, KeysCollection
from monai.config.type_definitions import NdarrayOrTensor
from monai.transforms import RandomizableTransform
from monai.transforms.transform import MapTransform
from monai.transforms.utils import is_positive
from monai.utils import ensure_tuple, ensure_tuple_rep
# network models
from monai.networks.nets import TorchVisionFCModel
monai.config.print_config()
# At the beginning of this notebook, we pre-processed the images in the
# ai4covid dataset, due to their large variety of image intensities.
# Instead of performing this step once, and offline, we can also perform
# such transformations with custom MONAI transforms, on-the-fly, as
# implemented here.
class RobustImageScalerAI4C(Transform):
"""
Apply robust image intensity normalization from percentile-bounds to range [0...1].
Normalization happens via contrast stretching to percentile-bounds,
followed by histogram equalization. Prior to computing the percentile-bounds,
there is an option to mask the intensity values with a dense mask, and/or
intensity bounds (either absolute, or again via percentiles).
Default params tuned to train-set of Ai4covid challenge.
Args:
pctl_low: lower percentile-bound. Default: 0.1
pctl_high: upper percentile-bound. Default: 99.9
clip: Clip to [0...1] after normalization, or allow outlier values beyond that
(False would results in output pixel intensities <=0.0 and >=1.0).
Default: True
mask: Image mask (binary, same shape as image) to restrict the image regions from
which intensity percentiles are computed. Default: None (entire image is used)
th_low: Intensity bound to create an ad-hoc mask for image intensities
Can be an absolute intensity value (float) or a percentile (string
in range ['0.0'...'100.0']). Default: -np.Inf
th_high: Upper intensity bound for ad-hoc mask (analoguos to th_low).
Default: np.Inf
"""
def __init__(self, pctl_low=0.1, pctl_high=99.9, clip=True,
mask=None, th_low=-np.Inf, th_high=np.Inf) -> None:
self.pctl_low = pctl_low
self.pctl_high = pctl_high
self.clip = clip
self.mask = mask
self.th_low=th_low
self.th_high=th_high
        # th_low/th_high given as percentile strings (e.g. '99.9') are resolved per image
        # inside normalize_image_robust, since no image is available at construction time
def __call__(self, img: np.ndarray) -> np.ndarray:
"""
normalize Xray images robustly
"""
ret = self.normalize_image_robust(img)
return ret
    def normalize_image_robust(self, img, mask=None):
        # resolve percentile-string thresholds (e.g. '99.9') against the current image
        th_low, th_high = self.th_low, self.th_high
        if isinstance(th_low, str):
            th_low = np.percentile(img, float(th_low))
        if isinstance(th_high, str):
            th_high = np.percentile(img, float(th_high))
        # mask and threshold pixel intensities before extracting a lower/upper bound
        # (a mask passed as an argument takes precedence over the one stored at init)
        if mask is None:
            mask = self.mask
        if mask is not None:
            # mask (binary) might come from a lung segmentation algorithm, but is not used in this example
            assert mask.shape == img.shape
            vec = img[mask]
        else:
            vec = img.ravel()
        vec = vec[np.logical_and(vec > th_low, vec < th_high)]
# contrast stretching to lower/upper intensity bound
img_low = np.percentile(vec, self.pctl_low)
img_high = np.percentile(vec, self.pctl_high)
img = img-img_low
img = img/(img_high-img_low)
if self.clip:
img[img<0.0] = 0.0
img[img>1.0] = 1.0
# histogram equalization
img = exposure.equalize_hist(img)
return img
class RobustImageScalerAI4Cd(MapTransform):
"""
    Dictionary-based wrapper of :py:class:`RobustImageScalerAI4C`.
"""
def __init__(self, keys: KeysCollection, allow_missing_keys: bool = False,
pctl_low=0.1, pctl_high=99.9, clip=True,
mask=None, th_low=-np.Inf, th_high=np.Inf,
) -> None:
"""
Args:
fitted_transformer: a transformer for tabular data that has already been fit
keys: keys of the corresponding items to be transformed.
See also: :py:class:`monai.transforms.compose.MapTransform`
offset: offset value to shift the intensity of image.
allow_missing_keys: don't raise exception if key is missing.
"""
super().__init__(keys, allow_missing_keys)
self.scaler = RobustImageScalerAI4C(pctl_low=pctl_low, pctl_high=pctl_high, clip=clip,
mask=mask, th_low=th_low, th_high=th_high)
def __call__(self, data: Mapping[Hashable, np.ndarray]) -> Dict[Hashable, np.ndarray]:
d = dict(data)
for key in self.key_iterator(d):
d[key] = self.scaler(d[key])
return d
class RandInvertImaged(RandomizableTransform, MapTransform):
"""
Dictionary transform: Inverts normalized image intensities from [0...1] to [1...0].
"""
#backend = RandScaleIntensity.backend
def __init__(
self,
keys: KeysCollection,
prob: float = 0.1,
seed: int = None,
allow_missing_keys: bool = False,
) -> None:
"""
Args:
keys: keys of the corresponding items to be transformed.
See also: :py:class:`monai.transforms.compose.MapTransform`
factors: factor range to randomly scale by ``v = v * (1 + factor)``.
if single number, factor value is picked from (-factors, factors).
prob: probability of rotating.
(Default 0.1, with 10% probability it returns a rotated array.)
dtype: output data type, if None, same as input image. defaults to float32.
allow_missing_keys: don't raise exception if key is missing.
"""
MapTransform.__init__(self, keys, allow_missing_keys)
RandomizableTransform.__init__(self, prob)
if seed is not None:
self.set_random_state(seed)
self.inverter = ScaleIntensityRange(0.0, 1.0, b_min=1.0, b_max=0.0)
def set_random_state(
self, seed: Optional[int] = None, state: Optional[np.random.RandomState] = None
) -> "RandScaleIntensityd":
super().set_random_state(seed, state)
return self
def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
d = dict(data)
# decide whether to randomly invert or not
self.randomize(None)
if not self._do_transform:
return d
# invert
for key in self.key_iterator(d):
d[key] = self.inverter(d[key])
return d
transforms_train = Compose([
Lambdad(keys="ImageFile", func=lambda filepath: os.path.join(os.getcwd(),
'TrainSet_224_np',
filepath.replace('.png','.npy'))), # append base path to relative image path
LoadImaged(keys="ImageFile", image_only=True),
AddChanneld(keys="ImageFile"),
#AsChannelFirstd(keys="ImageFile"),
#RobustImageScalerAI4Cd(keys="ImageFile"),
#Resized(keys="ImageFile", spatial_size=(224,224)),
RandFlipd(keys="ImageFile", prob=0.5, spatial_axis=1),
RandInvertImaged(keys="ImageFile",prob=0.5,seed=0),
RandAdjustContrastd(keys="ImageFile", prob=0.5, gamma=(0.5, 2.0)),
RandAffined(keys="ImageFile",
prob=0.5,
rotate_range=np.pi*30.0/180.0,
shear_range=[0.15,0.15],
translate_range=[[-0.1*224, 0.1*224]]*2,
scale_range=0.2,
spatial_size=[224,224],
padding_mode="zeros",
cache_grid=True,
as_tensor_output=True,
device=None),
RepeatChanneld(keys="ImageFile",repeats=3),
NormalizeIntensityd(keys="ImageFile", subtrahend=np.array([0.485, 0.456, 0.406]), divisor=np.array([0.229, 0.224, 0.225]), channel_wise=True),
#ToTensord(keys="ImageFile"),
]
)
# significantly faster version if only EHR is used (leaves out the image I/O transforms)
transforms_train_ehr_only = Compose([
Lambdad(keys="ImageFile", func=lambda filepath: os.path.join(os.getcwd(),
'TrainSet_224_np',
filepath.replace('.png','.npy'))), # append base path to relative image path
]
)
transforms_val = Compose([
Lambdad(keys="ImageFile", func=lambda filepath: os.path.join(os.getcwd(),
'TrainSet_224_np',
filepath.replace('.png','.npy'))), # append base path to relative image path
LoadImaged(keys="ImageFile", image_only=True),
AddChanneld(keys="ImageFile"),
RepeatChanneld(keys="ImageFile",repeats=3),
NormalizeIntensityd(keys="ImageFile", subtrahend=np.array([0.485, 0.456, 0.406]), divisor=np.array([0.229, 0.224, 0.225]), channel_wise=True),
ToTensord(keys="ImageFile"),
]
)
transforms_test = Compose([
Lambdad(keys="ImageFile", func=lambda filepath: os.path.join(os.getcwd(),
'TestSet_224_np',
filepath.replace('.png','.npy'))), # append base path to relative image path
LoadImaged(keys="ImageFile", image_only=True), # , reader=PILReader(converter=lambda image: image.convert("RGB")
AddChanneld(keys="ImageFile"),
RepeatChanneld(keys="ImageFile",repeats=3),
NormalizeIntensityd(keys="ImageFile", subtrahend=np.array([0.485, 0.456, 0.406]), divisor=np.array([0.229, 0.224, 0.225]), channel_wise=True),
ToTensord(keys="ImageFile"),
]
)
# check our transformation pipeline
data_train = CSVDataset(src=df_train_proc,
col_names=['ImageFile', 'EHR', 'Prognosis'],
transform=transforms_train)
batch_size = 8
check_loader = DataLoader(dataset=data_train, batch_size=batch_size, shuffle=True)
t0 = time()
blob = next(iter(check_loader))
print(f'Elapsed time (N={batch_size}): {time()-t0:.3f} sec.')
tiles = int(np.ceil(np.sqrt(batch_size)))
fig = plt.figure(figsize=(tiles*5,tiles*5))
for idx, img in enumerate(blob['ImageFile']):
fig.add_subplot(tiles,tiles, idx+1)
plt.imshow(img[0])
plt.show()
class EhrOnlyNet(nn.Module):
def __init__(self, num_classes=0, input_dim=70, hidden_sizes=[128,128], dropout_rate=0.2):
super().__init__()
self.num_classes = num_classes
self.hidden_sizes = hidden_sizes
self.input_dim = input_dim
self.dropout_rate = dropout_rate
# create MLP
layers = [Dropout(dropout_rate), Linear(input_dim, hidden_sizes[0]), ReLU()]
for i in range(1,len(hidden_sizes)):
layers += [Dropout(dropout_rate), Linear(hidden_sizes[i-1], hidden_sizes[i]), ReLU()]
# add classification head if needed
if num_classes>0:
layers += [Dropout(dropout_rate), Linear(hidden_sizes[-1], num_classes), Softmax(dim=1)]
self.mlp = Sequential(*layers)
def forward(self, ehr):
'''Forward pass'''
return self.mlp(ehr)
class ImageOnlyNet(nn.Module):
def __init__(self, num_classes=0, hidden_size=512, torchvision_model_id="resnet50",
freeze_backbone=True, freeze_gradcam=False, dropout_rate=0.2):
super().__init__()
self.num_classes = num_classes
self.hidden_size = hidden_size
self.torchvision_model_id = torchvision_model_id
self.freeze_backbone = freeze_backbone
self.freeze_gradcam = freeze_gradcam
        # the backbone's final fc layer is replaced so that it embeds into hidden_size dimensions
self.cnn = TorchVisionFCModel(self.torchvision_model_id,
num_classes=self.hidden_size,
use_conv=True,
pretrained=True)
# Avoid catastrophic forgetting --> freeze all CNN layers except last fc layer
if self.freeze_backbone:
if self.freeze_gradcam:
                # Freeze the backbone, keeping the Grad-CAM target layers trainable
                for name, param in self.cnn.named_parameters():
if "features.7" not in name:
param.requires_grad = False
for param in self.cnn.fc.parameters():
param.requires_grad = True
else:
for param in self.cnn.parameters():
param.requires_grad = False
for param in self.cnn.fc.parameters():
param.requires_grad = True
layers = [Dropout(dropout_rate), Linear(hidden_size,hidden_size), ReLU(),
Dropout(dropout_rate), Linear(hidden_size,hidden_size), ReLU()]
self.mlp = Sequential(*layers)
# add another classification layer fc2
if self.num_classes>0:
self.fc1 = nn.Linear(hidden_size,self.num_classes)
self.softmax = Softmax(dim=1)
def forward(self, images):
'''Forward pass'''
x = F.relu(self.cnn(images))
x = self.mlp(x.squeeze())
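        # caveat: .squeeze() also drops the batch axis when the batch size is 1,
        # which would break Softmax(dim=1) below; use x.flatten(1) if single-sample batches are expected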
if self.num_classes>0:
x = self.softmax(self.fc1(x.squeeze()))
return x
class MultimodalNet(nn.Module):
def __init__(self, num_classes=0,
kwargs_cnn={},
kwargs_ehr={},
hidden_sizes=[128,128],
dropout_rate=0.2):
super().__init__()
# Vision model
self.cnn = ImageOnlyNet(**kwargs_cnn)
# EHR model
self.ehr = EhrOnlyNet(**kwargs_ehr)
# multimodal MLP model
layers = [Dropout(dropout_rate),
Linear(self.cnn.hidden_size+self.ehr.hidden_sizes[-1],hidden_sizes[0]),
ReLU()]
for i in range(1,len(hidden_sizes)):
layers += [Dropout(dropout_rate), Linear(hidden_sizes[i-1], hidden_sizes[i]), ReLU()]
if num_classes>0:
layers += [Dropout(dropout_rate), Linear(hidden_sizes[-1], num_classes), Softmax(dim=1)]
self.mlp = Sequential(*layers)
def forward(self, images, ehr):
# single-modal forward passes
x1 = self.cnn(images)
x2 = self.ehr(ehr)
# MLP on multimodal embeddings
x = torch.cat((x1.squeeze(), x2), dim=1)
x = self.mlp(x)
return x
device='cuda:0'
# ehr
model_ehr = EhrOnlyNet(input_dim=70,num_classes=2).to(device)
t0 = time()
test_ehr = model_ehr(blob['EHR'].type(torch.float32).to(device))
print(f'Time elapsed EHR: {time()-t0:.5f}')
# img
model_img = ImageOnlyNet(num_classes=2,
torchvision_model_id="resnet18",
freeze_gradcam=False).to(device)
t0 = time()
test_img = model_img(blob['ImageFile'].to(device))
print(f'Time elapsed Image: {time()-t0:.5f}')
# mm
model_mm = MultimodalNet(num_classes=2,
kwargs_cnn={'hidden_size': 512,
'torchvision_model_id': "resnet18"},
kwargs_ehr={'input_dim': 70},
hidden_sizes=[128,128]).to(device)
t0 = time()
test_mm = model_mm(blob['ImageFile'].to(device), blob['EHR'].type(torch.float32).to(device))
print(f'Time elapsed Multimodal: {time()-t0:.5f}')
###Output
Time elapsed EHR: 0.00100
Time elapsed Image: 0.00650
Time elapsed Multimodal: 0.00667
###Markdown
Typical torch training and validation loop
###Code
def train(train_loader, model, criterion, optimizer, device, data_keys, label_key,
label_encoder=None, summary_writer=None, summary_writer_offset=0):
'''
Function for the training step of the training loop
'''
model.train()
running_loss = 0
all_predictions = []
all_targets = []
for idx, batch_data in enumerate(train_loader):
optimizer.zero_grad()
data = [batch_data[key].type(torch.float32).to(device) for key in data_keys]
targets = batch_data[label_key]
if label_encoder is not None:
targets = label_encoder.transform(targets)
targets = torch.from_numpy(targets).to(device)
# Forward pass
predictions = model(*data)
loss = criterion(predictions, targets)
all_predictions.append(predictions)
all_targets.append(targets)
if summary_writer is not None:
summary_writer.add_scalar("Loss/train", loss, global_step=summary_writer_offset+idx)
running_loss += loss.item() * data[0].size(0)
# Backward pass
loss.backward()
optimizer.step()
epoch_loss = running_loss / len(train_loader.dataset)
return model, optimizer, epoch_loss, all_predictions, all_targets
def validate(valid_loader, model, criterion, device, data_keys, label_key,
label_encoder=None, summary_writer=None, summary_writer_offset=0):
'''
Function for the validation step of the training loop
'''
model.eval()
running_loss = 0
all_predictions = []
all_targets = []
with torch.no_grad():
for batch_data in valid_loader:
data = [batch_data[key].type(torch.float32).to(device) for key in data_keys]
targets = batch_data[label_key]
if label_encoder is not None:
targets = label_encoder.transform(targets)
targets = torch.from_numpy(targets).to(device)
# Forward pass
predictions = model(*data)
loss = criterion(predictions, targets)
running_loss += loss.item() * data[0].size(0)
all_predictions.append(predictions)
all_targets.append(targets)
epoch_loss = running_loss / len(valid_loader.dataset)
if summary_writer is not None:
summary_writer.add_scalar("Loss/valid", epoch_loss, global_step=summary_writer_offset)
return model, epoch_loss, all_predictions, all_targets
def training_loop(model, criterion, optimizer, train_loader, valid_loader, epochs, device, label_key,
data_keys=None, label_encoder=None, summary_writer=None,
text_embedder=None, print_every=1):
'''
Function defining the entire training loop
'''
if data_keys is None:
print('Training loop requires list of target data keys in batch dictionary.')
return None, None, None
# set objects for storing metrics
train_losses = []
valid_losses = []
best_valid_loss = 1e10
best_predictions = None
best_model = None
best_epoch = 0
# Train model
for epoch in tqdm(range(0, epochs)):
# training
#print(f'Epoch {epoch:3d} - Training:')
model, optimizer, train_loss, all_predictions, all_targets = train(train_loader, model, criterion, optimizer, device,
data_keys=data_keys, label_key=label_key,
label_encoder=label_encoder,
summary_writer=summary_writer,
summary_writer_offset=len(train_loader)*epoch)
train_losses.append(train_loss)
# validation
if epoch % print_every == (print_every - 1):
with torch.no_grad():
#print('Validating:')
model, valid_loss, all_predictions, all_targets = validate(valid_loader, model, criterion, device,
data_keys=data_keys, label_key=label_key,
label_encoder=label_encoder,
summary_writer=summary_writer,
summary_writer_offset=len(train_loader)*epoch)
valid_losses.append(valid_loss)
if valid_loss<best_valid_loss:
best_valid_loss = valid_loss
best_model = model
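                    # note: this stores a reference to the live model, not a snapshot; training keeps
                    # updating it, so deepcopy the model (or save its state_dict) here if the weights
                    # of this exact epoch are needed later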
best_epoch = epoch
best_predictions = all_predictions
#print(f'{datetime.now().time().replace(microsecond=0)} --- '
# f'Epoch: {epoch}\t'
# f'Valid loss: {valid_loss:.4f}\t')
return best_model, optimizer, (train_losses, valid_losses), best_predictions, best_epoch
DEVICE = 'cuda:0'
RANDOM_SEED = 0
LEARNING_RATE = 0.00001
BATCH_SIZE = 64
N_EPOCHS = 2000
N_CLASSES = 2
monai.utils.set_determinism(seed=0, additional_settings=None)
torch.manual_seed(RANDOM_SEED)
outputs = {}
for model_tag in ['ehr', 'image', 'multimodal']:
if model_tag=='image':
model_data_keys = ['ImageFile']
model = ImageOnlyNet(num_classes=N_CLASSES,
torchvision_model_id="resnet50",
freeze_backbone=True).to(DEVICE)
transforms = transforms_train
elif model_tag=='ehr':
model_data_keys = ['EHR']
model = EhrOnlyNet(num_classes=N_CLASSES).to(DEVICE)
transforms = transforms_train_ehr_only
else: # model_tag=='multimodal':
model_data_keys = ['ImageFile', 'EHR']
model = MultimodalNet(num_classes=N_CLASSES,
kwargs_cnn={'freeze_backbone': True,
'torchvision_model_id': "resnet50"}).to(DEVICE)
transforms = transforms_train
data_train = CSVDataset(src=df_train_proc,
col_names=['ImageFile', 'EHR', 'Prognosis'],
transform=transforms )
loader_train = DataLoader(dataset=data_train,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=0,
#prefetch_factor=16
)
data_valid = CSVDataset(src=df_val_proc,
col_names=['ImageFile', 'EHR', 'Prognosis'],
transform=transforms_val)
loader_valid = DataLoader(dataset=data_valid,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=0)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
label_key = 'Prognosis'
writer = SummaryWriter(f'runs/model_{model_tag}')
best_model, optimizer, losses, best_predictions, best_epoch = training_loop(model, criterion, optimizer, loader_train, loader_valid,
N_EPOCHS, DEVICE, label_key,
data_keys=model_data_keys, label_encoder=le,
summary_writer=writer)
outputs[model_tag] = {
'model': best_model,
'losses': losses,
'predictions': best_predictions,
'epoch': best_epoch
}
writer.flush()
writer.close()
# print some results
tgts = le.transform(df_val.Prognosis)
print(f'Validation accuracy naive (majority class vote): {accuracy_score(tgts, naive_val_preds):.5f}')
for model_tag in ['ehr', 'image', 'multimodal']:
preds = []
for v in outputs[model_tag]['predictions']:
preds += list(v.argmax(axis=1).detach().cpu().numpy())
print(f'Validation accuracy for {model_tag}: {accuracy_score(tgts, preds):.5f}')
# start tensorboard via
# tensorboard --logdir=./runs --bind_all
###Output
100%|██████████| 2000/2000 [07:03<00:00, 4.72it/s]
100%|██████████| 2000/2000 [2:03:04<00:00, 3.69s/it]
100%|██████████| 2000/2000 [1:54:58<00:00, 3.45s/it]
###Markdown
Run inference on test set
###Code
def infer(loader, model, device, data_keys, label_key, label_encoder=None):
'''
Function for evaluation on the test set
'''
model.eval()
all_predictions = []
all_targets = []
with torch.no_grad():
for batch_data in loader:
data = [batch_data[key].type(torch.float32).to(device) for key in data_keys]
targets = batch_data[label_key]
if label_encoder is not None:
targets = label_encoder.transform(targets)
targets = torch.from_numpy(targets).to(device)
# Forward pass
predictions = model(*data)
all_predictions.append(predictions)
all_targets.append(targets)
return model, all_predictions, all_targets
for model_tag in ['ehr', 'image', 'multimodal']:
model = outputs[model_tag]['model']
if model_tag=='image':
model_data_keys = ['ImageFile']
transforms = transforms_test
elif model_tag=='ehr':
model_data_keys = ['EHR']
transforms = transforms_train_ehr_only
else: # model_tag=='multimodal':
model_data_keys = ['ImageFile', 'EHR']
transforms = transforms_test
data_test = CSVDataset(src=df_test_proc,
col_names=['ImageFile', 'EHR', 'Prognosis'],
transform=transforms_test)
loader_test = DataLoader(dataset=data_test,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=0)
label_key = 'Prognosis'
model, all_predictions, all_targets = infer(loader_test, model, device,
data_keys=model_data_keys, label_key=label_key,
label_encoder=le)
outputs[model_tag]['predictions_test'] = all_predictions
###Output
_____no_output_____
###Markdown
Report results on test set
###Code
from sklearn.metrics import roc_auc_score, classification_report
for tmp_df, tag, pred_key in zip([df_val, df_test],
['Validation', 'Test'],
['predictions','predictions_test']):
print('\n********************')
print(f'Results {tag} set')
print('********************')
print('\n*** Prediction accuracy from different models:')
tgts = le.transform(tmp_df.Prognosis)
naive_preds = np.array([naive]*tmp_df.shape[0])
print(f'{tag} accuracy naive (majority class vote): {accuracy_score(tgts, naive_preds):.5f}')
for model_tag in ['ehr', 'image', 'multimodal']:
preds = []
for v in outputs[model_tag][pred_key]:
preds += list(v.argmax(axis=1).detach().cpu().numpy())
print(f'{tag} accuracy for {model_tag} (epoch: {outputs[model_tag]["epoch"]}): {accuracy_score(tgts, preds):.5f}')
# Print accuracy/auc/sensitivity/specificity of the multimodal model
print('\n*** Further prediction statistics for multimodal model:')
preds = []
for v in outputs[model_tag][pred_key]:
preds += list(v.argmax(axis=1).detach().cpu().numpy())
preds = np.array(preds)
preds_proba = []
for v in outputs[model_tag][pred_key]:
preds_proba += list(v[:,1].squeeze().detach().cpu().numpy())
print(f'Accuracy: {accuracy_score(tgts, preds):.3f}')
print(f'ROC AUC: {roc_auc_score(tgts, preds_proba):.3f}')
# Note that in binary classification, recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”.
print(classification_report(tgts, preds, target_names=le.classes_))
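    # Explicit sensitivity/specificity for the note above (assumes the label encoded as 1 is the positive class)
    from sklearn.metrics import confusion_matrix
    tn, fp, fn, tp = confusion_matrix(tgts, preds).ravel()
    print(f'Sensitivity (recall of positive class): {tp / (tp + fn):.3f}')
    print(f'Specificity (recall of negative class): {tn / (tn + fp):.3f}')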
# save the three models
# Specify a path
pn = './results_v2_withTorchVisionImageNormalization'
if not os.path.exists(pn):
os.makedirs(pn)
for model_tag in ['ehr', 'image', 'multimodal']:
fn = model_tag+'.pt'
ff = os.path.join(pn,fn)
# Save
torch.save(outputs[model_tag]['model'].state_dict(), ff)
# Load
#model = Net()
#model.load_state_dict(torch.load(PATH))
#model.eval()
###Output
_____no_output_____ |
machine_learning/jupyter_101/Pandas basics.ipynb | ###Markdown
Importing Pandas
###Code
import pandas as pd
pd.__version__
###Output
_____no_output_____
###Markdown
 Pandas `Series` Class A Pandas Series object is like a generalized array where the indexes are explicit, don't have to be contiguous or increasing, and in fact don't have to be integers. The object also behaves like a specialised dictionary where the keys are in order (though not necessarily sorted). Implicit indexes
###Code
# Simple series
data = pd.Series([0.25, 0.5, 0.75, 1.0])
data
data.index # a pd.Index object
data.values # a NumPy array
###Output
_____no_output_____
###Markdown
Explicit indexesThe difference between a Pandas Series and a NumPy array is that the Pandas object has an **explicit** index. Indexes need not be contiguous or even monotonic. Here, we specify an explicit index which are not contiguous or uniformly increasing. Series will preserve the order in which we specified the index.
###Code
data = pd.Series([0.25, 0.5, 0.75, 1.0], index=[2,5,3,7])
data
data.index
###Output
_____no_output_____
###Markdown
Explicit, Non-integer indexes Any data type can be used as an index.
###Code
data = pd.Series([0.25, 0.5, 0.75, 1.], index=['a', 'b', 'c', 'd'])
data
data.index
###Output
_____no_output_____
###Markdown
 We can construct a Series object directly from a Python dictionary. The dict's keys become the Series' index. In older pandas versions the keys are **sorted** (note below: `Indonesia` comes before `USA` in the Series); newer versions preserve the dictionary's insertion order.
###Code
pop_dict = {'China': 1388817000,
'India': 1325460000,
'USA': 326309000,
'Indonesia': 261890900}
pop = pd.Series(pop_dict)
pop
pop['China']
###Output
_____no_output_____
###Markdown
Constructing Series
###Code
# From a list
pd.Series([1,2,3,4])
# From a scalar, repeated to fill the specified index
pd.Series(10, index=[100, 200, 300])
# From a dictionary, indexes are sorted dictionary keys
pd.Series({2:'a', 1:'b', 3:'c'})
# Index can be explicitly set with a dictionary
pd.Series({2:'a', 1:'b', 3:'c'}, index=[3,2]) # Note: indexes in order as specified, not sorted
###Output
_____no_output_____
###Markdown
 Pandas `DataFrame` Class A Pandas DataFrame is like a generalized two-dimensional matrix, where the row indices are flexible like a Series and the columns have names. It can be thought of as a collection of Series objects, which share an index.
###Code
pop_dict = {'China': 1388817000,
'India': 1325460000,
'USA': 326309000,
'Indonesia': 261890900}
pop = pd.Series(pop_dict)
pop
area_dict = {'China': 9596960,
'India': 3287590,
'Indonesia': 1905000,
'USA': 9631418}
area = pd.Series(area_dict)
area
countries = pd.DataFrame({'pop': pop, 'area': area})
countries
countries.index
countries.columns
countries.values
###Output
_____no_output_____
###Markdown
 DataFrame as a specialized dictionary A DataFrame also behaves like a dictionary. Its keys are *column names*. This is different from a numpy array, where `data[0]` would return the first row. In a DataFrame you index by columns: `data['colname']`.
###Code
countries['area']
countries['pop']
###Output
_____no_output_____
###Markdown
Constructing DataFrames
###Code
# From a list of dicts; each element is a row
lst_dicts = [ {'a': i, 'b': i+1} for i in range(3)]
lst_dicts
pd.DataFrame(lst_dicts)
# From a NumPy 2-dimensional array
import numpy as np
np_array = np.random.rand(3,2)
np_array
pd.DataFrame(np_array, columns=['foo', 'bar'], index=['a', 'b', 'c'])
# From a single Series object
pd.DataFrame(area, columns=['area'])
# From a dictionary of Series objects
pd.DataFrame({'pop': pop, 'area': area})
###Output
_____no_output_____
###Markdown
Pandas `Index` Class
###Code
idx = pd.Index([2, 3, 5, 7, 11])
idx
idx[1]
idx[::3]
(idx.size, idx.shape, idx.ndim, idx.dtype)
# Indexes are immutable
# idx[1] = 0 ## Will raise error
###Output
_____no_output_____
###Markdown
Indexes as ordered sets
###Code
idx1 = pd.Index([1, 3, 5, 7, 9])
idx2 = pd.Index([2, 3, 5, 7, 11])
idx1 & idx2 # Intersection
idx1 | idx2 # union
idx1 ^ idx2 # symmetric difference, or xor
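# In newer pandas these operators act element-wise on Index objects; the explicit set methods are
# idx1.intersection(idx2), idx1.union(idx2) and idx1.symmetric_difference(idx2)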
###Output
_____no_output_____
###Markdown
Indexing `Series` and `DataFrame` objects Series indexing
###Code
data = pd.Series({'a':1, 'b':2, 'c':3}, dtype='f8')
data
###Output
_____no_output_____
###Markdown
Series as a dictionary
###Code
data.keys()
data['b']
# Series can be treated as a dictionary
data['e'] = 1.25
data
# Series as a collection of keys
'a' in data
list(data.items()) # the list() is important; items() returns a lazy iterator
###Output
_____no_output_____
###Markdown
 Series as a one-dimensional array Pandas-style indexing.
###Code
data['a':'c'] # the `stop` values is included
###Output
_____no_output_____
###Markdown
Python style indexing.
###Code
data[0:2] # the `stop` value is not inclueded
# masking, like numpy
data[(data > 1) & (data < 2)]
# fancy indexing, like numpy
data[['a', 'b']] # but you can't use a 2-d array, like [['a'], ['b']]
###Output
_____no_output_____
###Markdown
 Explicit index or Python style index? If the Series object has explicit *integer* indexes, then `data[i]` will use the explicit index, while `data[i:j]` will use Python style indexes. This can be confusing.
###Code
data = pd.Series(['a', 'b', 'c', 'd', 'e'], index=[1,2,3,4,5])
data
data[1] # explicit index
data[1:3] # implicit, 0-based, Python style index; {1:'a'} not included in output because its implicit index is 0.
###Output
_____no_output_____
###Markdown
Choosing Pandas vs NumPy indexing: `loc`, `iloc`
###Code
# loc always uses explicit indexes, and includes the final index value
data.loc[1:3]
# iloc always uses the position-based index, and excludes the final index value
data.iloc[1:3]
###Output
_____no_output_____
###Markdown
DataFrame indexing
###Code
pop_dict = {'China': 1388817000,
'India': 1325460000,
'USA': 326309000,
'Indonesia': 261890900}
pop = pd.Series(pop_dict)
area_dict = {'China': 9596960,
'India': 3287590,
'Indonesia': 1905000,
'USA': 9631418}
area = pd.Series(area_dict)
data = pd.DataFrame({'pop': pop, 'area': area})
data
###Output
_____no_output_____
###Markdown
DataFrame as a dictionary DataFrame can be treated as a dictionary, mapping column names to a Series (not an array of raw values!):
###Code
data['area']
type(data['area'])
###Output
_____no_output_____
###Markdown
If the column names are simple strings, it can be convenient to access them as attributes:
###Code
data.area
###Output
_____no_output_____
###Markdown
The two styles access the same underlying object:
###Code
data.area is data['area']
###Output
_____no_output_____
###Markdown
 **Unless** the column name is the same as a DataFrame method name. In that case, the attribute refers to the method.
###Code
data.pop is data['pop']
type(data.pop)
###Output
_____no_output_____
###Markdown
 To be safe, at least for assignment, use the dictionary style access; otherwise you may end up over-writing a method attribute and creating hard-to-debug bugs.
###Code
data['density'] = data['pop'] / data['area']
data
###Output
_____no_output_____
###Markdown
DataFrame as a 2-dimensional array Some array-like operations can be done on the DataFrame.
###Code
data.T # Transpose
###Output
_____no_output_____
###Markdown
 But DataFrame itself is not a pure array. For example, in `data[idx]`, `idx` has to be a column name; in an array, it would be the row index. The underlying data is a NumPy array.
###Code
data.values
###Output
_____no_output_____
###Markdown
 We can use all the indexing methods supported by NumPy on the values, but we lose the nice row and column labels.
###Code
data.values[0]
data.values[:,1]
###Output
_____no_output_____
###Markdown
Choosing Pandas vs NumPy indexing: `loc`, `iloc` We should use the `loc` and `iloc` attributes, instead. These will preserve row and column labels.
###Code
data.iloc[:3, :2]
data.loc['India', :'pop']
###Output
_____no_output_____
###Markdown
`iloc` and `loc` support all NumPy style indexing methods.
###Code
data.loc[data.density > 100, ['density', 'pop']]
###Output
_____no_output_____
###Markdown
 Note: indexing vs slicing/masking without `loc` or `iloc` When indexing a DataFrame directly, a single index refers to a column name
###Code
data['area']
###Output
_____no_output_____
###Markdown
But a *slice* refers to rows:
###Code
data['China':'India']
###Output
_____no_output_____
###Markdown
 *Direct* masking operations also refer to rows:
###Code
data[data.density > 100]
###Output
_____no_output_____
###Markdown
DataFrame does not support fancy indexing directly. Use `loc` or `iloc` for that.
###Code
# data[[1, 2, 3]] ## will raise error
###Output
_____no_output_____
###Markdown
 Operations NumPy *ufuncs* work on Pandas Series and DataFrame objects (not merely on the underlying NumPy array holding the data). Index Preservation Unary operations preserve the *index* and *column labels* in the output.
###Code
rng = np.random.RandomState(42)
# A Series
ser = pd.Series(rng.randint(0, 10, 4))
ser
# A DataFrame
df = pd.DataFrame(rng.randint(0, 10, (3, 4)), columns=['A', 'B', 'C', 'D'])
df
# Operation on series
np.exp(ser)
# operation on DataFrame
np.sin(df * np.pi / 4)
df.mean() # axis=0 by default, that is, column-wise operation
df.mean(axis=1) # row-wise operation
###Output
_____no_output_____
###Markdown
 Index Alignment Binary operations on Series or DataFrames *align* the indices. Index Alignment in Series
###Code
area = pd.Series({'Alaska': 1723337, 'Texas': 695662}, name='area')
pop = pd.Series({'California': 38332521, 'Texas': 26448192})
pop / area
###Output
_____no_output_____
###Markdown
Note the row indices are the union of the indices of the two inputs. Any index for which one or the other input does not have an entry is marked with `NaN`. This is how missing values are handled for built-in Python operators.
###Code
area.index | pop.index
###Output
_____no_output_____
###Markdown
For handling missing data differently, object methods need to be used in place of the operators.
###Code
pop.divide(area, fill_value=0)
###Output
_____no_output_____
###Markdown
Index Alignment in DataFrame
###Code
df1 = pd.DataFrame(rng.randint(0, 20, (2, 2)), columns=['A', 'B'], index=['a', 'b'])
df1
df2 = pd.DataFrame(rng.randint(0, 10, (2, 3)), columns=['B', 'A', 'C'], index=['b', 'a'])
df2
df1 + df2 # Note: indexes are aligned regardless of order, and are sorted in the output
###Output
_____no_output_____
###Markdown
 Instead of using built-in Python operators, we can use the equivalent Pandas methods, which give us control over how missing values are handled.
###Code
df1.add(df2, fill_value=1000)
###Output
_____no_output_____
###Markdown
Operations between DataFrame and Series
###Code
A = rng.randint(10, size=(3, 4)) # A NumPy array
A
A - A[0] # Operation between a 2-dimensional and 1-dimensional NumPy array
df = pd.DataFrame(A, columns=list('QRST'))
df
df.stack() # convert to hierarchically indexed series
df - df.iloc[0] # Operation between DataFrame and Series. Note: row-wise operations
df.subtract(df['R'], axis=0) # Column-wise operations: needs explicit method call instead of using operator
# Different sized DataFrame and Series
halfrow = df.iloc[0, ::2]
halfrow
df - halfrow
###Output
_____no_output_____
###Markdown
 Missing Data Missing data can be handled by using Python `None`, or by `NaN` for floating point data (only). `None` and `NaN` in NumPy Having `None` in a NumPy array means that the array's dtype is `object` and all operations will take place in Python, not in optimized native code.
###Code
for t in ['object', 'int32', 'float32']:
print("dtype =", t)
arr = np.arange(1E6, dtype=t) # create an array of type t, containing 1 million numbers
%timeit arr.sum()
###Output
dtype = object
20.4 ms ± 197 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
dtype = int32
562 µs ± 5.84 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
dtype = float32
209 µs ± 3.13 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
 Some operations, especially aggregations, don't work with `None` values.
###Code
vals1 = np.array([1, None, 5, 6])
vals1, vals1.dtype
# vals1.sum() ## Cannot handle `None`, will raise TypeError
###Output
_____no_output_____
###Markdown
Arrays containing only floating point values can use the `NaN` value.
###Code
vals2 = np.array([1., np.nan, 5., 6.])
vals2, vals2.dtype
###Output
_____no_output_____
###Markdown
Now aggregations won't throw an error, but won't contain anything useful either.
###Code
vals2.sum()
###Output
_____no_output_____
###Markdown
 There are `NaN`-friendly versions of the aggregation methods available as NumPy functions.
###Code
np.nansum(vals2)
arr1 = np.arange(1e6)
arr2 = np.arange(1e6); arr2[0] = np.nan
%timeit arr1.sum()
%timeit np.nansum(arr1)
%timeit arr2.sum() # note: will return nan
%timeit np.nansum(arr2)
###Output
1.51 ms ± 47.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
The `NaN` friendly versions are ~5x slower. `None` and `NaN` in Pandas SeriesPandas will convert `None` to `NaN` if all other values are floats. Ints
###Code
pd.Series([1, np.nan, 2, None])
###Output
_____no_output_____
###Markdown
 Pandas will automatically typecast non-floats to floats to accommodate `NaN`s when possible.
###Code
pd.Series([1, 2, 3, None])
x = pd.Series(range(2), dtype=int)
x # Note dtype
x[0] = None
x # Note new dtype
###Output
_____no_output_____
###Markdown
Bools
###Code
y = pd.Series([True, False, True])
y
y[0] = None # None is treated as False, not as a missing value
y
y[0] = np.nan # NaN is not treated as False; it is a true missing value
y
###Output
_____no_output_____
###Markdown
Objects
###Code
z = pd.Series(['A', 'B', 'C'])
z
z[0] = None
z
z[0] = np.nan
z
z.isnull()
z.notnull()
z.dropna()
z.fillna(value='OMG')
z.fillna(method='bfill') # Replace NaN's with objects from higher index
###Output
_____no_output_____
###Markdown
 `None` and `NaN` in Pandas DataFrame DataFrame's `dropna` method has options to specify the axis, and also to specify how many values must be null for a row or col to be dropped, or how many non-null values must exist for a row or col with a null value to be preserved.
###Code
df = pd.DataFrame([[1, np.nan, np.nan],
[2, np.nan, np.nan],
[3, 4, np.nan]
])
df
df.isnull()
df.dropna() # drop all rows that contain even a single NaN
df.dropna(axis=1) # drop columns, not rows
df.dropna(axis=1, how='all') # drop columns, if all their values are NaN
df.dropna(axis=1, how='any') # same as dropna(axis=1)
df.dropna(thresh=2) # drop rows with NaN, keeping those that have 2 or more non-NaN values
df.fillna(value=pd.Series([1000, 2000, 3000]))
# df.fillna(value=pd.Series([1000, 2000, 3000]), axis=1) ## Not implemented yet
df2 = pd.DataFrame([[1000, 1000, 1000],
[2000, 2000, 2000],
[3000, 3000, 3000]
])
df.fillna(value=df2)
df.fillna(method='ffill', axis=1) # Fill-forward along rows
###Output
_____no_output_____
###Markdown
Pandas MultiIndex: Hierarchical Indexing
###Code
index = pd.MultiIndex.from_product([['California', 'Texas', 'New York'], [2000, 2010]], names=['State', 'Year'])
index
pop = pd.Series([12345, 23456, 34567, 45678, 56789, 67890], index=index, name='Population')
pop
pop['California']
pop[:, 2010]
df = pop.unstack() # convert to 2-dimensional DataFrame
df
df.stack() # convert 2-dimensional DataFrame to a hierarchically indexed Series
# Multiple Series
df = pd.DataFrame({'Total': pop, 'Under18': [2345, 3456, 4567, 5678, 6789, 7890]})
df
f_u18 = df['Under18'] / df['Total']
f_u18
f_u18.unstack()
###Output
_____no_output_____
###Markdown
 Creating MultiIndex explicitly You can create a MultiIndex object and give it to the `index` argument of a Series or DataFrame. There are 4 ways of creating MultiIndexes.
###Code
# From tuples
pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])
# From arrays
pd.MultiIndex.from_arrays([list('aabb'), [1, 2, 1, 2]])
# From a cross-product, when every first level has every second level
pd.MultiIndex.from_product([['a', 'b'], [1, 2]])
# Explicitly using `levels` and `labels` arguments to constructor
pd.MultiIndex(levels=[['a', 'b'], [1, 2]],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]])
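# (in newer pandas versions the `labels` argument has been renamed to `codes`)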
###Output
_____no_output_____
###Markdown
 Creating MultiIndexes implicitly You can make Series and DataFrame compute a MultiIndex automatically
###Code
# From arrays
data = np.random.rand(4, 2)
pd.DataFrame(data,
index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns=['data1', 'data2'])
# From data, where the keys are tuples
pd.Series({('a', 1): 12345,
('a', 2): 23456,
('b', 1): 34567,
('b', 2): 45678})
###Output
_____no_output_____
###Markdown
MultiIndexed Columns
###Code
rows = pd.MultiIndex.from_product([[2010, 2011], [1, 2]], names=['Year', 'Visit#'])
cols = pd.MultiIndex.from_product([['Bob', 'Amy', 'Sue'], ['B.P.', 'Temp']], names=['Subject', 'Type'])
data = np.round(np.random.randn(4, 6), 1)
health_df = pd.DataFrame(data, index=rows, columns=cols)
health_df
###Output
_____no_output_____
###Markdown
Column Indexing
###Code
health_df['Bob']
health_df['Amy', 'Temp']
###Output
_____no_output_____
###Markdown
Tuples can be used too.
###Code
health_df[('Amy', 'Temp')]
###Output
_____no_output_____
###Markdown
 Row Indexing To index rows, `loc` or `iloc` must be used. Top-level indexes can be specified directly.
###Code
health_df.loc[2010]
###Output
_____no_output_____
###Markdown
Multi-level indexes can be specified using tuples.
###Code
health_df.loc[(2010, 1)]
###Output
_____no_output_____
###Markdown
Row and Column Indexing
###Code
health_df.loc[(2010, 1), 'Bob']
health_df.loc[(2010, 1), ('Amy', 'Temp')]
###Output
_____no_output_____
###Markdown
 Slices with `loc` For slices to work with `loc`, the rows or columns being sliced must be **ordered**. Column slicing:
###Code
# health_df.loc[(2010, 1), 'Amy':] ## Will raise error.
health_df.sort_index(axis=1, inplace=True)
health_df
health_df.loc[(2010, 1), 'Bob':] # Won't raise error now
health_df.loc[:, 'Bob']
###Output
_____no_output_____
###Markdown
Second level slices need an explicit slice object, because creating slices in tuples leads to a syntax error.
###Code
# health_df.loc[(:, 1), 'Bob'] # (:, 1) => syntax error
idx = pd.IndexSlice
health_df.loc[idx[:, 1], 'Bob']
idx = pd.IndexSlice
health_df.loc[idx[:, 1], idx[:, 'Temp']]
###Output
_____no_output_____
###Markdown
 Slices with `iloc` Same as `loc`, except that all indices are integer positions and the stop value is excluded. A brief sketch using the `health_df` from above is shown below.
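###Code
# Hedged sketch (not from the original notebook): purely positional slicing on the
# MultiIndexed health_df defined above; both axes are end-exclusive.
health_df.iloc[:2, :2]      # first two rows, first two columns
health_df.iloc[1:3, 0:6:2]  # positional slices with a step also work
###Output
_____no_output_____
###Markdown
 Creating MultiIndexes from 2-dimensional DataFrames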
###Code
# A 2-D DataFrame, such as one you'd get by reading a CSV file
df = pd.DataFrame([['California', 2000, 12345],
['California', 2010, 23456],
['New York', 2000, 34567],
['New York', 2010, 45678],
['Texas', 2000, 56789],
['Texas', 2010, 67890]],
columns=['State', 'Year', 'Population'])
df
# Convert to MultiIndex
df.set_index(['State', 'Year'], inplace=True)
df
# And back
df.reset_index()
###Output
_____no_output_____
###Markdown
 Simple Data Aggregation Pandas' `Series` and `DataFrame` objects have numerous methods to compute aggregates, such as the sum, min, and max of quantities. All the aggregations available in NumPy are also available in Pandas. Given a `DataFrame`:
###Code
health_df
###Output
_____no_output_____
###Markdown
The `describe()` method gives several important aggregations in one shot, to give us a feel of the data.
###Code
health_df.describe()
###Output
_____no_output_____
###Markdown
We can compute individual aggregations:
###Code
health_df.mean()
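# A hedged extra example: several aggregations at once via .agg()
health_df.agg(['min', 'mean', 'max'])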
###Output
_____no_output_____
###Markdown
 Choosing the level in hierarchical indexes With hierarchical indexes, we can choose which level to preserve (aggregating over the others).
###Code
health_df.mean(level='Visit#')
health_df.mean(level='Year')
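# Note: newer pandas deprecates the `level=` argument to reductions; the equivalent groupby form is
health_df.groupby(level='Year').mean()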
###Output
_____no_output_____
###Markdown
We can specify the axis along which to apply the aggregation; we can also have successive aggregations.
###Code
health_df.mean(level='Year').mean(level='Type', axis=1)
###Output
_____no_output_____
###Markdown
 Concatenation We can concat two `DataFrame` or `Series` objects.
###Code
ser1 = pd.Series(['A', 'B', 'C'], index=[1,2,3])
ser2 = pd.Series(['D', 'E', 'F'], index=[4,5,6])
print(ser1); print(ser2); print(pd.concat([ser1, ser2]));
###Output
1 A
2 B
3 C
dtype: object
4 D
5 E
6 F
dtype: object
1 A
2 B
3 C
4 D
5 E
6 F
dtype: object
###Markdown
 We can also concatenate `DataFrame` objects. First, a convenience function:
###Code
# A function to generate DataFrames quickly
def make_df(cols, ind):
"""Quickly make a DataFrame for tests."""
data = {c: [str(c) + str(i) for i in ind] for c in cols}
return pd.DataFrame(data, ind)
make_df('ABC', range(3))
df1 = make_df('AB', [1, 2])
df2 = make_df('AB', [3, 4])
print(df1); print(df2); print(pd.concat([df1, df2]))
###Output
A B
1 A1 B1
2 A2 B2
A B
3 A3 B3
4 A4 B4
A B
1 A1 B1
2 A2 B2
3 A3 B3
4 A4 B4
###Markdown
 Handling Duplicate Indices Since Pandas indices are explicit, the indices of the objects to be concatenated may *overlap*. The default behaviour of `concat` is to preserve the indices in the specified order.
###Code
x = make_df('AB', [0, 1]); x
y = make_df('AB', [1, 2]); y
pd.concat([x, y]) # 1 repeated!
###Output
_____no_output_____
###Markdown
 Catching duplicates as errors We can tell Pandas to verify that the indices in the result do not overlap.
###Code
try:
pd.concat([x, y], verify_integrity=True) # will raise ValueError
except ValueError as e:
print("ValueError: ", e)
###Output
ValueError: Indexes have overlapping values: [1]
###Markdown
 Ignoring the specified indices We can get Pandas to generate new integer indices in the result, ignoring the indices in the inputs.
###Code
pd.concat([x, y], ignore_index=True)
###Output
_____no_output_____
###Markdown
 Adding MultiIndex keys We can ask Pandas to use hierarchically indexed results. We do this by specifying labels for the input data sources using the `keys` option. Pandas will then nest the indices of a data source under a top level index of the specified keys.
###Code
print(x); print(y); print(pd.concat([x, y], keys=['x', 'y']))
###Output
A B
0 A0 B0
1 A1 B1
A B
1 A1 B1
2 A2 B2
A B
x 0 A0 B0
1 A1 B1
y 1 A1 B1
2 A2 B2
###Markdown
 Handling different columns - concat with join If the input `DataFrame` objects have different columns, the default behaviour of `concat` is to do an *outer*-join like operation, producing a result that has the union of columns. It will fill missing fields with `NaN` values.
###Code
df1 = make_df('ABC', [1,2]); df1
df2 = make_df('BCD', [3,4]); df2
pd.concat([df1, df2])
###Output
_____no_output_____
###Markdown
 Using inner join We can tell Pandas to keep only the columns that are common to both inputs; that is, do an *inner*-join.
###Code
pd.concat([df1, df2], join='inner')
###Output
_____no_output_____
###Markdown
 Specifying the index of the columns explicitly We can force Pandas to use a specific list of index objects. Here, we specify the column indices of the first input.
###Code
df1.columns
pd.concat([df1, df2], join_axes=[df1.columns])
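# Note: `join_axes` was removed in newer pandas; an equivalent is to reindex the result, e.g.
pd.concat([df1, df2]).reindex(columns=df1.columns)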
###Output
_____no_output_____
###Markdown
 `append` as a shortcut to `concat` Concatenating dataframes and series in the default way is common, so `append` is a shortcut for that. `append` does not modify the original objects but creates a new one. It is less efficient than `concat`, especially when concatenating multiple objects, and recent pandas versions deprecate it in favour of `pd.concat`.
###Code
df1 = make_df('AB', [0, 1])
df2 = make_df('AB', [1, 2])
df1.append(df2, ignore_index=True)
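# Equivalent concat form (the recommended spelling in recent pandas, where append is deprecated):
pd.concat([df1, df2], ignore_index=True)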
###Output
_____no_output_____
###Markdown
 Merges Pandas offers high-performance in-memory join and merge operations, like database `JOIN`s, via the `pd.merge` function.
###Code
df1 = pd.DataFrame({'employee': ['Bob', 'Lisa', 'George', 'Sue'],
'department': ['Accounting', 'Engineering', 'Engineering', 'HR']})
df1
df2 = pd.DataFrame({'name': ['Bob', 'George', 'Sue', 'Lisa'],
'hire_date': [2001, 2010, 2009, 1005]})
df2
df3 = pd.DataFrame({'department': ['Accounting', 'Engineering', 'HR'],
'supervisor': ['Alex', 'Guido', 'Carla']})
df3
df4 = pd.DataFrame({'department': ['Accounting', 'Accounting', 'Engineering', 'Engineering', 'HR'],
'skill': ['Math', 'Excel', 'Excel', 'Programming', 'organization']})
df4
###Output
_____no_output_____
###Markdown
 One-to-one joins This is the simplest case:
###Code
df1.merge(df2, left_on='employee', right_on='name')
###Output
_____no_output_____
###Markdown
We can get rid of the redundant column.
###Code
df1.merge(df2, left_on='employee', right_on='name').drop('name', axis=1)
###Output
_____no_output_____
###Markdown
 One-to-Many joins A one-to-many join occurs when one of the two data frames contains duplicate values in the key column. Note also that Pandas automatically detects columns with the same name in both `DataFrame`s and uses them as the key column.
###Code
df1.merge(df3)
###Output
_____no_output_____
###Markdown
 Many-to-Many joins When key columns in both inputs contain duplicate values, we get a many-to-many join.
###Code
df1.merge(df4)
###Output
_____no_output_____
###Markdown
 Joining on an index instead of column value The first data frame could have been like this:
###Code
df1a = df1.set_index('employee'); df1a
###Output
_____no_output_____
###Markdown
To join this to `df2`, we have to use indices on `df1` to match column values on `df2`. We can use `left_index=True` (or `right_index=True`) to tell Pandas to use indices instead of column values for the left or right argument.
###Code
df1a.merge(df2, left_index=True, right_on='name')
# Or even
df2.merge(df1a, left_on='name', right_index=True).set_index('name')
###Output
_____no_output_____
###Markdown
 When both inputs are keyed on the indices If both arguments are to be joined by their indices, we need to specify both `left_index` and `right_index` as `True`. The `join` method is a shorthand for this.
###Code
df2a = df2.set_index('name'); df2a
df1a.merge(df2a, left_index=True, right_index=True)
###Output
_____no_output_____
###Markdown
**Using `join`**:
###Code
df1a.join(df2a)
###Output
_____no_output_____
###Markdown
 Inner, Outer, Left and Right joins When the key columns of the two inputs do not match exactly, it becomes important to specify which keys are kept (and where missing values appear) using the `how` keyword, which takes the values `outer`, `inner`, `left`, `right`.
###Code
df6 = pd.DataFrame({'name': ['Mary', 'Joseph', 'Peter'],
'food': ['Bread', 'Cheese', 'Fish']})
df6
df7 = pd.DataFrame({'name': ['Mary', 'Joseph', 'Paul'],
'drink': ['Wine', 'Beer', 'Mead']})
df7
df6.merge(df7) # default how=inner
df6.merge(df7, how='outer')
df6.merge(df7, how='left')
df6.merge(df7, how='right')
###Output
_____no_output_____
###Markdown
 Overlapping column names When the two inputs have columns with conflicting names (that is, the columns have different meanings but happen to have the same name), you can ask Pandas to generate unique column names using the `suffixes` option to `merge`. Also, since more than one column now has the same name in the inputs, we have to specify the name of the key column to join on. (By default Pandas will use a composite key of all columns that share a name across the inputs.)
###Code
df8 = pd.DataFrame({'name': ['Mary', 'Joseph', 'Peter'],
'likes': ['Bread', 'Cheese', 'Fish']})
df8
df9 = pd.DataFrame({'name': ['Mary', 'Joseph', 'Paul'],
'likes': ['Wine', 'Beer', 'Mead']})
df9
df8.merge(df9, on='name', suffixes=['_eat', '_drink'])
###Output
_____no_output_____
###Markdown
GroupBy: Aggregation, Filtering, Transform and grouping
###Code
import seaborn as sns
planets = sns.load_dataset('planets')
planets.shape
planets.head()
decade = 10 * (planets['year'] // 10)
decade = decade.astype('str') + "s"
decade.name = 'decade'
planets.groupby(['method', decade])['number'].sum().unstack().fillna(0)
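# Hedged extra examples for the filtering/transform part of the heading above:
# keep only detection methods with more than 30 discoveries ...
planets.groupby('method').filter(lambda g: len(g) > 30).shape
# ... and a per-group transform that broadcasts each group's median back to every row
planets.groupby('method')['orbital_period'].transform('median').head()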
###Output
_____no_output_____ |
violence.ipynb | ###Markdown
###Code
%cd "/content/drive/MyDrive/projects/violence"
%ls
#!unzip "dataset.zip"
%ls
import numpy as np
from glob import glob
import os
from torchvision import datasets
from torchvision import datasets
import torchvision.transforms as transforms
import torch
import numpy as np
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
batch_size = 10
num_workers = 0
data_dir = 'dataset'
train_dir = os.path.join(data_dir, 'train/')
valid_dir = os.path.join(data_dir, 'val/')
standard_normalization = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
data_transforms = {'train': transforms.Compose([transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
standard_normalization]),
'val': transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
standard_normalization])}
train_data = datasets.ImageFolder(train_dir, transform=data_transforms['train'])
valid_data = datasets.ImageFolder(valid_dir, transform=data_transforms['val'])
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
num_workers=num_workers,
shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_data,
batch_size=batch_size,
num_workers=num_workers,
shuffle=False)
loaders_scratch = {
'train': train_loader,
'valid': valid_loader}
torch.cuda.is_available()
loaders_transfer = loaders_scratch.copy()
use_cuda = torch.cuda.is_available()
import torch.nn as nn
import torchvision.models as models
## TODO: Specify model architecture
model_transfer = models.resnet50(pretrained=True)
for param in model_transfer.parameters():
param.requires_grad = False
model_transfer.fc = nn.Linear(2048, 2, bias=True)
fc_parameters = model_transfer.fc.parameters()
for param in fc_parameters:
param.requires_grad = True
if use_cuda:
model_transfer = model_transfer.cuda()
model_transfer
import torch.optim as optim
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.fc.parameters(), lr=0.001)
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# initialize weights to zero
optimizer.zero_grad()
output = model(data)
# calculate loss
loss = criterion(output, target)
# back prop
loss.backward()
# grad
optimizer.step()
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
if batch_idx % 100 == 0:
print('Epoch %d, Batch %d loss: %.6f' %
(epoch, batch_idx + 1, train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss < valid_loss_min:
torch.save(model.state_dict(), save_path)
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
valid_loss_min = valid_loss
# return trained model
return model
train(100, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'final_transfer.pt')
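# The call above leaves the final-epoch weights in model_transfer; to evaluate with the best
# (lowest validation loss) checkpoint instead, reload it before predicting:
model_transfer.load_state_dict(torch.load('final_transfer.pt'))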
class_names = loaders_transfer['train'].dataset.classes
print(class_names)
from PIL import Image
import torchvision.transforms as transforms
def load_input_image(img_path):
image = Image.open(img_path).convert('RGB')
prediction_transform = transforms.Compose([transforms.Resize(size=(224, 224)),
transforms.ToTensor(),
standard_normalization])
# discard the transparent, alpha channel (that's the :3) and add the batch dimension
image = prediction_transform(image)[:3,:,:].unsqueeze(0)
return image
def predict_image(model, class_names, img_path):
# load the image and return the predicted breed
img = load_input_image(img_path)
model = model.cpu()
model.eval()
idx = torch.argmax(model(img))
return class_names[idx]
import matplotlib.pyplot as plt
%matplotlib inline
def run_app(img_path):
img = Image.open(img_path)
plt.imshow(img)
plt.show()
prediction = predict_image(model_transfer, class_names, img_path)
return prediction
run_app("/content/drive/MyDrive/projects/violence/dataset/train/safe/49152357-group-of-casual-people-social-gathering-concept.jpg")
#---------#
#---------#
import torch
from PIL import Image
import torchvision.transforms as transforms
filepath = "final_transfer.pt"
# the checkpoint stores a state_dict, so load it into the existing architecture;
# load_state_dict returns a keys report, not the model, so keep a handle to the model itself
model_transfer.load_state_dict(torch.load(filepath, map_location='cpu'))
finalmodel = model_transfer.cpu()
standard_normalization = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
def load_input_image(img_path):
image = Image.open(img_path).convert('RGB')
prediction_transform = transforms.Compose([transforms.Resize(size=(224, 224)),
transforms.ToTensor(),
standard_normalization])
image = prediction_transform(image)[:3,:,:].unsqueeze(0)
return image
def predict_image(model, class_names, img_path):
# load the image and return the predicted breed
img = load_input_image(img_path)
#model = model.cpu()
model.eval()
idx = torch.argmax(model(img))
return ["",""][idx]
import matplotlib.pyplot as plt
%matplotlib inline
def run_app(img_path):
img = Image.open(img_path)
plt.imshow(img)
plt.show()
    prediction = predict_image(finalmodel, class_names, img_path)  # class_names from the training loader above
return prediction
###Output
_____no_output_____ |
Projects in Python with Scikit-Learn- XGBoost- Pandas- Statsmodels- etc./Loan prediction (SVC).ipynb | ###Markdown
 Data description & Problem statement: The dataset is related to a mortgage loan, and the challenge is to predict the approval status of each loan (Approved/Rejected). Among all industries, the insurance domain has one of the largest uses of analytics & data science methods. * The dataset is imbalanced. The data has 615 rows and 13 columns. * This is a classification problem: I will predict whether a loan gets approved or not. Workflow:- Load the dataset, and define the required functions (e.g. for detecting the outliers)- Data Cleaning/Wrangling: Manipulate outliers, missing data or duplicate values, Encode categorical variables, etc.- Split data into training & test parts (utilize the training part for training & hyperparameter tuning of the model, and the test part for the final evaluation of the model) Model Training:- Build an initial SVM model, and evaluate it via a C-V approach- Use grid-search along with the C-V approach to find the best hyperparameters of the SVM model: Find the best SVM model (Note: I've utilized the SMOTE technique via the imblearn toolbox to synthetically over-sample the minority category and even out the class imbalance.) Model Evaluation: - Evaluate the best SVM model with optimized hyperparameters on the Test Dataset, by calculating: - AUC score: 0.87 - Confusion matrix - ROC curve - Precision-Recall curve - Average precision: 0.92
###Code
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
%matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Function to remove outliers (all rows) by Z-score:
def remove_outliers(X, y, names, thresh=3):
    # collect the unique row indices that are outliers in any of the given columns
    drop_idx = set()
    for name in names:
        rows = X.index[np.abs(X[name] - X[name].mean()) >= (thresh * X[name].std())]
        drop_idx.update(rows)
    drop_idx = np.array(list(drop_idx))
    X.drop(drop_idx, axis=0, inplace=True)
    y.drop(drop_idx, axis=0, inplace=True)
    print('number of outliers removed : ', len(drop_idx))
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/Loan prediction/train_loanPrediction.csv')
# To Shuffle the data:
np.random.seed(42)
df=df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.drop('Loan_ID', axis=1, inplace=True)
L_c=['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Credit_History', 'Property_Area', 'Loan_Status' ]
L_n=['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term']
df.head(3)
df_selected=df[['Credit_History', 'ApplicantIncome', 'LoanAmount', 'CoapplicantIncome' , 'Property_Area', 'Loan_Status']]
#df_selected['LoanAmount'].fillna(value=df['LoanAmount'].mean(), inplace=True)
df_selected.dropna( axis=0, inplace=True)
df_selected.shape
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
for i in [ 'Property_Area', 'Loan_Status' ]:
encode_text_index(df_selected, i)
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
#for i in ['Credit_History' , 'Property_Area']:
# encode_text_dummy(df_selected, i)
df_selected.head(3)
X=df_selected.drop(['Loan_Status'], axis=1)
y=df_selected['Loan_Status']
# We initially devide data into training & test folds: We do the Grid-Search only on training part
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
remove_outliers(X_train, y_train, ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount'], thresh=5)
from sklearn.preprocessing import StandardScaler, MinMaxScaler, PolynomialFeatures
scalor_X=MinMaxScaler().fit(X_train)
X_train=scalor_X.transform(X_train)
X_test=scalor_X.transform(X_test)
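# Note: the scaler is fit on the training split only and merely applied to the test split,
# which avoids leaking test-set statistics into model selection.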
# We build the Initial Model & Cross-Validation:
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
model=SVC(random_state=42)
kfold=StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores=cross_val_score(model, X_train, y_train, cv=kfold, scoring="roc_auc")
print(scores, "\n")
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
# Grid-Serach for the best model parameters:
# Rough Search (Round 1)
from sklearn.model_selection import GridSearchCV
param={'kernel':['rbf'], 'C': [0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 1, 5, 10, 50, 100, 300, 500, 700, 1000, 1e4],
'gamma':[0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 20, 50, 100, 1000, 1e4 ]}
kfold=StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
grid_search=GridSearchCV(SVC(class_weight='balanced', probability=True), param, cv=kfold, scoring="roc_auc", n_jobs=-1)
grid_search.fit(X_train, y_train)
# Grid-Search report:
G=pd.DataFrame(grid_search.cv_results_)
G.sort_values("rank_test_score").head(3)
print("Best parameters: ", grid_search.best_params_)
print("Best validation accuracy: %0.2f (+/- %0.2f)" % (np.round(grid_search.best_score_, decimals=2), np.round(G.loc[grid_search.best_index_,"std_test_score" ], decimals=2)))
print("Test score: ", np.round(grid_search.score(X_test, y_test),2))
h=G[["param_C", "param_gamma", "mean_test_score"]].pivot_table(index="param_C", columns="param_gamma", values="mean_test_score")
sns.heatmap(h, annot=True)
from sklearn.metrics import roc_curve, auc, confusion_matrix, classification_report
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
class_names=["0", "1"]
# Compute confusion matrix
cm = confusion_matrix(y_test, grid_search.predict(X_test))
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, class_names, title='Normalized confusion matrix')
plt.show()
# Classification report:
report=classification_report(y_test, grid_search.predict(X_test))
print(report)
# ROC curve & auc:
from sklearn.metrics import precision_recall_curve, roc_curve, roc_auc_score, average_precision_score
fpr, tpr, thresholds=roc_curve(np.array(y_test), grid_search.decision_function(X_test) , pos_label=1)
roc_auc=roc_auc_score(np.array(y_test), grid_search.decision_function(X_test))
plt.figure()
plt.step(fpr, tpr, color='darkorange', lw=2, label='ROC curve (auc = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', alpha=0.4, lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.plot([cm_normalized[0,1]], [cm_normalized[1,1]], 'or')
plt.show()
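# Optional addition (not part of the original analysis): choose an operating threshold on the
# decision function by maximizing Youden's J statistic (J = TPR - FPR) along the ROC curve.
youden_j = tpr - fpr
best_threshold = thresholds[np.argmax(youden_j)]
print("Decision-function threshold maximizing Youden's J: %0.3f" % best_threshold)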
# Precision-Recall trade-off:
precision, recall, thresholds=precision_recall_curve(y_test,grid_search.predict_proba(np.array(X_test))[:, 1], pos_label=1)
ave_precision=average_precision_score(y_test,grid_search.predict_proba(np.array(X_test))[:, 1])
plt.step(recall, precision, color='navy')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.001])
plt.ylim([0, 1.02])
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(ave_precision))
plt.plot([cm_normalized[1,1]], [cm[1,1]/(cm[1,1]+cm[0,1])], 'ob')
plt.show()
###Output
_____no_output_____ |
code/obsolete/Detection of Pneumonia from Chest X-Ray Images 1.0.0.1.ipynb | ###Markdown
Summary
Author : Anjana Tiha
Project Name : Detection of Pneumonia from Chest X-Ray Images using Convolutional Neural Network, and Transfer Learning.
Description :
1. Detected Pneumonia from Chest X-Ray images by retraining pretrained model “InceptionV3” with 5856 images of X-ray (1.15GB).
2. For retraining, removed output layers, froze first few layers and fine-tuned model for two new label classes (Pneumonia and Normal).
3. Attained testing accuracy 83.44% and loss 0.42.
Method :
Tools/Library : Python, Keras, PyTorch, TensorFlow
Version History : 1.0.0.0
Current Version : 1.0.0.0
Last Update : 11.30.2018
Comments : Please use Anaconda editor for convenience of visualization.

Code
GitHub Link : Detection of Pneumonia from Chest X-Ray Images(GitHub)
GitLab Link : Detection of Pneumonia from Chest X-Ray Images(GitLab)
Portfolio : Anjana Tiha's Portfolio

Dataset
Dataset Name : Chest X-Ray Images (Pneumonia)
Dataset Link : Chest X-Ray Images (Pneumonia) Dataset (Kaggle) : Chest X-Ray Images (Pneumonia) Dataset (Original Dataset)
Original Paper : Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning (Daniel S. Kermany, Michael Goldbaum, Wenjia Cai, M. Anthony Lewis, Huimin Xia, Kang Zhang) https://www.cell.com/cell/fulltext/S0092-8674(18)30154-5

<!--- Library/Tools Version- Python - v3.6.7- argparse- random- numpy- shutil- gc- re- Keras - 2.2.4- Keras-preprocessing - v1.0.5- TensorFlow - 1.12- PIL/Pillow - 5.1.0- Matplotlib - 2.2.2- scikit-learn - 0.19.1- mlxtend - 0.14.0-->

Commands / Running Instruction
tensorboard --logdir=logs
%config IPCompleter.greedy=True

Dataset Details
Dataset Name : Chest X-Ray Images (Pneumonia)
Number of Class : 2
Number/Size of Images : Total : 5856 (1.15 Gigabyte (GB)), Training : 5216 (1.07 Gigabyte (GB)), Validation : 320 (42.8 Megabyte (MB)), Testing : 320 (35.4 Megabyte (MB))

Model Parameters
Machine Learning Library : Keras
Base Model : InceptionV3
Optimizers : Adam
Loss Function : categorical_crossentropy

Training Parameters
Batch Size : 64
Number of Epochs : 50
Training Time : 3 Hours

Output (Prediction/ Recognition / Classification Metrics)
Validation --> Testing
Accuracy : 83.44%
Loss : 0.42
Recall : 94% (highest)

Import Libraries
###Code
import sys
import os
import argparse
import random
import time
import datetime
from collections import Counter
import numpy as np
import pandas as pd
import shutil
from tqdm import tqdm
import inspect
import gc
import re
from PIL import Image
import cv2
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras import models
from keras.models import Model
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, GlobalAveragePooling1D, Flatten, BatchNormalization
from keras import optimizers
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard, ReduceLROnPlateau
from keras import backend as K
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.metrics import recall_score, precision_score, f1_score
from mlxtend.plotting import plot_confusion_matrix
import tensorflow as tf
from IPython.display import display
import seaborn as sns
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
# Creates directory, if directory exists removes if remove parameter is set to True
def create_directory(directory_path, remove=False):
if remove and os.path.exists(directory_path):
try:
shutil.rmtree(directory_path)
os.mkdir(directory_path)
except:
print("Could not remove directory : ", directory_path)
return False
else:
try:
os.mkdir(directory_path)
except:
print("Could not create directory: ", directory_path)
return False
return True
# Removes directory, if directory exists
def remove_directory(directory_path):
if os.path.exists(directory_path):
try:
shutil.rmtree(directory_path)
except:
print("Could not remove directory : ", directory_path)
return False
return True
def clear_directory(directory_path):
dirs_files = os.listdir(directory_path)
for item in dirs_files:
item_path = os.path.join(directory_path, item)
try:
if os.path.isfile(item_path):
os.unlink(item_path)
elif os.path.isdir(item_path):
shutil.rmtree(item_path)
except Exception as e:
print(e)
return True
def remove_empty_folders(path, removeRoot=True):
if not os.path.isdir(path):
return
# remove empty subfolders
files = os.listdir(path)
if len(files):
for f in files:
fullpath = os.path.join(path, f)
if os.path.isdir(fullpath):
remove_empty_folders(fullpath)
# if folder empty, delete it
files = os.listdir(path)
if len(files) == 0 and removeRoot:
print("Removing empty folder:", path)
os.rmdir(path)
# print date and time for given type of representation
def date_time(x):
if x==1:
return 'Timestamp: {:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())
if x==2:
return 'Timestamp: {:%Y-%b-%d %H:%M:%S}'.format(datetime.datetime.now())
if x==3:
return 'Date now: %s' % datetime.datetime.now()
if x==4:
return 'Date today: %s' % datetime.date.today()
# prints a value wrapped in dashed separators, for debugging
def debug(x):
print("-"*40, x, "-"*40)
# Removes everything except alphabetical and selected characters from name string
def name_correct(name):
return re.sub(r'[^a-zA-Z,:]', ' ', name).title()
###Output
_____no_output_____
###Markdown
Data Visualization Function
###Code
def get_reset_subplot_params(nrows, ncols, dpi):
subplot_params = {}
subplot_params["nrows"] = nrows
subplot_params["ncols"] = ncols
subplot_params["figsize_col"] = subplot_params["ncols"]*2.5
subplot_params["figsize_row"] = subplot_params["nrows"]*2.5
subplot_params["dpi"] = dpi
subplot_params["facecolor"] = 'w'
subplot_params["edgecolor"] = 'k'
subplot_params["subplot_kw"] = {'xticks': [], 'yticks': []}
subplot_params["axes.titlesize"] = 'small'
subplot_params["hspace"] = 0.5
subplot_params["wspace"] = 0.3
return subplot_params
def get_reset_plot_params(figsize=(15, 5), title="", xlabel ="", ylabel="", legends=[], title_fontsize = 18, label_fontsize = 14, image_file_name="", save = False, dpi=100, update_image=True):
plot_params = {}
plot_params["figsize"] = figsize
plot_params["title"] = title
plot_params["xlabel"] = xlabel
plot_params["ylabel"] = ylabel
plot_params["legends"] = legends
plot_params["title_fontsize"] = title_fontsize
plot_params["axes.titlesize"] = "small"
plot_params["label_fontsize"] = label_fontsize
plot_params["image_file_name"] = image_file_name
plot_params["save"] = save
plot_params["update_image"] = update_image
plot_params["subplot"] = None
return plot_params
def select_image_by_category(image_dir, image_count_per_category):
classes = os.listdir(image_dir)
class_count = len(classes)
image_file_paths = {}
for i in range(class_count):
subdir_path = image_dir+"/"+classes[i]
subdir_files = os.listdir(subdir_path)
subdir_file_count = len(subdir_files)
subdir_file_mem = {}
subdir_file_index = -1
image_file_paths[classes[i]] = []
for j in range(image_count_per_category):
while subdir_file_index in subdir_file_mem:
subdir_file_index = random.randint(0, subdir_file_count-1)
subdir_file_mem[subdir_file_index] = 1
subdir_file_name = subdir_files[subdir_file_index]
subdir_file_path = subdir_path+ "/" + subdir_file_name
image_file_paths[classes[i]].append(subdir_file_path)
return image_file_paths
def plot_sample_image(image_file_paths, plot_params, subplot_params, update_image=True):
fig, axs = plt.subplots(
nrows=subplot_params["nrows"], ncols=subplot_params["ncols"],
figsize=(subplot_params["figsize_col"], subplot_params["figsize_row"]),
dpi=subplot_params["dpi"], facecolor=subplot_params["facecolor"],
edgecolor=subplot_params["edgecolor"], subplot_kw=subplot_params["subplot_kw"])
plt.rcParams.update({'axes.titlesize': plot_params["axes.titlesize"]})
plt.subplots_adjust(hspace=subplot_params["hspace"], wspace=subplot_params["wspace"])
i=0
for img_filepath in image_file_paths:
img = cv2.imread(img_filepath, 1)
plt.title(img_filepath.split("/")[-1])
plt.subplot(subplot_params["nrows"], subplot_params["ncols"], i+1)
plt.imshow(img)
plt.xticks([])
plt.yticks([])
i=i+1
if plot_params["update_image"] and os.path.exists(plot_params["image_file_name"]):
os.remove(plot_params["image_file_name"])
if plot_params["save"]:
fig.savefig(plot_params["image_file_name"], dpi=plot_params["dpi"])
plt.tight_layout()
plt.show()
def show_class_sample_images(directory, image_count_per_category=5, save=False, dpi=100, update_image=False):
class_count = len(os.listdir(directory))
print("Number of Class: ", class_count)
sample_img_by_class = select_image_by_category(directory, image_count_per_category)
for class_name in sample_img_by_class:
plot_params = get_reset_plot_params(image_file_name="img.png", save = save, dpi=dpi, update_image=update_image)
subplot_params = get_reset_subplot_params(nrows=1, ncols=image_count_per_category, dpi=dpi)
print("%s%s%s"%("-"*55, name_correct(class_name), "-"*55))
plot_sample_image(sample_img_by_class[class_name], plot_params, subplot_params)
print("")
print("%s%s%d%s"%("-"*55, "All Class Printed:", class_count, "-"*55))
# count number of files in each subdirectory of a directory
def subdirectory_file_count(master_directory):
subdirectories = os.listdir(master_directory)
subdirectory_count = len(subdirectories)
subdirectory_names = []
subdirectory_file_counts = []
for subdirectory in subdirectories:
current_directory = os.path.join(master_directory, subdirectory)
file_count = len(os.listdir(current_directory))
subdirectory_names.append(subdirectory)
subdirectory_file_counts.append(file_count)
return subdirectory_names, subdirectory_file_counts
# show barplot
def bar_plot(x, y, plot_property):
if plot_property['subplot']:
plt.subplot(plot_property['subplot'])
sns.barplot(x=x, y=y)
plt.title(plot_property['title'], fontsize=plot_property['title_fontsize'])
plt.xlabel(plot_property['xlabel'], fontsize=plot_property['label_fontsize'])
plt.ylabel(plot_property['ylabel'], fontsize=plot_property['label_fontsize'])
plt.xticks(range(len(x)), x)
# show bar plot for count of labels in subdirectory of a directory
def count_bar_plot(master_directory, plot_property):
dir_name, dir_file_count = subdirectory_file_count(master_directory)
x = [name_correct(i) for i in dir_name]
# x = dir_name
y = dir_file_count
bar_plot(x, y, plot_property)
# show bar plot for count of labels in subdirectory of a training, validation, testing directory
def show_train_val_test(training_dir, validation_dir, testing_dir, plot_property):
plt.figure(figsize=plot_property['figsize'])
title = plot_property['title']
plot_property['title'] = title + " (Training)"
subplot_no = plot_property['subplot']
count_bar_plot(training_dir, plot_property)
plot_property['title'] = title + " (Validation)"
plot_property['subplot'] = subplot_no+1
count_bar_plot(validation_dir, plot_property)
plot_property['title'] = title + " (Testing)"
plot_property['subplot'] = subplot_no + 2
count_bar_plot(testing_dir, plot_property)
plt.show()
# reset tensorflow graph tp free up memory and resource allocation
def reset_graph(model=None):
if model:
try:
del model
except:
return False
tf.reset_default_graph()
K.clear_session()
gc.collect()
return True
# reset callbacks
def reset_callbacks(checkpoint=None, reduce_lr=None, early_stopping=None, tensorboard=None):
checkpoint = None
reduce_lr = None
early_stopping = None
tensorboard = None
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
reset_graph()
# Configure input/ output directory
# Configure training, validation, testing directory
input_directory = r"data/input/"
output_directory = r"data/output/"
training_dir = input_directory + r"train"
validation_dir = input_directory + r"val"
testing_dir = input_directory + r"test"
figure_directory = r"data/output/figures"
figure_directory_input = figure_directory+r"/data"
plot_input_dist = figure_directory + r"/dist.png"
file_name_pred_batch = figure_directory+r"/result"
file_name_pred_sample = figure_directory+r"/sample"
show_class_sample_images(training_dir, image_count_per_category=5, save=False, dpi=100, update_image=False)
plot_params = get_reset_plot_params()
plot_params['figsize'] = (18,4)
plot_params['title_fontsize'] = 13
plot_params['label_fontsize'] = 10
plot_params['title'] = "Number of Cases"
plot_params['subplot'] = 131
show_train_val_test(training_dir, validation_dir, testing_dir, plot_params)
classes = os.listdir(training_dir)
classes = [name_correct(i) for i in classes]
###Output
_____no_output_____
###Markdown
Image Preprocessing/ Augmentation/ Transformation for Training, Validation, Testing and Dataset
###Code
rescale = 1./255
# batch_size = 32
batch_size = 64
# target_size = (150, 150)
target_size = (299, 299)
# color_mode = "grayscale"
color_mode = "rgb"
class_mode = "binary"
shuffle = True
train_datagen = ImageDataGenerator(
rescale=rescale,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# train_datagen = ImageDataGenerator(rescale=rescale)
train_generator = train_datagen.flow_from_directory(
training_dir,
target_size=target_size,
# class_mode=class_mode,
batch_size=batch_size)
# train_generator = train_datagen.flow_from_directory(
# training_dir,
# target_size=target_size,
# color_mode=color_mode,
# class_mode=class_mode,
# batch_size=batch_size,
# shuffle=shuffle)
validation_datagen = ImageDataGenerator(rescale=rescale)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size=target_size,
# class_mode=class_mode,
batch_size=batch_size)
test_datagen = ImageDataGenerator(rescale=rescale)
test_generator = test_datagen.flow_from_directory(
testing_dir,
target_size=target_size,
# class_mode=class_mode,
batch_size=batch_size)
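# NOTE: flow_from_directory shuffles by default; for label-order-dependent evaluation later on
# (comparing generator.classes against predict_generator output), shuffle=False would be needed
# on the validation and test generators.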
reset_graph()
###Output
_____no_output_____
###Markdown
Training Files Configuration
###Code
def reset_directory(output_directory):
main_model_dir = output_directory + r"models/"
main_log_dir = output_directory + r"logs/"
clear_directory(main_log_dir)
remove_empty_folders(main_model_dir, False)
model_dir = main_model_dir + time.strftime('%Y-%m-%d %H-%M-%S') + "/"
log_dir = main_log_dir + time.strftime('%Y-%m-%d %H-%M-%S')
create_directory(model_dir, remove=True)
create_directory(log_dir, remove=True)
model_file = model_dir + "{epoch:02d}-val_acc-{val_acc:.2f}-val_loss-{val_loss:.2f}.hdf5"
return model_dir, log_dir, model_file
# Load and configure model InceptionV3 for fine-tuning with new class labels
def get_model(summary=False):
# base_model = InceptionV3(weights=None, include_top=False, input_shape=(3, 150, 150))
base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = Dropout(0.5)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(128, activation='relu')(x)
# x = Dense(1024, activation='relu')(x)
x = BatchNormalization()(x)
predictions = Dense(2, activation='sigmoid')(x)
# predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
layer.trainable = False
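    # The 249 cutoff appears to follow the Keras InceptionV3 fine-tuning example, which freezes
    # the first 249 layers and leaves the top two inception blocks trainable (assumed intent,
    # not stated in the original notebook).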
for layer in model.layers[:249]:
layer.trainable = False
for layer in model.layers[249:]:
layer.trainable = True
if summary:
model.summary()
return model
###Output
_____no_output_____
###Markdown
Callbacks
###Code
def get_callbacks(model_file, log_dir):
print("Settting Callbacks at ", date_time(1))
monitor = 'val_loss'
checkpoint = ModelCheckpoint(
model_file,
monitor=monitor,
save_best_only=True)
early_stopping = EarlyStopping(
monitor=monitor,
patience=10,
restore_best_weights=True)
tensorboard = TensorBoard(
log_dir=log_dir,
batch_size=batch_size)
# reduce_lr = ReduceLROnPlateau(
# monitor=monitor,
# patience=3,
# verbose=1)
reduce_lr = ReduceLROnPlateau(
monitor=monitor,
min_lr=0.0000000001)
callbacks = [checkpoint, reduce_lr, tensorboard]
# callbacks = [checkpoint, tensorboard]
print("Set Callbacks", date_time(1))
return callbacks
###Output
_____no_output_____
###Markdown
Training/Fine-Tuning the Base Model (InceptionV3) with New Class Labels
###Code
def train(model, train_generator, validation_generator, callbacks, epochs=10, lr=0.1, base=False):
print("Starting Trainning Model at ", date_time(1))
print("%s%d%s%f\n%s"%("Number of Epochs: ", epochs, "\nLearning Rate: ", lr, "-"*120))
if base:
reset_graph()
model = get_model()
steps_per_epoch=len(train_generator)
validation_steps=len(validation_generator)
optimizer=optimizers.Adam(lr=lr)
loss='binary_crossentropy'
metrics=['accuracy']
model.compile(optimizer, loss=loss, metrics=metrics)
history = model.fit_generator(
train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
verbose=1,
callbacks=callbacks,
validation_data=validation_generator,
validation_steps=validation_steps)
print("Completed Model Trainning at", date_time(1))
return model
# from sklearn.metrics import precision_score, recall_score, f1_score
def model_performance(cur_model, generator, title):
print("%s%s%s"%("-"*50, title, "-"*50))
result = cur_model.evaluate_generator(generator, steps=len(generator))
print("%s%.2f "% ("Loss : ", result[0]))
print("%s%.2f%s"% ("Accuracy : ", result[1]*100, "%"))
def extended_performance(model, generator, title):
print("%s%s%s"%("-"*50, title, "-"*50))
y_pred = model.predict_generator(generator, steps=len(generator))
y_pred = y_pred.argmax(axis=-1)
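    # generator.classes is in directory (unshuffled) order; it only lines up with the predictions
    # above if the generator was created with shuffle=False.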
y_true = generator.classes
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
print("%s%.2f%s"% ("Precision : ", precision*100, "%"))
print("%s%.2f%s"% ("Recall : ", recall*100, "%"))
print("%s%.2f%s"% ("F1-Score : ", f1*100, "%"))
CM = confusion_matrix(y_true, y_pred)
fig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(10,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(len(classes)), classes, fontsize=12)
plt.yticks(range(len(classes)), classes, fontsize=12)
plt.show()
cls_report_print = classification_report(y_true, y_pred, target_names=classes)
cls_report = classification_report(y_true, y_pred, target_names=classes, output_dict=True)
print(cls_report_print)
def grid_search(train_generator, validation_generator, output_directory):
reset_graph()
reset_callbacks()
model_dir, log_dir, model_file = reset_directory(output_directory)
callbacks = get_callbacks(model_file, log_dir)
lr=1
for i in range(10):
print("i: ", i)
print("lr:", lr)
print("-"*120)
lr = lr/10
if i==0:
model = train(None, train_generator, validation_generator, callbacks, epochs=1, lr=lr, base=True)
else:
model = train(model, train_generator, validation_generator, callbacks, epochs=1, lr=lr, base=False)
print("-"*120)
print("-"*120)
model_performance(model, test_generator, "Test")
extended_performance(model, test_generator, "Test")
print("-"*120)
print("*"*120)
i = i + 1
return model
model = grid_search(train_generator, validation_generator, output_directory)
###Output
[WinError 5] Access is denied: 'data/output/logs/2018-12-14 07-48-55\\events.out.tfevents.1544791770.DESKTOP-BU5QMIC'
[WinError 5] Access is denied: 'data/output/logs/2018-12-14 07-53-22\\events.out.tfevents.1544792030.DESKTOP-BU5QMIC'
[WinError 145] The directory is not empty: 'data/output/logs/2018-12-14 07-59-57'
Removing empty folder: data/output/models/2018-12-14 07-59-57
Settting Callbacks at Timestamp: 2018-12-14 08:00:51
Set Callbacks Timestamp: 2018-12-14 08:00:51
i: 0
lr: 1
------------------------------------------------------------------------------------------------------------------------
Starting Trainning Model at Timestamp: 2018-12-14 08:00:51
Number of Epochs: 1
Learning Rate: 0.100000
------------------------------------------------------------------------------------------------------------------------
Epoch 1/1
82/82 [==============================] - ETA: 9:29 - loss: 0.7925 - acc: 0.554 - ETA: 5:13 - loss: 0.6146 - acc: 0.683 - ETA: 3:47 - loss: 0.6384 - acc: 0.703 - ETA: 3:33 - loss: 0.6863 - acc: 0.732 - ETA: 3:30 - loss: 0.7098 - acc: 0.748 - ETA: 3:25 - loss: 0.7685 - acc: 0.743 - ETA: 3:21 - loss: 0.7159 - acc: 0.750 - ETA: 3:14 - loss: 0.6604 - acc: 0.768 - ETA: 3:08 - loss: 0.6065 - acc: 0.789 - ETA: 3:03 - loss: 0.6106 - acc: 0.795 - ETA: 2:58 - loss: 0.6096 - acc: 0.795 - ETA: 2:57 - loss: 0.6678 - acc: 0.785 - ETA: 2:52 - loss: 0.6582 - acc: 0.789 - ETA: 2:48 - loss: 0.6390 - acc: 0.796 - ETA: 2:44 - loss: 0.6165 - acc: 0.803 - ETA: 2:41 - loss: 0.5916 - acc: 0.811 - ETA: 2:37 - loss: 0.5818 - acc: 0.815 - ETA: 2:34 - loss: 0.5625 - acc: 0.823 - ETA: 2:32 - loss: 0.5370 - acc: 0.830 - ETA: 2:29 - loss: 0.5261 - acc: 0.832 - ETA: 2:26 - loss: 0.5164 - acc: 0.834 - ETA: 2:24 - loss: 0.5032 - acc: 0.836 - ETA: 2:22 - loss: 0.4986 - acc: 0.835 - ETA: 2:19 - loss: 0.4816 - acc: 0.841 - ETA: 2:16 - loss: 0.4696 - acc: 0.845 - ETA: 2:14 - loss: 0.4568 - acc: 0.849 - ETA: 2:11 - loss: 0.4452 - acc: 0.852 - ETA: 2:09 - loss: 0.4354 - acc: 0.855 - ETA: 2:06 - loss: 0.4319 - acc: 0.858 - ETA: 2:03 - loss: 0.4207 - acc: 0.861 - ETA: 2:01 - loss: 0.4120 - acc: 0.863 - ETA: 1:59 - loss: 0.4042 - acc: 0.865 - ETA: 1:56 - loss: 0.3983 - acc: 0.867 - ETA: 1:54 - loss: 0.3993 - acc: 0.865 - ETA: 1:51 - loss: 0.3901 - acc: 0.867 - ETA: 1:49 - loss: 0.3875 - acc: 0.868 - ETA: 1:47 - loss: 0.3803 - acc: 0.870 - ETA: 1:45 - loss: 0.3750 - acc: 0.871 - ETA: 1:42 - loss: 0.3667 - acc: 0.875 - ETA: 1:40 - loss: 0.3593 - acc: 0.877 - ETA: 1:38 - loss: 0.3543 - acc: 0.879 - ETA: 1:35 - loss: 0.3505 - acc: 0.880 - ETA: 1:33 - loss: 0.3472 - acc: 0.882 - ETA: 1:31 - loss: 0.3415 - acc: 0.884 - ETA: 1:28 - loss: 0.3355 - acc: 0.885 - ETA: 1:26 - loss: 0.3302 - acc: 0.887 - ETA: 1:24 - loss: 0.3255 - acc: 0.889 - ETA: 1:21 - loss: 0.3202 - acc: 0.891 - ETA: 1:19 - loss: 0.3147 - acc: 0.893 - ETA: 1:17 - loss: 0.3098 - acc: 0.894 - ETA: 1:14 - loss: 0.3055 - acc: 0.896 - ETA: 1:12 - loss: 0.3011 - acc: 0.897 - ETA: 1:10 - loss: 0.2967 - acc: 0.899 - ETA: 1:08 - loss: 0.2929 - acc: 0.899 - ETA: 1:06 - loss: 0.2882 - acc: 0.901 - ETA: 1:03 - loss: 0.2849 - acc: 0.902 - ETA: 1:01 - loss: 0.2822 - acc: 0.903 - ETA: 58s - loss: 0.2797 - acc: 0.904 - ETA: 56s - loss: 0.2764 - acc: 0.90 - ETA: 54s - loss: 0.2727 - acc: 0.90 - ETA: 51s - loss: 0.2702 - acc: 0.90 - ETA: 49s - loss: 0.2690 - acc: 0.90 - ETA: 46s - loss: 0.2691 - acc: 0.90 - ETA: 44s - loss: 0.2663 - acc: 0.90 - ETA: 41s - loss: 0.2638 - acc: 0.90 - ETA: 39s - loss: 0.2630 - acc: 0.90 - ETA: 36s - loss: 0.2626 - acc: 0.90 - ETA: 34s - loss: 0.2603 - acc: 0.91 - ETA: 31s - loss: 0.2597 - acc: 0.91 - ETA: 29s - loss: 0.2588 - acc: 0.91 - ETA: 26s - loss: 0.2569 - acc: 0.91 - ETA: 24s - loss: 0.2546 - acc: 0.91 - ETA: 22s - loss: 0.2527 - acc: 0.91 - ETA: 19s - loss: 0.2529 - acc: 0.91 - ETA: 17s - loss: 0.2504 - acc: 0.91 - ETA: 14s - loss: 0.2484 - acc: 0.91 - ETA: 12s - loss: 0.2461 - acc: 0.91 - ETA: 9s - loss: 0.2436 - acc: 0.9164 - ETA: 7s - loss: 0.2420 - acc: 0.916 - ETA: 4s - loss: 0.2395 - acc: 0.917 - ETA: 2s - loss: 0.2380 - acc: 0.918 - 203s 2s/step - loss: 0.2358 - acc: 0.9191 - val_loss: 0.3153 - val_acc: 0.8734
Completed Model Trainning at Timestamp: 2018-12-14 08:05:17
------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------Test--------------------------------------------------
Loss : 0.50
Accuracy : 81.09%
--------------------------------------------------Test--------------------------------------------------
Precision : 59.82%
Recall : 67.34%
F1-Score : 63.36%
###Markdown
Model Performance Visualization over the Epochs
###Code
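# NOTE: this cell assumes `history` is the Keras History object returned by fit_generator inside
# train(); as written, train()/grid_search() above do not return it, so it would need to be
# captured (e.g., returned alongside the model) before running this cell.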
xlabel = 'Epoch'
legends = ['Training', 'Validation']
ylim_pad = [0.01, 0.1]
plt.figure(figsize=(15, 5))
# Plot training & validation Accuracy values
y1 = history.history['acc']
y2 = history.history['val_acc']
min_y = min(min(y1), min(y2))-ylim_pad[0]
max_y = max(max(y1), max(y2))+ylim_pad[0]
plt.subplot(121)
plt.plot(y1)
plt.plot(y2)
plt.title('Model Accuracy', fontsize=17)
plt.xlabel(xlabel, fontsize=15)
plt.ylabel('Accuracy', fontsize=15)
plt.ylim(min_y, max_y)
plt.legend(legends, loc='upper left')
plt.grid()
# Plot training & validation loss values
y1 = history.history['loss']
y2 = history.history['val_loss']
min_y = min(min(y1), min(y2))-ylim_pad[1]
max_y = max(max(y1), max(y2))+ylim_pad[1]
plt.subplot(122)
plt.plot(y1)
plt.plot(y2)
plt.title('Model Loss', fontsize=17)
plt.xlabel(xlabel, fontsize=15)
plt.ylabel('Loss', fontsize=15)
plt.ylim(min_y, max_y)
plt.legend(legends, loc='upper left')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Test Saved Models
###Code
dir_name = r"data/output/models/"
dirs = os.listdir(dir_name)
for i in range(len(dirs)):
print(i, dirs[i])
cur_dir =dir_name+dirs[7]+"/"
model_names = os.listdir(cur_dir)
for i in range(len(model_names)):
print(i, model_names[i])
model_file = cur_dir+model_names[7]
print(model_file)
cur_model = keras.models.load_model(model_file)
model_performance(cur_model, train_generator, title="Training")
model_performance(cur_model, validation_generator, title="Validation")
model_performance(cur_model, test_generator, title="Testing")
print("%s%s%s"%("-"*50, "END", "-"*50))
extended_performance(cur_model, train_generator, title="Training")
extended_performance(cur_model, validation_generator, title="Validation")
extended_performance(cur_model, test_generator, title="Testing")
print("%s%s%s"%("-"*50, "END", "-"*50))
numofbatch = len(test_generator)
batch_no = random.randint(0, numofbatch-1)
y_img_batch, y_true_batch = test_generator[batch_no]
y_true_batch = y_true_batch.argmax(axis=-1)
y_pred_batch = model.predict(y_img_batch)
y_pred_batch = y_pred_batch.argmax(axis=-1)
sizeofbatch = len(y_true_batch)
print("-"*35)
print("%s%d"% ("Selected Batch No : ", batch_no))
print("-"*35)
print("%s%d"% ("Batch Size : ", len(y_pred_batch)))
print("-"*35)
print("%s%.2f%s"% ("Accuracy : ", np.mean(y_true==y_pred)*100, "%"))
print("-"*35)
###Output
_____no_output_____
###Markdown
Visualization
###Code
def show_predictions(y_img_batch, y_true, y_pred, image_file_name, plt, figure_map, sample=True):
# figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
fig, axs = plt.subplots(nrows=figure_map["rows"], ncols=figure_map["cols"], figsize=(figure_map["figsize_col"], figure_map["figsize_row"]),
dpi=figure_map["dpi"], facecolor=figure_map["facecolor"], edgecolor=figure_map["edgecolor"],
subplot_kw=figure_map["subplot_kw"])
plt.rcParams.update({'axes.titlesize': figure_map["axes.titlesize"]})
plt.subplots_adjust(hspace=figure_map["hspace"], wspace=figure_map["wspace"])
m = {}
length = len(y_true)
for i in range(0, count):
num = i
if sample:
num = random.randint(0, length-1)
while num in m:
num = int(random.randint(0, length-1))
m[num]=1
plt.subplot(rows, cols, i+1)
plt.imshow(y_img_batch[num])
plt.xticks([])
plt.yticks([])
original = figure_map['class_map'][y_true[num]]
predicted = figure_map['class_map'][y_pred[num]]
title_text = ("%s%s%s%s%s"%(true_label_title_prefix, original, "\n", pred_label_title_prefix, predicted))
if original==predicted:
plt.title(title_text)
else:
plt.title(title_text, color=false_prediction_label_color)
if update_image and os.path.exists(image_file_name):
os.remove(image_file_name)
fig.savefig(image_file_name, dpi=dpi)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Visualization
###Code
figure_directory = "data/output/figures"
image_file_name_batch = figure_directory+"/result"
image_file_name_sample = figure_directory+"/sample"
cols = 8
batch_size_t = len(y_true_batch)
if batch_size_t<8:
cols = batch_size_t
rows = batch_size_t/cols
if batch_size_t%cols==0:
rows = int(batch_size_t/cols)
else:
rows = int(batch_size_t/cols)+1
count = batch_size_t
figsize_col = cols*2.5
figsize_row = rows*2.5
hspace = 0.5
wspace = 0.3
facecolor='w'
edgecolor='k'
titlesize = 'small'
true_prediction_label_color='black'
false_prediction_label_color='red'
true_label_title_prefix = "org: "
pred_label_title_prefix = "pred: "
dpi=100
update_image = True
if not os.path.exists(figure_directory):
os.mkdir(figure_directory)
batch_size_tmp = batch_size_t
figure_map = {}
figure_map["rows"]=rows
figure_map["cols"]=cols
figure_map["figsize_col"]=figsize_col
figure_map["figsize_row"]=figsize_row
figure_map["facecolor"]=facecolor
figure_map["edgecolor"]=edgecolor
figure_map["subplot_kw"]={'xticks': [], 'yticks': []}
figure_map["axes.titlesize"]=titlesize
figure_map["hspace"]=hspace
figure_map["wspace"]=wspace
figure_map["update_image"]=update_image
figure_map["dpi"]=dpi
figure_map["count"]=count
figure_map["batch_size_tmp"]=batch_size_t
figure_map["true_label_title_prefix"]=true_label_title_prefix
figure_map["pred_label_title_prefix"]=pred_label_title_prefix
figure_map["false_prediction_label_color"]=false_prediction_label_color
figure_map['class_map'] = {v: k for k, v in test_generator.class_indices.items()}
###Output
_____no_output_____
###Markdown
Visualization 1 (Random Batch)Visualization of performance of a random test dataset batch
###Code
show_predictions(y_img_batch, y_true_batch, y_pred_batch, image_file_name_batch, plt, figure_map, False)
###Output
_____no_output_____
###Markdown
Visualization 2 (Random) Visualization of performance of a few random images from a random batch
###Code
cols = 4
rows = 2
if batch_size_t<4:
cols = 1
count = cols*rows
figsize_col = cols*2.5
figsize_row = rows*2.5
figure_map["rows"]=rows
figure_map["cols"]=cols
figure_map["figsize_col"]=figsize_col
figure_map["figsize_row"]=figsize_row
figure_map["count"]=count
figure_map["batch_size_tmp"]=batch_size_t
show_predictions(y_img_batch, y_true_batch, y_pred_batch, image_file_name_batch, plt, figure_map)
###Output
_____no_output_____ |
docs/source/notebooks/GP-smoothing.ipynb | ###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without the loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
###Code
import pymc3 as pm
from theano import shared
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
###Output
_____no_output_____
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sd=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
_____no_output_____
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
_____no_output_____
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
_____no_output_____
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches the process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
###Output
_____no_output_____
###Markdown
Interactive smoothingBelow you can interactively test different levels of smoothing. Notice, because we use a **shared Theano variable** to specify the smoothing above, the model doesn't need to be recompiled every time you move the slider, and so the **inference is fast**!
###Code
from ipywidgets import interact
@interact(smoothing=(0.01, 0.99))
def plot_smoothed(smoothing=0.9):
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
_____no_output_____
###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without the loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
###Code
import pymc3 as pm
from theano import shared
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
###Output
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sigma=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
INFO (theano.gof.compilelock): Waiting for existing lock by process '16678' (I am process '16664')
INFO (theano.gof.compilelock): To manually release the lock, delete /.theano/compiledir_Linux-4.15--generic-x86_64-with-debian-buster-sid-x86_64-3.6.8-64/lock_dir
INFO (theano.gof.compilelock): Waiting for existing lock by process '16678' (I am process '16664')
INFO (theano.gof.compilelock): To manually release the lock, delete /.theano/compiledir_Linux-4.15--generic-x86_64-with-debian-buster-sid-x86_64-3.6.8-64/lock_dir
INFO (theano.gof.compilelock): Waiting for existing lock by process '16678' (I am process '16664')
INFO (theano.gof.compilelock): To manually release the lock, delete /.theano/compiledir_Linux-4.15--generic-x86_64-with-debian-buster-sid-x86_64-3.6.8-64/lock_dir
INFO (theano.gof.compilelock): Waiting for existing lock by process '16678' (I am process '16664')
INFO (theano.gof.compilelock): To manually release the lock, delete /.theano/compiledir_Linux-4.15--generic-x86_64-with-debian-buster-sid-x86_64-3.6.8-64/lock_dir
INFO (theano.gof.compilelock): Waiting for existing lock by process '16678' (I am process '16664')
INFO (theano.gof.compilelock): To manually release the lock, delete /.theano/compiledir_Linux-4.15--generic-x86_64-with-debian-buster-sid-x86_64-3.6.8-64/lock_dir
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/repos/pymc3/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.
warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.')
/repos/pymc3/pymc3/tuning/starting.py:102: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
warnings.warn('In future versions, set the optimization algorithm with a string. '
logp = -4.6549e+06: 0%| | 16/5000 [00:01<07:29, 11.10it/s]
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/repos/pymc3/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.
warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.')
/repos/pymc3/pymc3/tuning/starting.py:102: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
warnings.warn('In future versions, set the optimization algorithm with a string. '
logp = -5.2675e+06: 1%| | 37/5000 [00:00<00:06, 741.11it/s]
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches the process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
###Output
/repos/pymc3/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.
warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.')
/repos/pymc3/pymc3/tuning/starting.py:102: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
warnings.warn('In future versions, set the optimization algorithm with a string. '
logp = -6.6522e+06: 1%| | 52/5000 [00:00<00:04, 1058.44it/s]
logp = -1.3071e+07: 2%|▏ | 108/5000 [00:00<00:05, 835.78it/s]
logp = -3.0073e+07: 5%|▍ | 238/5000 [00:00<00:06, 737.98it/s]
logp = -4.3473e+07: 10%|█ | 520/5000 [00:00<00:05, 845.90it/s]
###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
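As a side note, the claim that the estimates of $a$ and $b$ do not involve $\sigma$ can be made concrete: under these assumptions the maximum-likelihood estimates coincide with ordinary least squares,$$\hat b = \frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sum_i (x_i-\bar x)^2}, \qquad \hat a = \bar y - \hat b\,\bar x,$$and $\sigma$ can then be estimated from the residuals, e.g. $\hat\sigma^2 = \frac{1}{N}\sum_i (y_i - \hat a - \hat b\,x_i)^2$.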
###Code
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
from theano import shared
###Output
_____no_output_____
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sigma=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
_____no_output_____
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches a process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.9.0
re 2.2.1
numpy 1.18.5
matplotlib 3.2.1
matplotlib.pylab 1.18.5
logging 0.5.1.2
last updated: Fri Jun 12 2020
CPython 3.7.7
IPython 7.15.0
watermark 2.0.2
###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
###Code
import pymc3 as pm
from theano import shared
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
###Output
_____no_output_____
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sigma=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
_____no_output_____
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches a process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.9.0
re 2.2.1
numpy 1.18.5
matplotlib 3.2.1
matplotlib.pylab 1.18.5
logging 0.5.1.2
last updated: Fri Jun 12 2020
CPython 3.7.7
IPython 7.15.0
watermark 2.0.2
###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
###Code
import pymc3 as pm
from theano import shared
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
###Output
_____no_output_____
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sigma=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
_____no_output_____
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
_____no_output_____
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
###Output
_____no_output_____
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches a process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
###Output
_____no_output_____
###Markdown
Gaussian Process (GP) smoothingThis example deals with the case when we want to **smooth** the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis. It is important to note that we are **not** dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.If we assume the functional dependency between $x$ and $y$ is **linear** then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.However, the **functional form** of $y=f(x)$ is **not always known in advance**, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
###Code
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
###Output
_____no_output_____
###Markdown
Let's try a linear regression firstAs humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
###Code
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
###Output
_____no_output_____
###Markdown
Linear regression model recapThe linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:$$ y_i = a + b\, x_i + \epsilon_i $$where the observation errors at each data point satisfy:$$ \epsilon_i \sim N(0, \sigma^2) $$with the same $\sigma$, and the errors are independent:$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$. Gaussian Process smoothing modelThis model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by **controlling the standard deviation of the unobserved Brownian motion** we can achieve different levels of smoothness of the recovered functional dependency at the original data points. The particular model we are going to discuss assumes that the observed data points are **evenly spaced** across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without loss of generality. The model is described as follows:\begin{equation}\begin{aligned}z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\z_1 & \sim ImproperFlat(-\infty,\infty) \\y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)\end{aligned}\end{equation}where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression. Let's describe the above GP-smoothing model in PyMC3
###Code
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
from theano import shared
###Output
_____no_output_____
###Markdown
Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
###Code
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sigma=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
###Output
_____no_output_____
###Markdown
Let's also make a helper function for inferring the most likely values of $z$:
###Code
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
###Output
_____no_output_____
###Markdown
Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice. Exploring different levels of smoothingLet's try to allocate 50% variance to the noise, and see if the result matches our expectations.
###Code
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title(f"Smoothing={smoothing}");
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
It appears that the variance is split evenly between the noise and the hidden process, as expected. Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
###Code
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title(f"Smoothing={smoothing}");
###Output
/dependencies/pymc3/pymc3/tuning/starting.py:130: UserWarning: In future versions, set the optimization algorithm with a string. For example, use `method="L-BFGS-B"` instead of `fmin=sp.optimize.fmin_l_bfgs_b"`.
"In future versions, set the optimization algorithm with a string. "
###Markdown
Smoothing "to the limits"By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches a process which almost doesn't change over the domain of $x$:
###Code
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title(f'Smoothing={smoothing:05.4f}')
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.9.0
re 2.2.1
numpy 1.18.5
matplotlib 3.2.1
matplotlib.pylab 1.18.5
logging 0.5.1.2
last updated: Fri Jun 12 2020
CPython 3.7.7
IPython 7.15.0
watermark 2.0.2
|
Part1/Ch3/.ipynb_checkpoints/Deep Networks-checkpoint.ipynb | ###Markdown
**Imports**
###Code
# importing required libraries
import numpy as np
import pandas as pd
import pickle # saving and loading trained model
from os import path
# importing required libraries for normalizing data
from sklearn import preprocessing
from sklearn.preprocessing import (StandardScaler, OrdinalEncoder,LabelEncoder, MinMaxScaler, OneHotEncoder)
from sklearn.preprocessing import Normalizer, MaxAbsScaler , RobustScaler, PowerTransformer
# importing library for plotting
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn.metrics import accuracy_score # for calculating accuracy of model
from sklearn.model_selection import train_test_split # for splitting the dataset for training and testing
from sklearn.metrics import classification_report # for generating a classification report of model
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from keras.layers import Dense # importing dense layer
from keras.models import Sequential #importing Sequential layer
from keras.layers import Input
from keras.models import Model
# representation of model layers
from keras.utils.vis_utils import plot_model
###Output
_____no_output_____
###Markdown
**Reading And Pre-Processing*** Load dataset* Pre-process data Reading Data Data Description as presented in https://www.unb.ca/cic/datasets/nsl.htmlNSL-KDD is a data set suggested to solve some of the inherent problems of the KDD'99 data. Although, this new version of the KDD data set still suffers from some of the problems discussed by McHugh and may not be a perfect representative of existing real networks, because of the lack of public data sets for network-based IDSs, we believe it still can be applied as an effective benchmark data set to help researchers compare different intrusion detection methods.Furthermore, the number of records in the NSL-KDD train and test sets are reasonable. This advantage makes it affordable to run the experiments on the complete set without the need to randomly select a small portion. Consequently, evaluation results of different research work will be consistent and comparable.Data files- KDDTrain+.ARFF: The full NSL-KDD train set with binary labels in ARFF format- KDDTrain+.TXT: The full NSL-KDD train set including attack-type labels and difficulty level in CSV format- KDDTrain+_20Percent.ARFF: A 20% subset of the KDDTrain+.arff file- KDDTrain+_20Percent.TXT: A 20% subset of the KDDTrain+.txt file- KDDTest+.ARFF: The full NSL-KDD test set with binary labels in ARFF format- KDDTest+.TXT: The full NSL-KDD test set including attack-type labels and difficulty level in CSV format- KDDTest-21.ARFF: A subset of the KDDTest+.arff file which does not include records with difficulty level of 21 out of 21- KDDTest-21.TXT: A subset of the KDDTest+.txt file which does not include records with difficulty level of 21 out of 21
###Code
train = 'NSL-KDD/KDDTrain+.txt'
test = 'NSL-KDD/KDDTest+.txt'
test21 = 'NSL-KDD/KDDTest-21.txt'
feature=["duration","protocol_type","service","flag","src_bytes","dst_bytes","land","wrong_fragment","urgent","hot",
"num_failed_logins","logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations","num_shells",
"num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count","srv_count","serror_rate","srv_serror_rate",
"rerror_rate","srv_rerror_rate","same_srv_rate","diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate","dst_host_srv_diff_host_rate","dst_host_serror_rate",
"dst_host_srv_serror_rate","dst_host_rerror_rate","dst_host_srv_rerror_rate","label","difficulty"]
flag=['OTH','RSTOS0','SF','SH','RSTO','S2','S1','REJ','S3','RSTR','S0']
protocol_type=['tcp','udp','icmp']
service=['http','smtp','finger','domain_u','auth','telnet','ftp','eco_i','ntp_u','ecr_i','other','private','pop_3','ftp_data',
'rje','time','mtp','link','remote_job','gopher','ssh','name','whois','domain','login','imap4','daytime','ctf','nntp',
'shell','IRC','nnsp','http_443','exec','printer','efs','courier','uucp','klogin','kshell','echo','discard','systat',
'supdup','iso_tsap','hostnames','csnet_ns','pop_2','sunrpc','uucp_path','netbios_ns','netbios_ssn','netbios_dgm',
'sql_net','vmnet','bgp','Z39_50','ldap','netstat','urh_i','X11','urp_i','pm_dump','tftp_u','tim_i','red_i','icmp',
'http_2784','harvest','aol','http_8001']
binary_attack=['normal','ipsweep', 'nmap', 'portsweep','satan', 'saint', 'mscan','back', 'land', 'neptune', 'pod', 'smurf',
'teardrop', 'apache2', 'udpstorm', 'processtable','mailbomb','buffer_overflow', 'loadmodule', 'perl', 'rootkit',
'xterm', 'ps', 'sqlattack','ftp_write', 'guess_passwd', 'imap', 'multihop','phf', 'spy', 'warezclient',
'warezmaster','snmpgetattack','named', 'xlock', 'xsnoop','sendmail', 'httptunnel', 'worm', 'snmpguess']
multiclass_attack={ 'normal': 'normal',
'probe': ['ipsweep.', 'nmap.', 'portsweep.','satan.', 'saint.', 'mscan.'],
'dos': ['back.', 'land.', 'neptune.', 'pod.', 'smurf.','teardrop.', 'apache2.', 'udpstorm.', 'processtable.','mailbomb.'],
'u2r': ['buffer_overflow.', 'loadmodule.', 'perl.', 'rootkit.','xterm.', 'ps.', 'sqlattack.'],
'r2l': ['ftp_write.', 'guess_passwd.', 'imap.', 'multihop.','phf.', 'spy.', 'warezclient.', 'warezmaster.','snmpgetattack.',
'named.', 'xlock.', 'xsnoop.','sendmail.', 'httptunnel.', 'worm.', 'snmpguess.']}
train_data=pd.read_csv(train,names=feature)
test_data=pd.read_csv(test,names=feature)
test_21 = pd.read_csv(test21, names= feature)
train_data
# remove the 'difficulty' attribute
train_data.drop(['difficulty'],axis=1,inplace=True)
train_data.shape
###Output
_____no_output_____
###Markdown
Data Type Checking and Statistical Reports
###Code
train_data.info()
train_data.describe().T
# number of attack labels
train_data['label'].value_counts()
###Output
_____no_output_____
###Markdown
**Data Analysis*** Data Visualization* Data Mining Label
###Code
# number of attack labels
train_data['label'].value_counts()
###Output
_____no_output_____
###Markdown
Within the data set exist **4 different classes of attacks**: * **Denial of Service (DoS)*** **Probe*** **User to Root (U2R)*** **Remote to Local (R2L)** **DoS** is an attack that **tries to shut down traffic flow** to and from the target system. **The IDS is flooded with an abnormal amount of traffic**, which the **system can’t handle**, and **shuts down to protect itself**. This prevents normal traffic from visiting a network. An example of this could be an online retailer getting flooded with online orders on a day with a big sale; because the network can’t handle all the requests, it shuts down, preventing paying customers from purchasing anything. **This is the most common attack in the data set**. **Probe** or surveillance is an attack that **tries to get information from a network**. The goal here is to act like a thief and **steal important information**, whether it be personal information about clients or banking information. **U2R** is an attack that **starts off with a normal user account** and **tries to gain access to the system or network as a super-user (root)**. The attacker attempts to exploit the vulnerabilities in a system to **gain root privileges/access**. **R2L** is an attack that tries to **gain local access to a remote machine**. **An attacker does not have local access to the system/network** and tries to “hack” their way into the network. It is noticeable from the descriptions above that **DoS acts differently from the other three attacks**: **DoS attempts to shut down a system to stop traffic flow altogether**, whereas the **other three attempt to quietly infiltrate the system undetected**.
###Code
# changing attack labels to their respective attack class
def change_label(df):
df.label.replace(['apache2','back','land','neptune','mailbomb','pod','processtable','smurf','teardrop','udpstorm','worm'],'Dos',inplace=True)
df.label.replace(['ftp_write','guess_passwd','httptunnel','imap','multihop','named','phf','sendmail','snmpgetattack','snmpguess','spy','warezclient','warezmaster','xlock','xsnoop'],'R2L',inplace=True)
df.label.replace(['ipsweep','mscan','nmap','portsweep','saint','satan'],'Probe',inplace=True)
df.label.replace(['buffer_overflow','loadmodule','perl','ps','rootkit','sqlattack','xterm'],'U2R',inplace=True)
change_label(train_data)
# distribution of attack classes
train_data.label.value_counts()
###Output
_____no_output_____
###Markdown
Protocol **Data Preparation** * For Binary and Multi-class Classification* **Label encoding** with One-Hot Binary Classification* bin_data_train -> ready dataframe for Modeling* numeric_bin_data -> just numeric features for feature selection
###Code
train_data
# changing attack labels into two categories 'normal' and 'abnormal'
bin_label = pd.DataFrame(train_data.label.map(lambda x:'normal' if x=='normal' else 'abnormal'))
# creating a dataframe with binary labels (normal,abnormal)
bin_data = train_data.copy()
bin_data['label'] = bin_label
bin_data
# label encoding (0,1) binary labels (abnormal,normal)
le1 = preprocessing.LabelEncoder()
enc_label = bin_label.apply(le1.fit_transform)
bin_data['intrusion'] = enc_label
bin_data
# one-hot-encoding the categorical attributes
#numeric_bin_data = pd.get_dummies(bin_data,columns=['label'],prefix="",prefix_sep="")
bin_data = pd.get_dummies(train_data,columns=['protocol_type','service','flag'],prefix="",prefix_sep="")
#bin_data['label'] = bin_label
bin_data['intrusion'] =enc_label
bin_data
# bin_data_train is the dataset that is ready for modeling ... X = bin_data_train[:, :122] / y = bin_data_train[:, -1]
bin_data_train = bin_data.copy()
bin_data_train.drop(labels= [ 'label'], axis=1, inplace=True)
bin_data_train
# this dataset includes only the numeric features with the binary label
# created for feature selection
# creating a dataframe with only the numeric attributes of the binary-class dataset and the encoded label attribute
numeric_col = train_data.select_dtypes(include='number').columns
numeric_bin_data = train_data[numeric_col]
numeric_bin_data['intrusion'] = bin_data['intrusion']
numeric_bin_data
###Output
_____no_output_____
###Markdown
Multi-class Classification* multi_data_train -> ready dataframe for Modeling* numeric_multi_data -> just numeric features for feature selection
###Code
# creating a dataframe with multi-class labels (Dos,Probe,R2L,U2R,normal)
multi_data = train_data.copy()
multi_label = pd.DataFrame(multi_data.label)
multi_label
# label encoding (0,1,2,3,4) multi-class labels (Dos,normal,Probe,R2L,U2R)
le2 = preprocessing.LabelEncoder()
enc_label = multi_label.apply(le2.fit_transform)
multi_data['intrusion'] = enc_label
#y_mul = multi_data['intrusion']
multi_data
# one-hot-encoding attack label
multi_data = pd.get_dummies(multi_data,columns=['protocol_type','service','flag','label'],prefix="",prefix_sep="")
multi_data['label'] = multi_label
multi_data
# multi_data_train is the dataset that is ready for modeling ... X = multi_data_train[:, :122] / y = multi_data_train[:, -5:]
multi_data_train = multi_data.copy()
multi_data_train.drop(labels= [ 'label', 'intrusion' ], axis=1, inplace=True)
multi_data_train
# this dataset includes only the numeric features with the multi-class labels
# created for feature selection
numeric_multi_data = train_data[numeric_col]
numeric_multi_data['label'] = multi_label
numeric_multi_data = pd.get_dummies(numeric_multi_data,columns=['label'],prefix="",prefix_sep="")
numeric_multi_data
# num_dataset_bin includes only the numeric features with the binary label
num_dataset_bin = numeric_bin_data.copy()
y_train_num_bin= num_dataset_bin[['intrusion']]
X_train_num_bin= num_dataset_bin.drop(labels=['intrusion'], axis=1)
print('X_train has shape:',X_train_num_bin.shape,'\ny_train has shape:',y_train_num_bin.shape)
# dataset_bin includes the whole feature set (with encoded non-numeric features like service, protocol and flag) with the binary label
dataset_bin = bin_data_train.copy()
y_train_bin= dataset_bin[['intrusion']]
X_train_bin= dataset_bin.drop(labels=['intrusion'], axis=1)
print('X_train has shape:',X_train_bin.shape,'\ny_train has shape:',y_train_bin.shape)
# num_dataset_multi includes only the numeric features with the multi-class labels
num_dataset_multi = numeric_multi_data.copy()
y_train_num_multi= num_dataset_multi.loc[:, 'Dos':]
X_train_num_multi= num_dataset_multi.loc[:, :'dst_host_srv_rerror_rate']
print('X_train has shape:',X_train_num_multi.shape,'\ny_train has shape:',y_train_num_multi.shape)
# dataset_multi includes the whole feature set (with encoded non-numeric features like service, protocol and flag) with the multi-class labels
dataset_multi = multi_data_train.copy()
y_train_multi= dataset_multi.loc[:, 'Dos':]
X_train_multi= dataset_multi.loc[:, :'SH']
print('X_train has shape:',X_train_multi.shape,'\ny_train has shape:',y_train_multi.shape)
###Output
X_train has shape: (125973, 122)
y_train has shape: (125973, 5)
###Markdown
Pearson correlation for the binary-class dataset* feature selection from numeric_bin_data* pearson_bin_dataset is a binary-class dataset based on the Pearson correlation between the numeric features and the binary class ( Intrusion -> yes(1)/no(0))
###Code
# finding the attributes which have more than 0.5 correlation with encoded attack label attribute
corr= numeric_bin_data.corr()
corr_y = abs(corr['intrusion'])
highest_corr = corr_y[corr_y >0.5]
highest_corr.sort_values(ascending=True)
highest_corr_columns= highest_corr.index
plt.figure(figsize=(15,10))
g=sns.heatmap(bin_data[highest_corr.index].corr(),annot=True,cmap="RdYlGn")
# selecting the attributes with the highest correlation to the encoded attack label
pearson_bin_dataset = numeric_bin_data[highest_corr_columns]
pearson_bin_dataset
###Output
_____no_output_____
###Markdown
Pearson correlation for the multi-class dataset* feature selection from numeric_multi_data* pearson_multi_dataset is a multi-class dataset based on the Pearson correlation between the numeric features and the multi-class labels ( Types of Attacks)
###Code
# finding the attributes which have more than 0.5 correlation with encoded attack label attribute
corr = numeric_multi_data.corr()
corr_y = abs(corr[y_train_num_multi.columns])
highest_corr = corr_y[corr_y >0.5]
highest_corr
Dos_features= highest_corr[highest_corr.Dos.notnull()].index
Probe_features= highest_corr[highest_corr.Probe.notnull()].index
R2L_features= highest_corr[highest_corr.R2L.notnull()].index
U2R_features= highest_corr[highest_corr.U2R.notnull()].index
normal_features= highest_corr[highest_corr.normal.notnull()].index
Dos_features.intersection(Probe_features)
pearson_multi_features = list(set(Dos_features.union(normal_features).union(Probe_features).union(R2L_features).union(U2R_features)))
for lab in y_train_num_multi.columns:
pearson_multi_features.remove(lab)
pearson_multi_features
# then joining encoded, one-hot-encoded, and original attack label attribute
pearson_multi_dataset = numeric_multi_data[pearson_multi_features]
pearson_multi_dataset
pearson_multi_dataset = pearson_multi_dataset.join(y_train_num_multi)
pearson_multi_dataset
###Output
_____no_output_____
###Markdown
Chi Squareused for feature selection (binary classification) 1. Define Hypothesis.* Null Hypothesis (H0): The two variables are independent.* Alternate Hypothesis (H1): The two variables are not independent.2. Build a Contingency table.* number of samples in the dataset that use the tcp protocol type and led to intrusion* number of samples in the dataset that use the tcp protocol type and led to a normal situation* Degrees of freedom for a contingency table are given as (r-1) * (c-1), where r, c are the numbers of rows and columns.3. Find the expected values.* Based on the null hypothesis that the two variables are independent, if A and B are two independent events then P(A^B) = P(A)*P(B).* Let's calculate the expected value for the first cell, i.e. the samples that use the tcp protocol type and led to intrusion: E1 = n*p.4. Calculate the Chi-Square statistic.* O - Observed Values / E - Expected Values* The statistic is the sum over all cells of (O-E)^2/E.5. Accept or Reject the Null Hypothesis by comparing the statistic (or its p-value) with the chosen significance level.
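As a quick illustration of these five steps, the statistic can be computed by hand and cross-checked with SciPy. The 2x2 contingency table below uses made-up counts for demonstration only, not the NSL-KDD data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2x2 contingency table: rows = protocol (tcp / not tcp),
# columns = outcome (intrusion / normal); counts are illustrative only
observed = np.array([[300, 700],
                     [200, 800]])

# expected counts under independence: E_ij = row_total_i * col_total_j / n
n = observed.sum()
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n

# chi-square statistic: sum over all cells of (O - E)^2 / E
chi2_stat = ((observed - expected) ** 2 / expected).sum()

# same computation via SciPy (correction=False matches the plain formula above)
chi2_sp, p_value, dof, _ = chi2_contingency(observed, correction=False)
print(chi2_stat, chi2_sp, p_value, dof)  # reject H0 when p_value < significance level
```

The sklearn `chi2` scorer used in the following cells applies the same idea feature-by-feature, scoring each feature against the `intrusion` label.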
###Code
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
#feature selection of all features (numeric and categorical features)
chi_scores = chi2(X_train_bin,y_train_bin)
chi_scores
p_values = pd.Series(chi_scores[1],index = X_train_bin.columns)
p_values.sort_values(ascending = False , inplace = True)
#select 20 best features
p_values = p_values[:20]
p_values.plot.bar()
#feature selection of numeric features
bestfeatures = SelectKBest(score_func=chi2, k=10)
fit = bestfeatures.fit(X_train_num_bin,y_train_num_bin)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X_train_num_bin.columns)
#concat two dataframes for better visualization
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score'] #naming the dataframe columns
featureScores
data_chi2_10best = pd.DataFrame(featureScores.nlargest(10,'Score')) #print 10 best features
data_chi2_10best
data_chi2_10best = list(data_chi2_10best['Specs'])
Chi2_dataset = X_train_num_bin[data_chi2_10best]
Chi2_dataset
###Output
_____no_output_____
###Markdown
Tree Based Classifiers
###Code
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt
model = ExtraTreesClassifier()
model.fit(X_train_multi,y_train_multi)
print(model.feature_importances_) #use inbuilt class feature_importances of tree based classifiers
#plot graph of feature importances for better visualization
feat_importances = pd.Series(model.feature_importances_, index=X_train_multi.columns)
feat_importances.nlargest(10).plot(kind='barh')
plt.show()
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt
model = ExtraTreesClassifier()
model.fit(X_train_num_multi,y_train_num_multi)
#plot graph of feature importances for better visualization
feat_importances = pd.Series(model.feature_importances_, index=X_train_num_multi.columns)
feat_importances.nlargest(10).plot(kind='barh')
plt.show()
###Output
_____no_output_____
###Markdown
**Data Standardization**
###Code
# selecting numeric attributes columns from data
numeric_col = train_data.select_dtypes(include='number').columns
# using standard scaler for normalizing
std_scaler = StandardScaler()
def standardization(df,col):
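    # z-score each numeric column in place: (x - mean) / standard deviation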
for i in col:
arr = df[i]
arr = np.array(arr)
df[i] = std_scaler.fit_transform(arr.reshape(len(arr),1))
return df
# data before normalization
train_data
# calling the standardization() function
data = standardization(train_data.copy(),numeric_col)
# data after normalization
data.head()
# selecting categorical data attributes
cat_col = ['protocol_type','service','flag']
# creating a dataframe with only categorical attributes
categorical = data[cat_col]
categorical.head()
# one-hot-encoding categorical attributes using pandas.get_dummies() function
categorical = pd.get_dummies(categorical,columns=cat_col)
categorical.head()
Normalized_dataset = pd.concat([categorical, data],axis=1)
Normalized_dataset.drop(labels=cat_col, axis=1, inplace=True)
#Normalized_dataset = pd.get_dummies(Normalized_dataset, columns=Normalized_dataset['label'])
#Normalized_dataset
X = Normalized_dataset.loc[:,:'dst_host_srv_rerror_rate']
y_bin = numeric_bin_data['intrusion']
#multi_data.loc[multi_data['label']=='normal','intrusion']
y_multi = multi_data['label']
from sklearn.preprocessing import LabelBinarizer
y_multi = LabelBinarizer().fit_transform(y_multi)
y_multi
###Output
_____no_output_____
###Markdown
**Classification*** Binary Classification* Multi-class Classification Binary Classification
###Code
# splitting the dataset 80% for training and 20% testing
X_train, X_test, y_train, y_test = train_test_split(X,y_bin, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Multi-Layer Perceptron Classifier (Binary Classification)
###Code
mlp = Sequential() # initializing model
# input layer and first layer with 50 neurons
mlp.add(Dense(units=50, input_dim=X_train.shape[1], activation='relu'))
# output layer with sigmoid activation for binary classification
mlp.add(Dense(1,activation='sigmoid'))
# defining loss function, optimizer, metrics and then compiling model
mlp.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# training the model on training dataset
history = mlp.fit(X_train, y_train, epochs=100, batch_size=5000,validation_split=0.2)
# evaluating the model on the testing dataset
test_results = mlp.evaluate(X_test, y_test, verbose=1)
print(f'Test results - Loss: {test_results[0]} - Accuracy: {test_results[1]*100}%')
# Plot of accuracy vs epoch of train and test dataset
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title("Plot of accuracy vs epoch for train and test dataset")
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
# Plot of loss vs epoch of train and test dataset
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Plot of loss vs epoch for train and test dataset")
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
y_pred = mlp.predict(X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)  # avoid shadowing the imported auc function
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr, label='Keras (area = {:.3f})'.format(roc_auc))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
y_classes = (mlp.predict(X_test)>0.5).astype('int32')
print("Recall Score - ",recall_score(y_test,y_classes))
print("F1 Score - ",f1_score(y_test,y_classes))
print("Precision Score - ",precision_score(y_test,y_classes))
###Output
Recall Score - 0.9926985546118313
F1 Score - 0.9933276176985872
Precision Score - 0.9939574785527788
###Markdown
Multi-class Classification
###Code
# splitting the dataset 80% for training and 20% testing
X_train, X_test, y_train, y_test = train_test_split(X,y_multi, test_size=0.20, random_state=42)
mlp2 = Sequential() # initializing model
# input layer and first layer with 50 neurons
mlp2.add(Dense(units=50, input_dim=X_train.shape[1], activation='relu'))
# output layer with softmax activation
mlp2.add(Dense(units=5,activation='softmax'))
# defining loss function, optimizer, metrics and then compiling model
mlp2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# summary of model layers
mlp2.summary()
# training the model on training dataset
history = mlp2.fit(X_train, y_train, epochs=100, batch_size=5000,validation_split=0.2)
# evaluating the model on the testing dataset
test_results = mlp2.evaluate(X_test, y_test, verbose=1)
print(f'Test results - Loss: {test_results[0]} - Accuracy: {test_results[1]*100}%')
# Plot of accuracy vs epoch for train and test dataset
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title("Plot of accuracy vs epoch for train and test dataset")
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()
# Plot of loss vs epoch for train and test dataset
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Plot of loss vs epoch for train and test dataset")
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
###Output
_____no_output_____ |
Udacity project 1/SageMaker Project.ipynb | ###Markdown
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General OutlineRecall the general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app. Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
--2020-05-04 06:42:03-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 22.8MB/s in 4.3s
2020-05-04 06:42:07 (18.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(train_X[100])
print(train_y[100])
###Output
Words can't describe how bad this movie is. I can't explain it by writing only. You have too see it for yourself to get at grip of how horrible a movie really can be. Not that I recommend you to do that. There are so many clichés, mistakes (and all other negative things you can imagine) here that will just make you cry. To start with the technical first, there are a LOT of mistakes regarding the airplane. I won't list them here, but just mention the coloring of the plane. They didn't even manage to show an airliner in the colors of a fictional airline, but instead used a 747 painted in the original Boeing livery. Very bad. The plot is stupid and has been done many times before, only much, much better. There are so many ridiculous moments here that i lost count of it really early. Also, I was on the bad guys' side all the time in the movie, because the good guys were so stupid. "Executive Decision" should without a doubt be you're choice over this one, even the "Turbulence"-movies are better. In fact, every other movie in the world is better than this one.
0
###Markdown
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Keep only alphanumeric characters and convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Reduce each word to its Porter stem (reuse the stemmer created above)
return words
###Output
_____no_output_____
###Markdown
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
###Output
_____no_output_____
###Markdown
**Question:** Above we mentioned that the `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer: Besides removing the html tags, it converts the text to lower case, strips out all non-alphanumeric characters, splits the text into individual words, and removes English stopwords. It then applies Porter stemming, so the resulting tokens are often stems rather than actual words (suffixes such as -ed or -ing are removed). The output is a list of stemmed words from the review, which is what the later encoding steps operate on.** The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
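As a quick, hedged illustration of the stemming behaviour described in the answer above (assuming `nltk` is installed, as it is earlier in this notebook):

```python
# Not part of the project pipeline -- just a peek at what the Porter stemmer does to a few words.
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ['entertained', 'entertaining', 'movies', 'acting']])
# Expected output (roughly): ['entertain', 'entertain', 'movi', 'act']
```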
###Code
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
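Before building the dictionary, here is a small, hedged sketch (using a made-up mini vocabulary, not the real `word_dict`) of the encoding we are working towards -- known words map to integers, unknown words map to `1`, and the rest of the fixed-length slot is padded with `0`:

```python
# Toy illustration only: a hypothetical 3-word vocabulary and a very short 'review'.
toy_word_dict = {'movi': 2, 'great': 3, 'act': 4}      # made-up mapping for this sketch
toy_review = ['movi', 'great', 'unseen', 'act']         # 'unseen' is not in the vocabulary

pad = 8                                                  # the real notebook uses 500
encoded = [0] * pad                                      # 0 = 'no word' padding
for i, w in enumerate(toy_review[:pad]):
    encoded[i] = toy_word_dict.get(w, 1)                 # 1 = infrequent / unknown word

print(encoded, len(toy_review))                          # [2, 3, 1, 4, 0, 0, 0, 0] 4
```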
###Code
import numpy as np
import operator
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for d in data:
for word in d:
if word in word_count:
word_count[word]+=1
else:
word_count[word]=1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
    sorted_words = [word for word, count in sorted(word_count.items(), key=operator.itemgetter(1), reverse=True)]
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
###Output
_____no_output_____
###Markdown
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** The five most frequently appearing (tokenized) words in the training set are `['movi', 'film', 'one', 'like', 'time']`. It makes sense that these appear so often: every review is about a movie or film, and words such as 'one', 'like', and 'time' are common in everyday written English.
###Code
# TODO: Use this space to determine the five most frequently appearing words in the training set.
list(word_dict.keys())[:5]
###Output
_____no_output_____
###Markdown
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
###Code
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
###Output
_____no_output_____
###Markdown
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
###Code
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
###Output
_____no_output_____
###Markdown
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?
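For example, a hedged peek at the first few integer word ids of one converted review (and its recorded pre-padding length) might look like this:

```python
# Inspect one processed review: leading word ids (1 = infrequent word, 0 = padding)
# and the number of real, non-padding words it contained before truncation/padding.
print(train_X[109][:20])
print(train_X_len[109])
```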
###Code
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
len(train_X[109])
###Output
_____no_output_____
###Markdown
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** This should not be a problem: it makes sense to apply the same preprocessing to both the training and testing sets so that the data the model is trained on and the data it is evaluated on are represented consistently. The one caveat is that `word_dict` is built from the training data only, so test-set words that never appeared in training are simply mapped to the 'infrequent' label. Step 3: Upload the data to S3As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locallyIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
###Code
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
###Code
!pygmentize train/model.py
###Output
[34mimport[39;49;00m [04m[36mtorch.nn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mclass[39;49;00m [04m[32mLSTMClassifier[39;49;00m(nn.Module):
[33m"""[39;49;00m
[33m This is the simple RNN model we will be using to perform Sentiment Analysis.[39;49;00m
[33m """[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, embedding_dim, hidden_dim, vocab_size):
[33m"""[39;49;00m
[33m Initialize the model by settingg up the various layers.[39;49;00m
[33m """[39;49;00m
[36msuper[39;49;00m(LSTMClassifier, [36mself[39;49;00m).[32m__init__[39;49;00m()
[36mself[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=[34m0[39;49;00m)
[36mself[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)
[36mself[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=[34m1[39;49;00m)
[36mself[39;49;00m.sig = nn.Sigmoid()
[36mself[39;49;00m.word_dict = [36mNone[39;49;00m
[34mdef[39;49;00m [32mforward[39;49;00m([36mself[39;49;00m, x):
[33m"""[39;49;00m
[33m Perform a forward pass of our model on some input.[39;49;00m
[33m """[39;49;00m
x = x.t()
lengths = x[[34m0[39;49;00m,:]
reviews = x[[34m1[39;49;00m:,:]
embeds = [36mself[39;49;00m.embedding(reviews)
lstm_out, _ = [36mself[39;49;00m.lstm(embeds)
out = [36mself[39;49;00m.dense(lstm_out)
out = out[lengths - [34m1[39;49;00m, [36mrange[39;49;00m([36mlen[39;49;00m(lengths))]
[34mreturn[39;49;00m [36mself[39;49;00m.sig(out.squeeze())
###Markdown
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
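Before wiring up the training loop, a quick, hedged sanity check of the model interface (dimensions chosen arbitrarily here) can confirm that a batch of rows shaped as `[length, review[500]]` produces one sentiment score per review:

```python
# Rough interface check, not part of the project code: run a dummy batch through the classifier.
import torch
from train.model import LSTMClassifier

toy_model = LSTMClassifier(embedding_dim=8, hidden_dim=16, vocab_size=5000)

dummy = torch.zeros(3, 501, dtype=torch.long)        # 3 fake reviews: [length, 500 word ids]
dummy[:, 0] = 4                                       # pretend each review has 4 real words
dummy[:, 1:5] = torch.randint(2, 5000, (3, 4))        # random 'word' ids in 2..4999

with torch.no_grad():
    out = toy_model(dummy)
print(out.shape)                                      # expected: torch.Size([3]) -- one score per review
```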
###Code
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
###Output
_____no_output_____
###Markdown
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
###Code
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
            optimizer.zero_grad()               # clear gradients accumulated from the previous batch
            output = model(batch_X)             # forward pass
            loss = loss_fn(output, batch_y)     # binary cross-entropy against the true labels
            loss.backward()                     # backpropagate
            optimizer.step()                    # update the model parameters
            total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
###Output
_____no_output_____
###Markdown
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
###Code
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
###Output
Epoch: 1, BCELoss: 0.6949022769927978
Epoch: 2, BCELoss: 0.6819980144500732
Epoch: 3, BCELoss: 0.6704551339149475
Epoch: 4, BCELoss: 0.657850193977356
Epoch: 5, BCELoss: 0.6434584379196167
###Markdown
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
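For orientation, here is a minimal, hedged sketch of the argument-parsing pattern such a `train.py` typically uses -- the hyperparameter names mirror the ones passed to the estimator below, and `SM_MODEL_DIR` / `SM_CHANNEL_TRAINING` are the standard SageMaker container environment variables (they also appear in the training log further down):

```python
# Sketch of a SageMaker-style entry point's argument handling (not a verbatim copy of train.py).
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters passed by the estimator arrive as command line arguments.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--hidden_dim', type=int, default=100)

    # SageMaker exposes the model output directory and data channels via environment variables.
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', '.'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', '.'))

    args = parser.parse_args()
    print(args.epochs, args.hidden_dim, args.data_dir)
```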
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
###Output
2020-05-06 11:30:13 Starting - Starting the training job...
2020-05-06 11:30:16 Starting - Launching requested ML instances......
2020-05-06 11:31:24 Starting - Preparing the instances for training......
2020-05-06 11:32:42 Downloading - Downloading input data......
2020-05-06 11:33:22 Training - Downloading the training image.[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2020-05-06 11:33:48,232 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2020-05-06 11:33:48,257 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2020-05-06 11:33:51,278 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2020-05-06 11:33:51,497 sagemaker-containers INFO Module train does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2020-05-06 11:33:51,497 sagemaker-containers INFO Generating setup.cfg[0m
[34m2020-05-06 11:33:51,497 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2020-05-06 11:33:51,498 sagemaker-containers INFO Installing module with the following command:[0m
[34m/usr/bin/python -m pip install -U . -r requirements.txt[0m
[34mProcessing /opt/ml/code[0m
[34mCollecting pandas (from -r requirements.txt (line 1))[0m
[34m Downloading https://files.pythonhosted.org/packages/74/24/0cdbf8907e1e3bc5a8da03345c23cbed7044330bb8f73bb12e711a640a00/pandas-0.24.2-cp35-cp35m-manylinux1_x86_64.whl (10.0MB)[0m
[34mCollecting numpy (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/38/92/fa5295d9755c7876cb8490eab866e1780154033fa45978d9cf74ffbd4c68/numpy-1.18.4-cp35-cp35m-manylinux1_x86_64.whl (20.0MB)[0m
[34mCollecting nltk (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/92/75/ce35194d8e3022203cca0d2f896dbb88689f9b3fce8e9f9cff942913519d/nltk-3.5.zip (1.4MB)[0m
[34mCollecting beautifulsoup4 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/e8/b5/7bb03a696f2c9b7af792a8f51b82974e51c268f15e925fc834876a4efa0b/beautifulsoup4-4.9.0-py3-none-any.whl (109kB)[0m
[34mCollecting html5lib (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/a5/62/bbd2be0e7943ec8504b517e62bab011b4946e1258842bc159e5dfde15b96/html5lib-1.0.1-py2.py3-none-any.whl (117kB)[0m
[34mCollecting pytz>=2011k (from pandas->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/4f/a4/879454d49688e2fad93e59d7d4efda580b783c745fd2ec2a3adf87b0808d/pytz-2020.1-py2.py3-none-any.whl (510kB)[0m
[34mRequirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (2.7.5)[0m
[34mRequirement already satisfied, skipping upgrade: click in /usr/local/lib/python3.5/dist-packages (from nltk->-r requirements.txt (line 3)) (7.0)[0m
[34mCollecting joblib (from nltk->-r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/28/5c/cf6a2b65a321c4a209efcdf64c2689efae2cb62661f8f6f4bb28547cf1bf/joblib-0.14.1-py2.py3-none-any.whl (294kB)[0m
[34mCollecting regex (from nltk->-r requirements.txt (line 3))[0m
[34m Downloading https://files.pythonhosted.org/packages/4c/e7/eee73c42c1193fecc0e91361a163cbb8dfbea62c3db7618ad986e5b43a14/regex-2020.4.4.tar.gz (695kB)[0m
[34mCollecting tqdm (from nltk->-r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/c9/40/058b12e8ba10e35f89c9b1fdfc2d4c7f8c05947df2d5eb3c7b258019fda0/tqdm-4.46.0-py2.py3-none-any.whl (63kB)[0m
[34mCollecting soupsieve>1.2 (from beautifulsoup4->-r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/05/cf/ea245e52f55823f19992447b008bcbb7f78efc5960d77f6c34b5b45b36dd/soupsieve-2.0-py2.py3-none-any.whl[0m
[34mCollecting webencodings (from html5lib->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl[0m
[34mRequirement already satisfied, skipping upgrade: six>=1.9 in /usr/local/lib/python3.5/dist-packages (from html5lib->-r requirements.txt (line 5)) (1.11.0)[0m
[34mBuilding wheels for collected packages: nltk, train, regex
Running setup.py bdist_wheel for nltk: started[0m
[34m Running setup.py bdist_wheel for nltk: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/ae/8c/3f/b1fe0ba04555b08b57ab52ab7f86023639a526d8bc8d384306
Running setup.py bdist_wheel for train: started
Running setup.py bdist_wheel for train: finished with status 'done'
Stored in directory: /tmp/pip-ephem-wheel-cache-ij6y9z7e/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3
Running setup.py bdist_wheel for regex: started[0m
2020-05-06 11:33:47 Training - Training image download completed. Training in progress.[34m Running setup.py bdist_wheel for regex: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e6/9b/ae/2972da29cc7759b71dee015813b7c6931917d6a51e64ed5e79[0m
[34mSuccessfully built nltk train regex[0m
[34mInstalling collected packages: pytz, numpy, pandas, joblib, regex, tqdm, nltk, soupsieve, beautifulsoup4, webencodings, html5lib, train
Found existing installation: numpy 1.15.4[0m
[34m Uninstalling numpy-1.15.4:
Successfully uninstalled numpy-1.15.4[0m
[34mSuccessfully installed beautifulsoup4-4.9.0 html5lib-1.0.1 joblib-0.14.1 nltk-3.5 numpy-1.18.4 pandas-0.24.2 pytz-2020.1 regex-2020.4.4 soupsieve-2.0 tqdm-4.46.0 train-1.0.0 webencodings-0.5.1[0m
[34mYou are using pip version 18.1, however version 20.1 is available.[0m
[34mYou should consider upgrading via the 'pip install --upgrade pip' command.[0m
[34m2020-05-06 11:34:13,813 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"model_dir": "/opt/ml/model",
"network_interface_name": "eth0",
"resource_config": {
"hosts": [
"algo-1"
],
"current_host": "algo-1",
"network_interface_name": "eth0"
},
"input_dir": "/opt/ml/input",
"log_level": 20,
"additional_framework_parameters": {},
"num_gpus": 1,
"hyperparameters": {
"epochs": 10,
"hidden_dim": 200
},
"user_entry_point": "train.py",
"module_name": "train",
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"framework_module": "sagemaker_pytorch_container.training:main",
"num_cpus": 4,
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_config_dir": "/opt/ml/input/config",
"current_host": "algo-1",
"output_data_dir": "/opt/ml/output/data",
"job_name": "sagemaker-pytorch-2020-05-06-11-30-13-284",
"module_dir": "s3://sagemaker-us-east-1-315994857526/sagemaker-pytorch-2020-05-06-11-30-13-284/source/sourcedir.tar.gz",
"hosts": [
"algo-1"
],
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HPS={"epochs":10,"hidden_dim":200}[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-315994857526/sagemaker-pytorch-2020-05-06-11-30-13-284/source/sourcedir.tar.gz[0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_HP_HIDDEN_DIM=200[0m
[34mSM_MODULE_NAME=train[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_NUM_GPUS=1[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[34mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages[0m
[34mSM_HP_EPOCHS=10[0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_USER_ARGS=["--epochs","10","--hidden_dim","200"][0m
[34mSM_CHANNELS=["training"][0m
[34mSM_USER_ENTRY_POINT=train.py[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"epochs":10,"hidden_dim":200},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","job_name":"sagemaker-pytorch-2020-05-06-11-30-13-284","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-315994857526/sagemaker-pytorch-2020-05-06-11-30-13-284/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"eth0","num_cpus":4,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train.py"}
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python -m train --epochs 10 --hidden_dim 200
[0m
[34mUsing device cuda.[0m
[34mGet train data loader.[0m
###Markdown
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.In other words **If you are no longer using a deployed endpoint, shut it down!****TODO:** Deploy the trained model.
###Code
# TODO: Deploy the trained model
estimator_predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.p2.xlarge')
###Output
-----------------!
###Markdown
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
###Code
pd.DataFrame(test_X).head()
pd.DataFrame(test_X_len).head()
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
test_X.head()
# We split the data into chunks and send each chunk seperately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, estimator_predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** The accuracy obtained with the XGBoost model was similar to the accuracy of this RNN (LSTM) model. The two could perform differently because XGBoost works from a bag-of-words representation and ignores word order, while the LSTM reads the review as a sequence; on this dataset, however, both approaches give comparable results, so either is a reasonable choice for sentiment analysis. (TODO) More testingWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
###Code
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
###Output
_____no_output_____
###Markdown
The question we now need to answer is, how do we send this review to our model? Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order to process the review we will need to repeat these two steps.**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
###Code
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_words = review_to_words(test_review)        # clean, tokenize and stem the raw review string
converted_review, converted_len = convert_and_pad(word_dict, test_words)
test_data = np.array([[converted_len] + converted_review])   # shape (1, 501): [length, review[500]]
###Output
_____no_output_____
###Markdown
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
###Code
estimator_predictor.predict(test_data)
###Output
_____no_output_____
###Markdown
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
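The next cell deletes the endpoint through the estimator object. As a hedged aside, if you ever lose track of the Python objects, running endpoints can also be listed (and deleted) with the low-level `boto3` SageMaker client:

```python
# Housekeeping sketch: list any endpoints still running in this account/region.
import boto3

sm_client = boto3.client('sagemaker')
for ep in sm_client.list_endpoints()['Endpoints']:
    print(ep['EndpointName'], ep['EndpointStatus'])

# sm_client.delete_endpoint(EndpointName='<endpoint-name>')   # uncomment to remove one explicitly
```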
###Code
estimator.delete_endpoint()
###Output
_____no_output_____
###Markdown
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference codeBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.
###Code
!pygmentize serve/predict.py
###Output
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mpickle[39;49;00m
[34mimport[39;49;00m [04m[36msys[39;49;00m
[34mimport[39;49;00m [04m[36msagemaker_containers[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.nn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.optim[39;49;00m [34mas[39;49;00m [04m[36moptim[39;49;00m
[34mimport[39;49;00m [04m[36mtorch.utils.data[39;49;00m
[34mfrom[39;49;00m [04m[36mmodel[39;49;00m [34mimport[39;49;00m LSTMClassifier
[34mfrom[39;49;00m [04m[36mutils[39;49;00m [34mimport[39;49;00m review_to_words, convert_and_pad
[34mdef[39;49;00m [32mmodel_fn[39;49;00m(model_dir):
[33m"""Load the PyTorch model from the `model_dir` directory."""[39;49;00m
[34mprint[39;49;00m([33m"[39;49;00m[33mLoading model.[39;49;00m[33m"[39;49;00m)
[37m# First, load the parameters used to create the model.[39;49;00m
model_info = {}
model_info_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel_info.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_info_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model_info = torch.load(f)
[34mprint[39;49;00m([33m"[39;49;00m[33mmodel_info: {}[39;49;00m[33m"[39;49;00m.format(model_info))
[37m# Determine the device and construct the model.[39;49;00m
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
model = LSTMClassifier(model_info[[33m'[39;49;00m[33membedding_dim[39;49;00m[33m'[39;49;00m], model_info[[33m'[39;49;00m[33mhidden_dim[39;49;00m[33m'[39;49;00m], model_info[[33m'[39;49;00m[33mvocab_size[39;49;00m[33m'[39;49;00m])
[37m# Load the store model parameters.[39;49;00m
model_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model.load_state_dict(torch.load(f))
[37m# Load the saved word_dict.[39;49;00m
word_dict_path = os.path.join(model_dir, [33m'[39;49;00m[33mword_dict.pkl[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(word_dict_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model.word_dict = pickle.load(f)
model.to(device).eval()
[34mprint[39;49;00m([33m"[39;49;00m[33mDone loading model.[39;49;00m[33m"[39;49;00m)
[34mreturn[39;49;00m model
[34mdef[39;49;00m [32minput_fn[39;49;00m(serialized_input_data, content_type):
[34mprint[39;49;00m([33m'[39;49;00m[33mDeserializing the input data.[39;49;00m[33m'[39;49;00m)
[34mif[39;49;00m content_type == [33m'[39;49;00m[33mtext/plain[39;49;00m[33m'[39;49;00m:
data = serialized_input_data.decode([33m'[39;49;00m[33mutf-8[39;49;00m[33m'[39;49;00m)
[34mreturn[39;49;00m data
[34mraise[39;49;00m [36mException[39;49;00m([33m'[39;49;00m[33mRequested unsupported ContentType in content_type: [39;49;00m[33m'[39;49;00m + content_type)
[34mdef[39;49;00m [32moutput_fn[39;49;00m(prediction_output, accept):
[34mprint[39;49;00m([33m'[39;49;00m[33mSerializing the generated output.[39;49;00m[33m'[39;49;00m)
[34mreturn[39;49;00m [36mstr[39;49;00m(prediction_output)
[34mdef[39;49;00m [32mpredict_fn[39;49;00m(input_data, model):
[34mprint[39;49;00m([33m'[39;49;00m[33mInferring sentiment of input data.[39;49;00m[33m'[39;49;00m)
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
[34mif[39;49;00m model.word_dict [35mis[39;49;00m [36mNone[39;49;00m:
[34mraise[39;49;00m [36mException[39;49;00m([33m'[39;49;00m[33mModel has not been loaded properly, no word_dict.[39;49;00m[33m'[39;49;00m)
[37m# TODO: Process input_data so that it is ready to be sent to our model.[39;49;00m
[37m# You should produce two variables:[39;49;00m
[37m# data_X - A sequence of length 500 which represents the converted review[39;49;00m
[37m# data_len - The length of the review[39;49;00m
data_X ,data_len = convert_and_pad(model.word_dict, review_to_words(input_data))
[37m# Using data_X and data_len we construct an appropriate input tensor. Remember[39;49;00m
[37m# that our model expects input data of the form 'len, review[500]'.[39;49;00m
data_pack = np.hstack((data_len, data_X))
data_pack = data_pack.reshape([34m1[39;49;00m, -[34m1[39;49;00m)
data = torch.from_numpy(data_pack)
data = data.to(device)
[37m# Make sure to put the model into evaluation mode[39;49;00m
model.eval()
[37m# TODO: Compute the result of applying the model to the input data. The variable `result` should[39;49;00m
[37m# be a numpy array which contains a single integer which is either 1 or 0[39;49;00m
[34mwith[39;49;00m torch.no_grad():
output = model.forward(data)
result = np.round(output.numpy())
[34mreturn[39;49;00m result
###Markdown
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the modelNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
###Code
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
---------------!
###Markdown
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.
###Code
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
###Output
_____no_output_____
###Markdown
As an additional test, we can try sending the `test_review` that we looked at earlier.
###Code
predictor.predict(test_review)
###Output
_____no_output_____
###Markdown
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda functionThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda functionSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda functionNow it is time to actually create the Lambda function.Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. 
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**. On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.

```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```

Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
###Code
predictor.endpoint
###Output
_____no_output_____
###Markdown
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.**TODO:** Make sure that you include the edited `index.html` file in your project submission. 
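Before (or instead of) wiring up the html page, the public API can also be exercised directly from Python -- a hedged sketch, where the URL is a placeholder for the Invoke URL you wrote down and the `requests` package is assumed to be available:

```python
# Quick check of the deployed API Gateway endpoint; mirrors what the web page's POST will do.
import requests

api_url = 'https://<your-api-id>.execute-api.<region>.amazonaws.com/prod'   # placeholder Invoke URL
review = 'The simplest pleasures in life are the best, and this film is one of them.'

response = requests.post(api_url,
                         data=review.encode('utf-8'),
                         headers={'Content-Type': 'text/plain'})
print(response.text)   # the Lambda returns the model's output, e.g. a value near 1.0 for a positive review
```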
Now that your web app is working, try playing around with it and see how well it works.**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:** A critic's review of Avengers: Endgame: "It’s an epic cultural event, the kind of thing that transcends traditional film criticism to become a shared experience with fans around the world." The review was predicted to be positive. Delete the endpointRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
ipynb/residency/task_residencies_youtube.ipynb | ###Markdown
YouTube per-CPU CGroup residency analysis=======================This is a run of experiments/run_youtube.py with the cgroups module enabled. This notebook parses and plots the trace.html.
###Code
#!/usr/bin/env python
%pylab inline
import trappy
from trace import Trace
import logging
import pandas as pd
import numpy as np
import os
from conf import LisaLogging
LisaLogging.setup(level=logging.ERROR)
logging.info('#### Setup FTrace')
path_to_html = "/home/joelaf/repo/lisa-aosp/external/lisa/results/YouTube_cgroups/trace.html"
tr = Trace(None, path_to_html,
cgroup_info = {
'cgroups': ['foreground', 'background', 'system-background', 'top-app', 'rt'],
'controller_ids': { 4: 'cpuset', 2: 'schedtune' }
},
events=[ 'sched_switch', 'cgroup_attach_task_devlib', 'cgroup_attach_task', 'sched_process_fork' ],
normalize_time=False)
###Output
_____no_output_____
###Markdown
Total amount of time spent per Cgroup (schedtune)===========================(NaN is the idle task)
###Code
tr.data_frame.cpu_residencies_cgroup('schedtune')
###Output
_____no_output_____
###Markdown
Plot per-CPU breakdown without considering idle time------------------------------------------------------------
###Code
tr.analysis.residency.plot_cgroup('schedtune', idle=False)
###Output
/home/joelaf/anaconda2/lib/python2.7/site-packages/matplotlib/__init__.py:938: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
###Markdown
Plot per-CPU breakdown WITH considering idle time (yellow slice)------------------------------------------------------------
###Code
tr.analysis.residency.plot_cgroup('schedtune', idle=True)
tr.analysis.residency.plot_cgroup('schedtune', cgroup='background')
tr.analysis.residency.plot_cgroup('schedtune', cgroup='root')
tr.analysis.residency.plot_cgroup('schedtune', cgroup='top-app')
tr.analysis.residency.plot_cgroup('schedtune', cgroup='foreground')
###Output
_____no_output_____ |
benchmarking/Time_Calculations.ipynb | ###Markdown
Calculating the run time of the libraries
###Code
# Simple Convex function for 10 iterations
###Output
_____no_output_____
###Markdown
Random strategy runtime computation
###Code
from mango.tuner import Tuner
def get_param_dict():
param_dict = {
'x': range(-5000, 5000)
}
return param_dict
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
def get_conf():
conf = dict()
conf['batch_size'] = 1
conf['num_iteration'] = 1
conf['optimizer'] = "Random"
return conf
param_dict = get_param_dict()
conf= get_conf()
tuner = Tuner(param_dict, objfunc,conf)
num_of_experiments = 1000
import time
total_time = 0.0
for i in range(num_of_experiments):
t1 = time.time()
results = tuner.maximize()
t2 = time.time()
total_time = total_time + (t2 - t1)
(total_time) # total time in seconds for 1000 runs; numerically this equals the average per-run time in milliseconds
###Output
_____no_output_____
###Markdown
Mango strategy
###Code
from mango.tuner import Tuner
def get_param_dict():
param_dict = {
'x': range(-5000, 5000)
}
return param_dict
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
def get_conf():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 1
conf['num_iteration'] = 1
conf['domain_size'] = 5000
return conf
param_dict = get_param_dict()
conf= get_conf()
tuner = Tuner(param_dict, objfunc,conf)
num_of_experiments = 1000
import time
total_time = 0.0
for i in range(num_of_experiments):
t1 = time.time()
results = tuner.maximize()
t2 = time.time()
total_time = total_time + (t2 - t1)
(total_time) # total time in seconds for 1000 runs; numerically this equals the average per-run time in milliseconds
###Output
_____no_output_____
###Markdown
Hyperopt Optimizer
###Code
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
def objective(x):
return {'loss': x ** 2, 'status': STATUS_OK }
num_of_tries = 1000
import time
total_time = 0.0
for i in range(num_of_tries):
t1 = time.time()
trials = Trials()
best_20 = fmin(fn=objective, space=hp.uniform('x', -5000, 5000), algo=tpe.suggest, trials=trials, max_evals=2)
t2 = time.time()
total_time = total_time + (t2 - t1)
(total_time) # total time in seconds for 1000 runs; numerically this equals the average per-run time in milliseconds
###Output
_____no_output_____
###Markdown
Time to evaluate the objective function
###Code
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
arg_list = dict()
arg_list['x'] = 100
num_of_tries = 1000
import time
total_time = 0.0
for i in range(num_of_tries):
t1 = time.time()
res = objfunc([arg_list])
t2 = time.time()
total_time = total_time + (t2 - t1)
(total_time) # total time in seconds for 1000 runs; numerically this equals the average per-run time in milliseconds
###Output
_____no_output_____ |
EastAfrica_Financial_Inclusion.ipynb | ###Markdown
**FINANCIAL INCLUSION IN KENYA, UGANDA, TANZANIA AND RWANDA** QuestionWhat is the relationship between variables and factors that influence the likelihood of an individual to have or use a bank account? Metric For Success-Determine the relationship between variables.-Determine factors that influence individuals having or using a bank account Understanding The ContextFinancial inclusion is a method of offering banking and financial services to individuals. It is essential, especially in Africa, for economic growth and to eventually eradicate poverty by individuals building savings, investing and availing credit. Traditionally, access to bank accounts has been regarded as an indicator of financial inclusion. Therefore, access to bank accounts is an essential contributor to long-term economic growth. Recording Experimental Design.* Loading Dataset.* Data Exploration. 1. Previewing Dataset head and tail. 2. Dataset info. 3. Dataframe description. 4. Dataframe shape. 5. dtypes.* Tidying Dataset. 1. check for duplicates. 2. check for null values and dropping. 3. change column names. 4. check for outliers and dropping them. 5. check for anomalies.* Univariate EDA.* Bivariate EDA.* Multivariate EDA. * Data RelevanceThe data given is relevant to our analysis as it contains all the variables needed for performing the analysis. The dataset contains variables that have an influence on an individual having or using a bank account, such as: Age, Location, Country, Job, Marital Status, etc. Loading Dataset
###Code
#import libraries to be used in our analysis.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
!pip install researchpy
import researchpy as rp
import seaborn as sns
#loading our variable definitions dataset.
url='http://bit.ly/VariableDefinitions'
df_def=pd.read_csv(url)
df_def
#loading our financial dataset
url='http://bit.ly/FinancialDataset'
df_finance=pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Data Exploration
###Code
#previewing the top of our dataset
df_finance.head()
#previewing the bottom of our dataset.
df_finance.tail()
#checking our dataset
df_finance.info()
#checking the number of records in our dataset.
df_finance.shape
#decribing our dataset
df_finance.describe()
#checking our datasets datatypes.
df_finance.dtypes
###Output
_____no_output_____
###Markdown
Tidying The Dataset.
###Code
#cleaning the dataset
#checking for duplicates and null values in our dataset
df_finance.duplicated().sum() #there are no duplicated records in our dataset
#checking for null values in our dataset.
df_finance.isnull().sum()
#dropping null values since most of our dataset is categorical data.
df_finance.dropna(inplace=True)
#previewing our dataset to check for null after dropping
df_finance.isnull().sum()
#changing column names
df_finance.columns=['Country','Year','UniqueID','Bank_Account','Location_Type','Cellphone_Access','Household_Size','Respondent_Age',
'Respondent_Gender','Relation_Head','Marital_Status','Education_Level','Job_type']
#previewing our dataset column names after changing column names
df_finance.columns
#checking for outliers in our dataset
#df_finance.boxplot(column=['Respondent Age','year','household_size'],grid=False)
#checking for outliers using boxplot
df_finance.boxplot()
#dropping outliers
Q1=df_finance.quantile(0.25)
Q3=df_finance.quantile(0.75)
IQR=Q3-Q1
df_finance1=df_finance[~((df_finance<(Q1-1.5*IQR))|(df_finance>(Q3+1.5*IQR))).any(axis=1)]
#checking the number of records in our dataset before removing outliers.
df_finance.shape
#checking for the number of records in our dataset after removing outliers.
df_finance1.shape
#checking for anomalies and removing them
df_finance1['Respondent_Age'].unique() #no anomalies in Respondent_Age
#checking for anomalies in year
q1_Year = df_finance1['Year'].quantile(.25)
q3_Year = df_finance1['Year'].quantile(.75)
iqr_Year = q3_Year - q1_Year
print(iqr_Year)
#checking for anomalies in Age
q1_Age = df_finance1['Respondent_Age'].quantile(.25)
q3_Age = df_finance1['Respondent_Age'].quantile(.75)
iqr_Age = q3_Age - q1_Age
print(iqr_Age)
df_finance1.dtypes
###Output
_____no_output_____
###Markdown
UNIVARIATE MEASURES OF CENTRAL TENDENCY
###Code
#calculating all measures of central tendency on Respondent Age
print(f"Age mean :{df_finance1['Respondent_Age'].mean()}")
print(f"Age median :{df_finance1['Respondent_Age'].median()}")
print(f"Age max :{df_finance1['Respondent_Age'].max()}")
print(f"Age min :{df_finance1['Respondent_Age'].min()}")
print(f"Age mode :{df_finance1['Respondent_Age'].mode()}")
#calculating all measures of central tendency on household size
print(f"household size mean :{df_finance1['Household_Size'].mean()}")
print(f"household size median :{df_finance1['Household_Size'].median()}")
print(f"household size max :{df_finance1['Household_Size'].max()}")
print(f"household size min :{df_finance1['Household_Size'].min()}")
print(f"household size mode :{df_finance1['Household_Size'].mode()}")
###Output
household size mean :3.57984598459846
household size median :3.0
household size max :9.0
household size min :0.0
household size mode :0 2.0
dtype: float64
###Markdown
MEASURES OF DISPERSION
###Code
#calculating standard deviation of Respondent Age
df_finance1['Respondent_Age'].std()
#there is a low standard deviation meaning that the data points are closer to the mean
#calculating standard deviation of household size
df_finance1['Household_Size'].std()
#there is a low standard deviation meaning that the data points are closer to the mean
#calculating Variance of Respondent Age.
df_finance1['Respondent_Age'].var()
#the dataset has very large dissimilarities among its members.
#calculating Variance of Household size.
df_finance1['Household_Size'].var()
#the dataset has no major dissimilarities among its members.
#checking for range
df_finance1_max=df_finance1['Respondent_Age'].max()
df_finance1_min=df_finance1['Respondent_Age'].min()
#the range of respondent age is
df_finance1_max-df_finance1_min
#checking for range
df_finance1_max=df_finance1['Household_Size'].max()
df_finance1_min=df_finance1['Household_Size'].min()
#the range of household size is
df_finance1_max-df_finance1_min
#calculating the quartile ranges
df_finance1["Household_Size"].quantile([0.25,0.5,0.75])
#calculating the quartile ranges
df_finance1["Respondent_Age"].quantile([0.25,0.5,0.75])
#kurtosis
df_finance1["Respondent_Age"].kurt()
#the respondent age is a platykurtic distribution. This is because the kurtosis is less than zero, meaning it is light-tailed.
#kurtosis
df_finance1["Household_Size"].kurt()
#the household size is a platykurtic distribution. This is because the kurtosis is less than zero, meaning it is light-tailed.
#skewness
df_finance1["Respondent_Age"].skew()
#Respondent Age is Positively Skewed
#skewness
df_finance1["Household_Size"].skew()
#the household size is positively skewed.
###Output
_____no_output_____
###Markdown
VISUALIZATION
###Code
#visualization of respondent age using boxplot
sns.boxplot(df_finance1['Respondent_Age'], showmeans = True);
#boxplot of household size
sns.boxplot(df_finance1['Household_Size'], showmeans = True);
df_finance1.columns
#Histogram showing the marital status of those interviewed.
df_finance1['Marital_Status'].hist(figsize=[10,10])
plt.title('Respondents Marital Status')
plt.xlabel('Marital Status')
plt.show();
#histogram showing location of those interviewed.
df_finance1['Location_Type'].hist(bins=5,histtype='bar')
plt.title('Respondents Location')
plt.xlabel('Location');
#a histogram representing respondents access to cell phones.
df_finance1['Cellphone_Access'].hist(bins=5,histtype='bar')
plt.title('Access to Cell Phone')
plt.xlabel('Cellphone');
#a histogram of Respondents gender.
df_finance1['Respondent_Gender'].hist(histtype='bar',bins=5)
plt.title('Respondents Gender')
plt.xlabel('Gender');
#a histogram of countries.
df_finance1['Country'].hist(histtype='bar',bins=10)
plt.title('Countries')
plt.xlabel('Country');
###Output
_____no_output_____
###Markdown
BIVARIATE Numerical and Numerical
###Code
#plotting a scatter graph to show relationship between the numeric variables.
df_finance1.plot(x = 'Respondent_Age', y = 'Household_Size', kind='scatter')
plt.title('Relationship between household size and Respondent Age');
#the scatter plot shows no clear relationship between respondent age and household size
#i.e. an increase in respondent age has little visible effect on household size.
#correlation between Respondent Age and Household Size
df_finance1['Respondent_Age'].corr(df_finance1['Household_Size'])
#the coefficient indicates a weak inverse (negative) correlation between household size and respondent age
###Output
_____no_output_____
###Markdown
Categorical and Categorical
###Code
#Selecting the categorical columns
cat_cols = df_finance1.select_dtypes(include ='object').columns.to_list()
cat_cols
#a stacked graph showing Respondent Gender and the location.
df_finance1.groupby('Bank_Account')['Location_Type'].value_counts().unstack().plot.bar(stacked=True)
#a stacked graph showing marital status and location of respondents.
df_finance1.groupby('Marital_Status')['Bank_Account'].value_counts().unstack().plot.bar(stacked=True)
#individuals who are married/living together have bank accounts more often than the other groups
#gender and phone access stacked graph
df_finance1.groupby('Respondent_Gender')['Cellphone_Access'].value_counts().unstack().plot.bar(stacked=True)
#stacked graph showing bank account and gender
df_finance1.groupby('Bank_Account')['Education_Level'].value_counts().unstack().plot.bar(stacked=True)
#bank account and Country stacked graph
df_finance1.groupby('Bank_Account')['Country'].value_counts().unstack().plot.bar(stacked=True)
###Output
_____no_output_____
###Markdown
Numerical and Categorical
###Code
#Using T test
#
from scipy.stats import ttest_ind
x= df_finance1['Bank_Account'].value_counts()
y= df_finance1['Respondent_Age'].value_counts()
t_statistic, p_value = ttest_ind(x, y)
# Then displaying the t-statistic and p value
print("P Value",p_value)
print(" T statistic",t_statistic)
#
#Using T test
#
from scipy.stats import ttest_ind
x= df_finance1['Bank_Account'].value_counts()
y= df_finance1['Household_Size'].value_counts()
t_statistic, p_value = ttest_ind(x, y)
# Then displaying the t-statistic and p value
print("P Value",p_value)
print(" T statistic",t_statistic)
#
###Output
P Value 0.014125604854719101
T statistic 2.9666879392390975
###Markdown
MULTIVARIATE Discriminant Analysis (LDA)
###Code
# Step 1
#dividing the dataset into features and corresponding labels,
#divide the resultant dataset into training and test sets.
#
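# NOTE (assumption, not part of the original notebook): the `finance` dataframe used below
# is never defined in the cells above; it appears to come from a missing cell that built an
# encoded/expanded copy of df_finance1 (e.g. with pd.get_dummies), with the target label in
# column 3 and the encoded feature columns from index 13 onwards.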
X = finance.iloc[:, 13:].values
y = finance.iloc[:, 3].values
X
y
# Step 2:
# divide data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Step 3
#Feature scaling
#we perform feature scaling
#
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Step 4
# Peforming LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=1)  # with a binary target, at most n_classes - 1 = 1 discriminant is available
X_train = lda.fit_transform(X_train,y_train)
X_test = lda.transform(X_test)
# In the script above the LinearDiscriminantAnalysis class is imported as LDA.
# We have to pass the value for the n_components parameter of the LDA,
# which refers to the number of linear discriminates that we want to retrieve.
# In this case we set the n_components to 1, since we first want to check the performance
# of our classifier with a single linear discriminant.
# Finally we execute the fit and transform methods to actually retrieve the linear discriminants.
# Notice, in case of LDA, the transform method takes two parameters: the X_train and the y_train.
# This reflects the fact that LDA takes the output class labels into account while selecting the linear discriminants.
# Step 7
# Training and Making Predictions
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Step 8
# Evaluating the Performance
# evaluation of algorithm performance.
#
#
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy' + str(accuracy_score(y_test, y_pred)))
#there is 100%accuracy
###Output
[[3896 0]
[ 0 649]]
Accuracy1.0
###Markdown
###Code
fin['Country']=df_finance1["Country"]
###Output
_____no_output_____ |
_notebooks/2020-08-26-01-Summary-Statistics-with-Python.ipynb | ###Markdown
Summary Statistics with Python> Summary statistics gives you the tools you need to boil down massive datasets to reveal the highlights. In this chapter, you'll explore summary statistics including mean, median, and standard deviation, and learn how to accurately interpret them. You'll also develop your critical thinking skills, allowing you to choose the best summary statistics for your data. This is the Summary of lecture "Introduction to Statistics in Python", via datacamp.- toc: true - badges: true- comments: true- author: Chanseok Kang- categories: [Python, Datacamp, Statistics]- image: images/iqr.png
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 8)
###Output
_____no_output_____
###Markdown
What is statistics?- Statistics - the practice and study of collecting and analyzing data - Summary Statistic - A fact about or summary of some data- Example - How likely is someone to purchase a product? Are people more likely to purchase it if they can use a different payment system? - How many occupants will your hotel have? How can you optimize occupancy? - How many sizes of jeans need to be manufactured so they can fit 95% of the population? Should the same number of each size be produced? - A/B tests: Which ad is more effective in getting people to purchase a product? - Type of statistics - Descriptive statistics - Describe and summarize data - Inferential statistics - Use a sample of data to make inferences about a larger population- Type of data - Numeric (Quantitative) - Continuous (Measured) - Discrete (Counted) - Categorical (Qualitative) - Nominal (Unordered) - Ordinal (Ordered) Measures of center Mean and medianIn this chapter, you'll be working with the [2018 Food Carbon Footprint Index](https://www.nu3.de/blogs/nutrition/food-carbon-footprint-index-2018) from nu3. The `food_consumption` dataset contains information about the kilograms of food consumed per person per year in each country in each food category (`consumption`) as well as information about the carbon footprint of that food category (`co2_emissions`) measured in kilograms of carbon dioxide, or $CO_2$, per person per year in each country. In this exercise, you'll compute measures of center to compare food consumption in the US and Belgium.
###Code
food_consumption = pd.read_csv('./dataset/food_consumption.csv', index_col=0)
food_consumption.head()
# Filter for Belgium
be_consumption = food_consumption[food_consumption['country'] == 'Belgium']
# Filter for USA
usa_consumption = food_consumption[food_consumption['country'] == 'USA']
# Calculate mean and median consumption in Belgium
print(np.mean(be_consumption['consumption']))
print(np.median(be_consumption['consumption']))
# Calculate mean and median consumption of USA
print(np.mean(usa_consumption['consumption']))
print(np.median(usa_consumption['consumption']))
###Output
42.132727272727266
12.59
44.650000000000006
14.58
###Markdown
or
###Code
# Subset for Belgium and USA
be_and_usa = food_consumption[(food_consumption['country'] == 'Belgium') |
(food_consumption['country'] == 'USA')]
# Group by country, select consumption column, and compute mean and median
print(be_and_usa.groupby('country')['consumption'].agg([np.mean, np.median]))
###Output
mean median
country
Belgium 42.132727 12.59
USA 44.650000 14.58
###Markdown
Mean vs. medianIn the video, you learned that the mean is the sum of all the data points divided by the total number of data points, and the median is the middle value of the dataset where 50% of the data is less than the median, and 50% of the data is greater than the median. In this exercise, you'll compare these two measures of center.
###Code
# Subset for food_category equals rice
rice_consumption = food_consumption[food_consumption['food_category'] == 'rice']
# Histogram of co2_emission for rice and show plot
rice_consumption['co2_emission'].hist();
# Calculate mean and median of co2_emission with .agg()
print(rice_consumption['co2_emission'].agg([np.mean, np.median]))
###Output
mean 37.591615
median 15.200000
Name: co2_emission, dtype: float64
###Markdown
Measures of spread- Variance - Average distance from each data point to the data's mean- Standard deviation- Mean absolute deviation- Standard deviation vs. mean absolute deviation - Standard deviation squares distances, penalizing longer distances more than shorter ones - Mean absolute deviation penalizes each distance equally- Quantiles (Or percentiles)- Interquartile range (IQR) - Height of the box in a boxplot- Outliers - Data point that is substantially different from the others Quartiles, quantiles, and quintilesQuantiles are a great way of summarizing numerical data since they can be used to measure center and spread, as well as to get a sense of where a data point stands in relation to the rest of the data set. For example, you might want to give a discount to the 10% most active users on a website.In this exercise, you'll calculate quartiles, quintiles, and deciles, which split up a dataset into 4, 5, and 10 pieces, respectively.
###Code
# Calculate the quartiles of co2_emission
print(np.quantile(food_consumption['co2_emission'], np.linspace(0, 1, 5)))
# Calculate the quintiles of co2_emission
print(np.quantile(food_consumption['co2_emission'], np.linspace(0, 1, 6)))
# Calculate the deciles of co2_emission
print(np.quantile(food_consumption['co2_emission'], np.linspace(0, 1, 10)))
###Output
[ 0. 5.21 16.53 62.5975 1712. ]
[ 0. 3.54 11.026 25.59 99.978 1712. ]
[0.00000000e+00 9.05555556e-01 4.19111111e+00 8.05333333e+00
1.32000000e+01 2.10944444e+01 3.58666667e+01 7.90622222e+01
1.86115556e+02 1.71200000e+03]
###Markdown
Variance and standard deviationVariance and standard deviation are two of the most common ways to measure the spread of a variable, and you'll practice calculating these in this exercise. Spread is important since it can help inform expectations. For example, if a salesperson sells a mean of 20 products a day, but has a standard deviation of 10 products, there will probably be days where they sell 40 products, but also days where they only sell one or two. Information like this is important, especially when making predictions.
###Code
# Print variance and sd of co2_emission for each food_category
print(food_consumption.groupby('food_category')['co2_emission'].agg([np.var, np.std]))
# Create histogram of co2_emission for food_category 'beef'
food_consumption[food_consumption['food_category'] == 'beef']['co2_emission'].hist();
# Create histogram of co2_emission for food_category 'eggs'
food_consumption[food_consumption['food_category'] == 'eggs']['co2_emission'].hist();
###Output
_____no_output_____
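As a side note (not one of the original exercises), the standard deviation vs. mean absolute deviation distinction described above can be checked directly on the same data. A minimal sketch, assuming `food_consumption` is already loaded:

```python
# distances of each co2_emission value from the mean
dists = food_consumption['co2_emission'] - np.mean(food_consumption['co2_emission'])

std = np.sqrt(np.mean(dists ** 2))   # standard deviation: squares distances, so large deviations weigh more
mad = np.mean(np.abs(dists))         # mean absolute deviation: every distance counts equally

print(std, mad)   # std is at least as large as mad; the gap widens with heavy tails/outliers
```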
###Markdown
Finding outliers using IQROutliers can have big effects on statistics like mean, as well as statistics that rely on the mean, such as variance and standard deviation. Interquartile range, or IQR, is another way of measuring spread that's less influenced by outliers. IQR is also often used to find outliers. If a value is less than $Q1−1.5×IQR$ or greater than $Q3+1.5×IQR$, it's considered an outlier. In this exercise, you'll calculate IQR and use it to find some outliers.
###Code
# Calculate total co2_emission per country: emissions_by_country
emissions_by_country = food_consumption.groupby('country')['co2_emission'].sum()
print(emissions_by_country)
# Compute the first and third quartiles and IQR of emissions_by_country
q1 = np.quantile(emissions_by_country, 0.25)
q3 = np.quantile(emissions_by_country, 0.75)
iqr = q3 - q1
# Calculate the lower and upper cutoffs for outliers
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr
# Subset emissions_by_country to find outliers
outliers = emissions_by_country[(emissions_by_country > upper) | (emissions_by_country < lower)]
print(outliers)
###Output
country
Argentina 2172.4
Name: co2_emission, dtype: float64
|
week09_pagerank/practice_pagerank.ipynb | ###Markdown
PageRank This page demonstrates the use of a short Python implementation of the PageRank algorithm on the link structure contained in the graph on the [PageRank Wikipedia](http://en.wikipedia.org/wiki/PageRank) page:
###Code
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/f/fb/PageRanks-Example.svg')
import numpy as np
###Output
_____no_output_____
###Markdown
First, we will encode the links present on this graph as a count matrix `M_counts`.
###Code
n_pages = 11 # numbering pages A through K as 0 to 10
M_counts = np.zeros((n_pages, n_pages)) # will hold the number of link counts (assumed 0 or 1)
# columns = starting page, row = destination page, ie M_ij = whether or not there is a link from j to i
M_counts[:,0] = 1 # page 0 (A in the graphic) is a sink because it has no outgoing links at all;
# however, M cannot contain an all-zero column, so do as if A was linking to all other pages (ie put 1's everywhere)
M_counts[2,1] = 1 # B->C
M_counts[1,2] = 1 # C->B
M_counts[0,3] = 1 # D->A
M_counts[1,3] = 1 # D->B
M_counts[1,4] = 1 # E->B
M_counts[3,4] = 1 # E->D
M_counts[5,4] = 1 # E->F
M_counts[1,5] = 1 # F->B
M_counts[4,5] = 1 # F->E
M_counts[1,6] = 1 # G,H,I->B,E
M_counts[4,6] = 1
M_counts[1,7] = 1
M_counts[4,7] = 1
M_counts[1,8] = 1
M_counts[4,8] = 1
M_counts[4,9] = 1 # J,K->E
M_counts[4,10] = 1
print(M_counts)
###Output
[[ 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0.]
[ 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1.]
[ 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
###Markdown
Now we can make an adjacency matrix `M` out of `M_counts`, by dividing each column by its sum, ie we are making sure columns sum to 1 :
###Code
M = np.empty((n_pages, n_pages))
for j in range(n_pages):
M[:,j] = M_counts[:,j] / M_counts[:,j].sum()
np.set_printoptions(precision=3)
print(M)
###Output
[[ 0.091 0. 0. 0.5 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 1. 0.5 0.333 0.5 0.5 0.5 0.5 0. 0. ]
[ 0.091 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0.333 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0.5 0.5 0.5 0.5 1. 1. ]
[ 0.091 0. 0. 0. 0.333 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[ 0.091 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]]
###Markdown
Let us check that all the conditions on M are fulfilled.
###Code
import numpy
def check_M(M):
"""
check that M has the right format to be used by pagerank function
"""
n_pages = M.shape[0] # n_pages is the number of rows of M
np.testing.assert_equal(M.shape[0], M.shape[1], err_msg = 'M should be square')
np.testing.assert_array_almost_equal(M.sum(axis=0), np.ones((n_pages)),
err_msg = 'assert each column sums to one (M is assumed column-stochastic)')
for j in range(n_pages):
M_column = M[:,j]
n_nonzero = np.count_nonzero(M[:,j])
np.testing.assert_array_almost_equal(M_column[M_column.nonzero()], np.ones((n_nonzero)) / n_nonzero,
err_msg = 'in column %g, all non-zero entries should be equal (and equal to 1 divided by their number)' % j)
check_M(M) # will produce error if M does not have the right format
###Output
_____no_output_____
###Markdown
And we are now ready to apply the `pagerank` function, which will iteratively apply page transitions to a randomly initialized distribution over the pages, until convergence.
###Code
import numpy as np
def pagerank(M, d=0.85, square_error=1e-6):
"""
M : the adjacency matrix of the pages. It is assumed to be column-stochastic (ie column sum to 1); all links have equal weight.
A page with no outgoing links (sink) is represented as a page with outgoing links to each other page (ie restart page).
d: damping factor
square_error : the algorithm iterates until the difference between two successive PageRank vectors v is less than this (in squared norm)
returns the PageRanks of all pages
"""
n_pages = M.shape[0] # n_pages is the number of rows of M
v = np.random.rand(n_pages) # initialize to random vector
v = v / v.sum() # make v sum to 1
last_v = np.ones((n_pages)) # will contain the previous v
M_hat = d * M + (1-d)/n_pages * np.ones((n_pages, n_pages)) # equation (***) in Wikipedia page
while np.square(v - last_v).sum() > square_error:
last_v = v
v = M_hat.dot(v) # at each iteration, progress one timestep
return v
pagerank(M)
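# (Illustrative addition, not in the original notebook): sort the pages by their PageRank
# to make the ranking explicit. Pages are labelled A..K as in the Wikipedia figure.
ranks = pagerank(M)
labels = [chr(ord('A') + i) for i in range(n_pages)]
for i in np.argsort(ranks)[::-1]:
    print(labels[i], round(float(ranks[i]), 3))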
###Output
_____no_output_____ |
Lung_Cancer.ipynb | ###Markdown
###Code
#Importing Libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import cifar10
#kaggle installing API
!pip install -q kaggle
#create directory
!mkdir -p ~/.kaggle
#importing API to colab
from google.colab import files
uploaded=files.upload()
! cp kaggle.json ~/.kaggle/
#disbale key
! chmod 600 /root/.kaggle/kaggle.json
#importing the dataset
! kaggle datasets download -d mohamedhanyyy/chest-ctscan-images
#unzipping dataset
! unzip -q /content/chest-ctscan-images.zip
#defining the object
model=tf.keras.models.Sequential()
#Adding First CNN Layer
# 1. Filters (kernel) = 64
# 2. kernel size = 4
# 3. padding = same
# 4. activation = ReLU
# 5. input shape = (32, 32, 3)
model.add(tf.keras.layers.Conv2D(filters=64,kernel_size=4,padding='same',activation='relu',input_shape=[32,32,3])) # input shape matches the 32x32 target_size used by the data generators below
#maxpool layer parameters
# 1. pool size =2
# 2. strides = 2
# 3. padding = valid
model.add(tf.keras.layers.MaxPool2D(pool_size = (2,2), strides=2 , padding='valid'))
#Adding Second CNN Layer
# 1. Filters (kernel) = 32
# 2. kernel size = 3
# 3. padding = same
# 4. activation = ReLU
#model.add(tf.keras.layers.Conv2D(filters=64,kernel_size=3,padding='same',activation='relu',))
#model.add(tf.keras.layers.MaxPool2D(pool_size = (2, 2), strides=2 , padding='valid'))
#Adding Flattening Layer
#Converting array into vectors
model.add(tf.keras.layers.Flatten())
#Adding the dropout Layer
model.add(tf.keras.layers.Dropout(0.5))
#Adding first dense fayer
model.add(tf.keras.layers.Dense(units=128, activation='relu'))
#Adding first dense fayer
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
#Compiling the model
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
from tensorflow.keras.preprocessing.image import ImageDataGenerator
training_data_dir='/content/Data/train'
test_data_dir = '/content/Data/test'
datagen = ImageDataGenerator(rescale=1./255)
traning_set = datagen.flow_from_directory(directory=training_data_dir, target_size=(32,32),class_mode='binary',batch_size=20
)
test_set = datagen.flow_from_directory(directory=test_data_dir, target_size=(32,32),class_mode='binary',batch_size=20
)
len(traning_set),len(test_set)
test_set.batch_size
history = model.fit_generator(generator=traning_set, steps_per_epoch=31,epochs=20, validation_data=test_set, validation_steps=16)
def learning_curve(history,epoch):
epoch_range = range(1, epoch+1)
plt.plot(epoch_range,history.history['accuracy'])
plt.plot(epoch_range,history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.legend(['Accuracy','Val_Accuracy'])
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
  plt.show()
learning_curve(history, 20)
def learning_curve1(history,epoch):
epoch_range = range(1, epoch+1)
plt.plot(epoch_range,history.history['loss'])
plt.plot(epoch_range,history.history['val_loss'])
  plt.title('Model Loss')
plt.legend(['Loss','Val_Loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
  plt.show()
learning_curve1(history, 20)
loss, accuracy = model.evaluate(test_set)
y_pred = (model.predict(test_set) > 0.5).astype(int).ravel()  # predict_classes is deprecated; threshold the sigmoid outputs instead
y_pred
test_set
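# (Illustrative sketch, not part of the original notebook): scoring a single CT image.
# The file path below is a placeholder; point it at any image from the test folders.
from tensorflow.keras.preprocessing import image
img = image.load_img('/content/Data/test/<class_folder>/<image>.png', target_size=(32, 32))
x = image.img_to_array(img) / 255.0   # same rescaling as the ImageDataGenerator
x = np.expand_dims(x, axis=0)         # add the batch dimension
print(model.predict(x))               # sigmoid score in [0, 1]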
###Output
_____no_output_____ |
examples/notebooks/Creating Models/3-negative-particle-problem.ipynb | ###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
_____no_output_____
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As is the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2020-05-30 11:11:30,262 - [WARNING] processed_variable.get_spatial_scale(497): No scale set for spatial variable r_n. Using default of 1 [m].
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./6-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
[33mWARNING: You are using pip version 21.0.1; however, version 21.1.2 is available.
You should consider upgrading via the '/home/user/Documents/PyBaMM/env/bin/python3.8 -m pip install --upgrade pip' command.[0m
Note: you may need to restart the kernel to use updated packages.
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As is the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2021-05-27 10:32:57,012 - [WARNING] processed_variable.get_spatial_scale(471): No length scale set for negative particle. Using default of 1 [m].
###Markdown
In the [next notebook](./4-comparing-full-and-reduced-order-models.ipynb) we consider the limit of fast diffusion in the particle. This leads to a reduced-order model for the particle behaviour, which we compare with the full (Fickian diffusion) model. ReferencesThe relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
[4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2.
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As is the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2021-01-24 19:28:49,750 - [WARNING] processed_variable.get_spatial_scale(518): No length scale set for negative particle. Using default of 1 [m].
###Markdown
In the [next notebook](./4-comparing-full-and-reduced-order-models.ipynb) we consider the limit of fast diffusion in the particle. This leads to a reduced-order model for the particle behaviour, which we compare with the full (Fickian diffusion) model. ReferencesThe relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
[4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2.
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
_____no_output_____
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}  # zero-flux (Neumann) condition at r=0, matching the boundary condition stated above
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {"primary": {r: {"min": pybamm.Scalar(0), "max": R}}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"][0].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
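###Markdown
A rough mass-balance sanity check on the solution above (a minimal sketch using only the parameter values defined earlier): by conservation, a uniform outward surface flux $j/F$ makes the volume-averaged concentration fall at the rate $3j/(FR)$, so after one hour it should have dropped from $2.5\times 10^4$ to roughly $9.3\times 10^3$ mol.m-3.
###Code
# back-of-the-envelope mass balance (illustration only, not part of the original tutorial)
R_val = 10e-6       # particle radius [m]
j_val = 1.4         # interfacial current density [A.m-2]
F_val = 96485       # Faraday constant [C.mol-1]
c0_val = 2.5e4      # initial concentration [mol.m-3]
t_end = 3600        # final time [s]
# the average concentration falls at 3*j/(F*R) when a uniform flux j/F leaves the surface
c_avg_end = c0_val - 3 * j_val / (F_val * R_val) * t_end
print(c_avg_end)    # approximately 9.3e3 mol.m-3 after one hour
###Output
_____no_output_____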
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
_____no_output_____
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Dirichlet"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2020-05-30 11:11:30,262 - [WARNING] processed_variable.get_spatial_scale(497): No scale set for spatial variable r_n. Using default of 1 [m].
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./6-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.Uniform1DSubMesh}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2021-11-19 15:29:22,931 - [WARNING] processed_variable.get_spatial_scale(520): No length scale set for negative particle. Using default of 1 [m].
###Markdown
In the [next notebook](./4-comparing-full-and-reduced-order-models.ipynb) we consider the limit of fast diffusion in the particle. This leads to a reduced-order model for the particle behaviour, which we compare with the full (Fickian diffusion) model. References: the relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). Journal of Open Research Software, 9(1):14, 2021. doi:10.5334/jors.309.
[4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2.
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
_____no_output_____
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}  # zero-flux (Neumann) condition at r=0, matching the boundary condition stated above
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {"primary": {r: {"min": pybamm.Scalar(0), "max": R}}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"][0].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./6-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.Uniform1DSubMesh}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2021-11-19 15:29:22,931 - [WARNING] processed_variable.get_spatial_scale(520): No length scale set for negative particle. Using default of 1 [m].
###Markdown
In the [next notebook](./4-comparing-full-and-reduced-order-models.ipynb) we consider the limit of fast diffusion in the particle. This leads to a reduced-order model for the particle behaviour, which we compare with the full (Fickian diffusion) model. References: the relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). Journal of Open Research Software, 9(1):14, 2021. doi:10.5334/jors.309.
[4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2.
###Markdown
A step towards the Single Particle Model In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in pybamm. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,\begin{equation*} \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),\end{equation*}with the following boundary and initial conditions:\begin{equation*} \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,\end{equation*}where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration. In this example we use the following parameters:| Symbol | Units | Value ||:-------|:-------------------|:-----------------------------------------------|| $R$ | m | $10 \times 10^{-6}$ || $D$ | m${^2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ || $j$ | A m$^{-2}$ | $1.4$ || $F$ | C mol$^{-1}$ | $96485$ || $c_0$ | mol m$^{-3}$ | $2.5 \times 10^{4}$ | Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb). Setting up the modelAs before, we begin by importing the pybamm library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`
###Code
import pybamm
import numpy as np
import matplotlib.pyplot as plt
model = pybamm.BaseModel()
###Output
_____no_output_____
###Markdown
We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).
###Code
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")
c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
###Output
_____no_output_____
###Markdown
Now we define our model equations, boundary and initial conditions, as in the previous example.
###Code
# governing equations
N = -D * pybamm.grad(c) # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}
# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}  # zero-flux (Neumann) condition at r=0, matching the boundary condition stated above
# initial conditions
model.initial_conditions = {c: c0}
###Output
_____no_output_____
###Markdown
Finally, we add any variables of interest to the dictionary `model.variables`
###Code
model.variables = {
"Concentration [mol.m-3]": c,
"Surface concentration [mol.m-3]": pybamm.surf(c),
"Flux [mol.m-2.s-1]": N,
}
###Output
_____no_output_____
###Markdown
Using the model In order to discretise and solve the model we need to provide values for all of the parameters. This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values
###Code
param = pybamm.ParameterValues(
{
"Particle radius [m]": 10e-6,
"Diffusion coefficient [m2.s-1]": 3.9e-14,
"Interfacial current density [A.m-2]": 1.4,
"Faraday constant [C.mol-1]": 96485,
"Initial concentration [mol.m-3]": 2.5e4,
}
)
###Output
_____no_output_____
###Markdown
Here all of the parameters are simply scalars, but they can also be functions or read in from data (see [parameter values notebook](../parameter-values.ipynb)). As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$
###Code
r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {r: {"min": pybamm.Scalar(0), "max": R}}}
###Output
_____no_output_____
###Markdown
Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values
###Code
param.process_model(model)
param.process_geometry(geometry)
###Output
_____no_output_____
###Markdown
We can now set up our mesh, choose a spatial method, and discretise our model
###Code
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
###Output
_____no_output_____
###Markdown
The model is now discretised and ready to be solved. Solving the model As in the previous example, we choose a solver and times at which we want the solution returned.
###Code
# solve
solver = pybamm.ScipySolver()
t = np.linspace(0, 3600, 600)
solution = solver.solve(model, t)
# post-process, so that the solution can be called at any time t or space r
# (using interpolation)
c = solution["Concentration [mol.m-3]"]
c_surf = solution["Surface concentration [mol.m-3]"]
# plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))
ax1.plot(solution.t, c_surf(solution.t))
ax1.set_xlabel("Time [s]")
ax1.set_ylabel("Surface concentration [mol.m-3]")
r = mesh["negative particle"].nodes # radial position
time = 1000 # time in seconds
ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time))
ax2.set_xlabel("Particle radius [microns]")
ax2.set_ylabel("Concentration [mol.m-3]")
ax2.legend()
plt.tight_layout()
plt.show()
###Output
2020-05-30 11:11:30,262 - [WARNING] processed_variable.get_spatial_scale(497): No scale set for spatial variable r_n. Using default of 1 [m].
|
python/20201230_train_new_model_CCS.ipynb | ###Markdown
Group peptide sequences that share the same first 8 amino acids (the 'core sequence')
###Code
peptides = inp['Sequence'].tolist()
#peptides
peptides_sorted = inp['Sequence'].tolist()
peptides_sorted.sort()
peptides_sorted
### get the first 8 amino acids for the 'core sequence'
first8 = [x[:8] for x in peptides]
#first8
print(len(first8))
print(len(set(first8)))
unique_cores = set(first8)
### make a dict with the indexes of the positions for each of the unique core sequences
core_seqs_dict = {}
n=0
for x in list(unique_cores):
#print(n)
core_seqs_dict[n] = [i for i, y in enumerate(first8) if y == x ]
n+=1
core_seqs_dict[202]
len(inp['Sequence'])
# are there duplicate peptides?
len(set(inp['Sequence']))
inp['namelen']= [len(str(i)) for i in inp['Sequence']]
input1 = inp[ (inp['namelen'] >= 2) ]
#parameters, max length of sequence with
maxlen = max([len(x) for x in input1.Sequence]) ### list comprehension to get max length of sequences
###Output
_____no_output_____
###Markdown
Encoding each sequence as integer indices for the embedding layer
###Code
### make vocabulary for encoding
seq = input1['Sequence']
vocab = set(''.join([str(i) for i in seq]))
vocab.add('END') # not using END cause they are all the same length
len_vocab = len(vocab)
print(vocab)
len_vocab
### always use alphabetical character index in the future
vocab_list = list(vocab)
vocab_list.sort()
vocab_list
char_index = dict((vocab_list[i], i) for i in range(len(vocab_list)))
X = []
x_name = [str(i)[0:maxlen] for i in seq]
for i in x_name:
tmp = [char_index[j] for j in str(i)]
for k in range(0,maxlen - len(str(i))):
tmp.append(char_index["END"])
X.append(tmp)
char_index
# compare the encoding with the sequences using above dictionary
print(peptides[0:6])
print(X[0:6])
# Create an empty list
all_int_list =[]
# Iterate over each row
for index, rows in input1.iterrows():
# Create list for the current row
my_list=[rows.Intensity] #[rows.CRT,
# append the list to the final list
all_int_list.append(my_list)
# Print the list
print(all_int_list[0])
#intensities = input1['INTENSITY']
np.asarray(all_int_list).shape
# some values are below 0
min(all_int_list)
# function to plot distributions of the intensity values across MHC
def pltgroup(ylabels):
''' only plots four histograms of y values to quickly compare distributions'''
plt.rcParams['figure.figsize'] = [5, 5]
fig, (ax1) = plt.subplots(1,1, sharex='row', sharey='row')
p0 = ax1.hist(ylabels, bins=100)
#p1 = ax2.hist(ylabels[:,1], bins=100)
ax1.set_title("CCS Train")
plt.xlabel("CCS")
plt.ylabel("number of examples")
pltgroup(np.asarray(all_int_list))
X = np.asarray(X)
Y = np.asarray(all_int_list)
print(X.shape) # regression from 10 values (max amino acid length)
print(Y.shape) # To 5 values (each MHC complex)
len(core_seqs_dict)
## train/test split based on the core 8 AA sequences
coretrainall, coretest = train_test_split(range(0, len(core_seqs_dict)), test_size=0.10, random_state=42)
# split train into train/validation
coretrain, coreval = train_test_split(coretrainall, test_size=0.20, random_state=42)
coretrain[0:10]
print(len(coretrainall))
print(len(coretrain))
print(len(coreval))
print(len(coretest))
print(len(coretrain)+ len(coretest)+len(coreval))
## use those indexes in the dictionary core_seq_dict to assign actual training and testing
tmplist = [core_seqs_dict[x] for x in coretrain]
trainindex = []
for x in tmplist:
trainindex+=x
len(trainindex)
## use those indexes in the dictionary core_seq_dict to assign actual training and testing
tmplist = [core_seqs_dict[x] for x in coreval]
valindex = []
for x in tmplist:
valindex+=x
len(valindex)
## use those indexes in the dictionary core_seq_dict to assign actual training and testing
tmplist = [core_seqs_dict[j] for j in coretest]
testindex = []
for x in tmplist:
testindex += x
len(testindex)
Xtrain = X[trainindex]
Xval = X[valindex]
Xtest = X[testindex]
Ytrain = Y[trainindex]
Yval = Y[valindex]
Ytest = Y[testindex]
print(len(Xtrain))
print(len(Xval))
print(len(Xtest))
print(len(Ytrain))
print(len(Yval))
print(len(Ytest))
Xtestseq = inp['Sequence'][testindex]
Y[testindex[0]]
testindex[0:10]
char_index
## compare order of xtestsequence version and xtest numeric encoding
Xtestseq[0:10]
print(Xtest)
Ytest
## save numpy arrays of each for the hyperas hyperparameter search loop
np.savetxt('20210524_A1101_xtrain.txt', Xtrain)
np.savetxt('20210524_A1101_xval.txt', Xval)
np.savetxt('20210524_A1101_xtest.txt', Xtest)
np.savetxt('20210524_A1101_ytrain.txt', Ytrain)
np.savetxt('20210524_A1101_yval.txt', Yval)
np.savetxt('20210524_A1101_ytest.txt', Ytest)
Ytrain
###Output
_____no_output_____
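###Markdown
To confirm that the core-sequence grouping above actually prevents leakage between splits, the 8-residue cores of the three index sets can be intersected (a small check reusing `first8`, `trainindex`, `valindex` and `testindex` from the cells above):
###Code
# verify that no 8-residue core sequence is shared between train, validation and test
cores_train = {first8[i] for i in trainindex}
cores_val = {first8[i] for i in valindex}
cores_test = {first8[i] for i in testindex}
print(cores_train & cores_val)    # expected: empty set
print(cores_train & cores_test)   # expected: empty set
print(cores_val & cores_test)     # expected: empty set
###Output
_____no_output_____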
###Markdown
Check that the train, validation, and test sets have similar CCS distributions
###Code
print(np.loadtxt('20210429_CRT_CCS_ytrain.txt'))
ylabels = np.loadtxt('20210429_CRT_CCS_ytrain.txt') # train set
plt.rcParams['figure.figsize'] = [5, 5]
fig, (ax1) = plt.subplots(1,1, sharex='row', sharey='row')
p0 = ax1.hist(ylabels, bins=100)
#p1 = ax2.hist(ylabels[:,1], bins=100)
ax1.set_title("CCS Train")
plt.xlabel("CCS")
plt.ylabel("number of examples")
plt.savefig('ytrain_CCS.svg')
plt.show()
# validation set
ylabels = np.loadtxt('20210429_CRT_CCS_yval.txt')
plt.rcParams['figure.figsize'] = [5, 5]
fig, (ax1) = plt.subplots(1,1, sharex='row', sharey='row')
p0 = ax1.hist(ylabels, bins=100)
#p1 = ax2.hist(ylabels[:,1], bins=100)
ax1.set_title("CCS Validation")
plt.xlabel("CCS")
plt.ylabel("number of examples")
plt.savefig('yval_CCS.svg')
plt.show()
# test set
ylabels = np.loadtxt('20210429_CRT_CCS_ytest.txt')
plt.rcParams['figure.figsize'] = [5, 5]
fig, (ax1) = plt.subplots(1,1, sharex='row', sharey='row')
p0 = ax1.hist(ylabels, bins=100)
#p1 = ax2.hist(ylabels[:,1], bins=100)
ax1.set_title("CCS Test")
plt.xlabel("CCS")
plt.ylabel("number of examples")
plt.savefig('ytest_CCS.svg')
plt.show()
###Output
_____no_output_____
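###Markdown
As a small numeric complement to the histograms above, the same three target files can be summarised directly (a minimal sketch; it only reloads the files already used above):
###Code
# compare summary statistics of the CCS targets across the three splits
import numpy as np
for name, fname in [("train", '20210429_CRT_CCS_ytrain.txt'),
                    ("val", '20210429_CRT_CCS_yval.txt'),
                    ("test", '20210429_CRT_CCS_ytest.txt')]:
    y = np.loadtxt(fname)
    print(name, "n =", y.size, "mean =", round(y.mean(), 2), "std =", round(y.std(), 2))
###Output
_____no_output_____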
###Markdown
Quick test of one parameter set showing that the model works
###Code
keras.__version__
tf.__version__
#Model for CCS
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"]="true"
X_train = np.loadtxt('20210429_CRT_CCS_xtrain.txt')
X_val = np.loadtxt('20210429_CRT_CCS_xval.txt')
Y_train = np.loadtxt('20210429_CRT_CCS_ytrain.txt')
Y_val = np.loadtxt('20210429_CRT_CCS_yval.txt')
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(output_dim=50 , input_dim=21, input_length=10))
model.add(tf.keras.layers.LSTM(128, return_sequences=False, input_shape=(10,21)))
model.add(tf.keras.layers.Dropout(0.5324275624952207))
#model.add(tf.keras.layers.LSTM(128, return_sequences=False))
#model.add(tf.keras.layers.Dropout(0.1829057341070159))
#model.add(tf.keras.layers.LSTM(128, return_sequences=False))
#model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(64))
#model.add(tf.keras.layers.LeakyReLU(alpha=0.3))
#model.add(BatchNormalization())
model.add(tf.keras.layers.Dropout(0.08657063211846627))
model.add(tf.keras.layers.Dense(1))
optimizermodel = tf.keras.optimizers.Adam(0.001)
optimizermodel.learning_rate.assign(0.005)
#print(optimizer.learning_rate)
model.compile(loss='mse', optimizer = optimizermodel, metrics=['mse'])
hist = model.fit(X_train, Y_train,
batch_size=128,
epochs= 200,
verbose=2,
validation_data=(X_val, Y_val))
# plot validation and training loss to assess overfitting
plt.semilogy(hist.history['val_loss'])
plt.semilogy(hist.history['loss'])
plt.legend(['val_loss', 'loss'])
plt.savefig('lossplot_CCS.svg')
# without batchnorm looks similar
plt.semilogy(hist.history['val_loss'])
plt.semilogy(hist.history['loss'])
plt.legend(['val_loss', 'loss'])
###Output
_____no_output_____
###Markdown
The model was trained on the raw CCS values, so there is no need to inverse_transform the predictions
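For reference, if the targets had been scaled before training (for example with a scikit-learn MinMaxScaler, which the commented-out `mm.inverse_transform` call further below hints at), the raw network outputs would need to be mapped back to CCS units. A minimal, self-contained sketch of that round trip, using made-up values rather than anything from this notebook:
###Code
# hypothetical scaling round-trip (illustration only; the model above is trained on raw CCS)
import numpy as np
from sklearn.preprocessing import MinMaxScaler
y_raw = np.array([[350.0], [420.0], [305.0], [478.0]])   # example CCS-like targets
mm = MinMaxScaler()
y_scaled = mm.fit_transform(y_raw)          # scale targets to [0, 1] before training
y_back = mm.inverse_transform(y_scaled)     # map predictions back to CCS units
print(np.allclose(y_back, y_raw))           # True
###Output
_____no_output_____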
###Code
model.summary() # best model architecture
model.save('20210603_CCS_200epoch.model') # save model for later
def data():
X_train = np.loadtxt('20210429_CRT_CCS_xtrain.txt')
X_test = np.loadtxt('20210429_CRT_CCS_xtest.txt')
Y_train = np.loadtxt('20210429_CRT_CCS_ytrain.txt')
Y_test = np.loadtxt('20210429_CRT_CCS_ytest.txt')
return X_train, Y_train, X_test, Y_test
def model(X_train, Y_train, X_test, Y_test):
'''
Model providing function:
Create Keras model with double curly brackets dropped-in as needed.
Return value has to be a valid python dictionary with two customary keys:
- loss: Specify a numeric evaluation metric to be minimized
- status: Just use STATUS_OK and see hyperopt documentation if not feasible
The last one is optional, though recommended, namely:
- model: specify the model just created so that we can later use it again.
'''
model = Sequential()
model.add(Embedding(output_dim=50 , input_dim=21, input_length=10))
model.add(LSTM(128, return_sequences=False, input_shape=(10,21)))
model.add(Dropout({{uniform(0, 0.8)}}))
#model.add(LSTM(128, return_sequences=False))
#model.add(Dropout({{uniform(0, 0.6)}}))
model.add(Dense(64))
#model.add(keras.layers.LeakyReLU(alpha=0.3))
#model.add(BatchNormalization())
model.add(Dropout({{uniform(0, 0.8)}}))
model.add(Dense(1))
optimizermodel = tf.keras.optimizers.Adam(0.001)
optimizermodel.learning_rate.assign({{choice([0.001, 0.005, 0.01, 0.05, 0.1])}})
#print(optimizer.learning_rate)
model.compile(loss='mse', optimizer = optimizermodel, metrics=['mse'])
model.fit(X_train, Y_train,
batch_size={{choice([32, 64, 128, 256])}},
epochs= {{choice([ 100, 500, 1000])}},
verbose=0,
validation_data=(X_test, Y_test))
score, acc = model.evaluate(X_test, Y_test, verbose=0)
print('Test accuracy:', acc)
return {'loss': acc, 'status': STATUS_OK, 'model': model} #put negative sign in from of 'acc' for accuracy metric
best_run0, best_model0 = optim.minimize(model=model,
notebook_name='20201230_train_new_model_MHC_all5',
data=data,
max_evals=1,
algo=tpe.suggest,
trials = Trials())
best_model0.save('20210505_testhyperparameter.model')
print(best_run0)
print (best_run0)
###Output
{'Dropout': 0.04999730587706014, 'Dropout_1': 0.295346240262041, 'Dropout_2': 0.3257983840159514, 'assign': 4, 'batch_size': 2, 'epochs': 0}
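###Markdown
The integers reported for the `{{choice(...)}}` parameters in `best_run0` are indices into the candidate lists defined in `model()`, not the values themselves; a small decoding sketch (assuming `best_run0` from the cell above is still in memory):
###Code
# decode the choice indices back into the actual candidate values
batch_sizes = [32, 64, 128, 256]
epoch_options = [100, 500, 1000]
lr_options = [0.001, 0.005, 0.01, 0.05, 0.1]
print("batch_size:", batch_sizes[best_run0['batch_size']])
print("epochs:", epoch_options[best_run0['epochs']])
print("learning rate:", lr_options[best_run0['assign']])
###Output
_____no_output_____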
###Markdown
Get model predictions, inverse transform, and compare
###Code
X_test = np.loadtxt('20210429_CRT_CCS_xtest.txt')
y_test = np.loadtxt('20210429_CRT_CCS_ytest.txt')
import tensorflow as tf
#from tensorflow.compat.v1.keras.backend import get_session
#tf.compat.v1.disable_v2_behavior()
#shap.explainers._deep.deep_tf.op_handlers["AddV2"] = shap.explainers._deep.deep_tf.passthrough
model = tf.keras.models.load_model('20210603_CCS_200epoch.model')
y_pred = model.predict(X_test)
#y_pred_transform = mm.inverse_transform(y_pred)
y_pred.shape
y_test.shape
# wrong somehow
plt.rcParams['figure.figsize'] = [5, 5]
fig, (ax1) = plt.subplots(1,1, sharex='row', sharey='row')
ax1.scatter(y_pred[:,0], y_test)
ax1.set_title("CCS")
plt.xlabel("predicted intensity")
plt.ylabel("true intensity")
plt.savefig('ccs_truevspredict.svg')
import scipy
#print(y_pred[:,1])
#print(y_test)
print(scipy.stats.spearmanr(y_pred[:,0], y_test))
#scipy.stats.spearmanr(y_pred[:,1], y_test[:,1])
import sklearn
import scipy
import seaborn as sns
ccs = []
CCSpred = []
for value in y_pred:
CCSpred.append(value[0])
print(CCSpred[0])
CCSreal = []
for value in y_test:
CCSreal.append(value)
ccs.append(CCSpred)
ccs.append(CCSreal)
print(np.asarray(ccs))
dccs = {'Predicted CCS': CCSpred, 'True CCS': CCSreal}
dfccs = pd.DataFrame(data = dccs)
#fig, (ax1,ax2, ax3, ax4, ax5) = plt.subplots(1,5, sharex='row', sharey='row')
sns.set(style="ticks", color_codes=True, font_scale=2.5)
j = sns.jointplot('Predicted CCS', 'True CCS', data = dfccs, kind='reg',color='y', height=8, xlim = (225,525), ylim = (225,525))
#j.annotate(stats.pearsonr, loc = ("upper left"), fontsize=14)
j.fig.suptitle("CCS")
prCCS, pCCS = scipy.stats.spearmanr(CCSpred, CCSreal)
print(scipy.stats.spearmanr(CCSpred, CCSreal))
mseCCS=round(sklearn.metrics.mean_squared_error(CCSpred, CCSreal),4)
j.ax_joint.text(235,465,"MSE = " + str(mseCCS), fontsize=24)
j.ax_joint.text(235,440,"r = " + str(round(prCCS,4)), fontsize=24)
j.ax_joint.text(235,415,"p = " + str(pCCS), fontsize=24)
plt.savefig('CCSrealvspredicted.png')
plt.savefig('CCSrealvspredicted.svg')
plt.show()
###Output
417.70056
[[417.70056152 355.11120605 342.77307129 ... 348.03259277 345.27685547
351.3927002 ]
[426.80462365 338.4297709 340.4088373 ... 353.2415585 361.0859204
334.0286007 ]]
|
examples/solving search problems.ipynb | ###Markdown
Solving search problems. An introduction to different methods for finding paths, including: the adjacency matrix, BFS, finding loops, DFS, DFScan, bidi-BFS, TSP and the critical path method. First things first. Let's load the imports for this chapter
###Code
from graph import Graph
###Output
_____no_output_____
###Markdown
The adjacency matrix. Breadth First Search (BFS). Finding loops. Depth First Search (DFS). Bidirectional breadth first search (BidiBFS). Critical path method. The [critical path method](https://en.wikipedia.org/wiki/Critical_path_method) (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. A critical path is determined by identifying the longest stretch of dependent activities and, commonly, measuring the time required to complete them from start to finish. An example is shown below where the critical path constitutes the path ABCDE. We can load these values into a Graph as follows:
###Code
tasks = {'A': 10, 'B': 20, 'C': 5, 'D': 10, 'E': 20, 'F': 15, 'G': 5, 'H': 15}
dependencies = [
('A', 'B'),
('B', 'C'),
('C', 'D'),
('D', 'E'),
('A', 'F'),
('F', 'G'),
('G', 'E'),
('A', 'H'),
('H', 'E'),
]
g = Graph()
for n, d in tasks.items():
g.add_node(n, obj=d)
for n1, n2 in dependencies:
g.add_edge(n1, n2, 0)
###Output
_____no_output_____
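###Markdown
To make the numbers returned by `critical_path()` below easier to follow, here is a plain-Python sketch of the classic forward and backward passes over the same tasks and dependencies (illustration only, independent of the graph library):
###Code
# plain-Python forward/backward pass of the critical path method (illustration only)
preds = {n: [] for n in tasks}
succs = {n: [] for n in tasks}
for n1, n2 in dependencies:
    preds[n2].append(n1)
    succs[n1].append(n2)
order = ['A', 'B', 'F', 'H', 'C', 'G', 'D', 'E']   # a topological order of the tasks
es, ef = {}, {}                                    # earliest start / earliest finish
for n in order:
    es[n] = max((ef[p] for p in preds[n]), default=0)
    ef[n] = es[n] + tasks[n]
duration = max(ef.values())
lf, ls = {}, {}                                    # latest finish / latest start
for n in reversed(order):
    lf[n] = min((ls[s] for s in succs[n]), default=duration)
    ls[n] = lf[n] - tasks[n]
slack = {n: ls[n] - es[n] for n in order}
print("duration:", duration)                                     # 65
print("critical tasks:", [n for n in order if slack[n] == 0])    # ['A', 'B', 'C', 'D', 'E']
###Output
_____no_output_____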
###Markdown
And we can calculate the schedule and the length of the critical path as:
###Code
critical_path_length, schedule = g.critical_path()
print("The critical path has duration", critical_path_length)
print("The tasks are:")
from graph import Task
for task_id, task in sorted(schedule.items()):
print(task_id, task)
###Output
The critical path has duration 65
The tasks are:
A Task('A', 10, 0, 0, 10, 10)
B Task('B', 20, 10, 10, 30, 30)
C Task('C', 5, 30, 30, 35, 35)
D Task('D', 10, 35, 35, 45, 45)
E Task('E', 20, 45, 45, 65, 65)
F Task('F', 15, 10, 25, 25, 40)
G Task('G', 5, 25, 40, 30, 45)
H Task('H', 15, 10, 30, 25, 45)
###Markdown
The properties of each `Task` are:- task id- duration - earliest start time- latest start time- earliest finish time- latest finish time.and the slack in the schedule can be calculated as:
###Code
slack = sum(t.slack for t in schedule.values())
print("The total slack in the schedule is", slack)
###Output
The total slack in the schedule is 50
###Markdown
Minimising slack. In cases where the tasks are commodities, such as CPU time, it can be convenient to minimise the number of concurrently active resources. As you may have noticed above in the diagram, the dependencies indicate that the graph has 3 paths at its widest, whereby it would be logical to assign 3 CPUs to compute the tasks. However a little search can illustrate that it is possible to solve the tasks with 2 CPUs without extending the critical path. This can be done by inserting artificial dependencies. Here is an example: The method to minimise the slack is conveniently called `critical_path_minimize_for_slack` and this is how it is used:
###Code
g2 = g.critical_path_minimize_for_slack()
###Output
_____no_output_____
###Markdown
We can verify that the critical path length is the same, and we can verify that this schedule does indeed have less slack:
###Code
critical_path_length2, schedule2 = g2.critical_path()
slack2 = sum(t.slack for t in schedule2.values())
print("The total slack in the schedule was", slack, "and is now", slack2)
print("The tasks remain the same, though with changed timings::")
from graph import Task
for task_id, task in sorted(schedule2.items()):
print(task_id, task)
###Output
The total slack in the schedule was 50 and is now 0
The tasks remain the same, though with changed timings:
A Task('A', 10, 0, 0, 10, 10)
B Task('B', 20, 10, 10, 30, 30)
C Task('C', 5, 30, 30, 35, 35)
D Task('D', 10, 35, 35, 45, 45)
E Task('E', 20, 45, 45, 65, 65)
F Task('F', 15, 25, 25, 40, 40)
G Task('G', 5, 40, 40, 45, 45)
H Task('H', 15, 10, 10, 25, 25)
|
12.010_HW03_2021.ipynb | ###Markdown
12.010 PSET 03 Bessel functions. The Bessel functions of the first kind $J_n(x)$ are defined as the solutions to the Bessel differential equation$$ x^2\frac{d^2y}{dx^2}+x\frac{dy}{dx}+(x^2-n^2)y=0 $$where $n$ is referred to as the order. For this problem set $n\geq 0$ and $x$ will be real and $x\geq 0$. Bessel functions are often encountered in problems involving cylindrical coordinates. Bessel functions can be computed from the series expansion:$$J_n(x)=\sum_{m=0}^{\infty} \frac{(-1)^m}{m!(n+m)!} (\frac{x}{2})^{(n+2m)}$$The functions can also be solved recursively using the recursive relationship$$J_{n+1}(x)=\frac{2n}{x}J_n(x)-J_{n-1}(x)$$so that once $J_0(x)$ and $J_1(x)$ have been generated, all higher order terms can be generated. The functions can also be computed from the following integral:$$ J_n(x)=\frac{1}{\pi} \int_{0}^{\pi} \cos(x \sin\theta - n\theta) d\theta$$Equations from: https://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html and https://www.math.colostate.edu/~shipman/47/volume2a2010/Sekeljik.pdf Part (1): Issues to be addressed in computing Bessel functions with the above formulas. (a) Solving the differential equation. We need to convert the $2^{nd}$ order equation into two first order equations. We do this by introducing the new variable $v=\frac{dy}{dx}$ and the pair of differential equations become$$ \frac{dy}{dx}=v $$ $$ \frac{dv}{dx} = \frac{d^2y}{dx^2} = -\frac{v}{x} -\frac{(x^2-n^2)}{x^2} y $$ Issues to be addressed here: 1. The behavior when $x=0$ (the equations divide by $x$). 2. Scaling of the Bessel functions, i.e., multiplying a function by a constant will still satisfy the differential equation because the RHS is zero. 3. Initial conditions. To generate Bessel functions of the first kind, initial conditions must be set correctly. For $J_0$ and $J_1$, the initial conditions, i.e., value and first derivative at $x=0$, are 1.0,0.0 and 0.0,0.5. But for $n>1$, the value and derivative at $x=0$ are 0.0,0.0. Solving the differential equation with these initial conditions will generate a trivial solution $y(x)=0$ for all $x$ values. Cells below give strategies to solve this problem. (b) Series summation. 1. For $\frac{x}{2}>1$, the term being taken to the power will approach $\infty$ as $m$ approaches $\infty$. The sum converges because of the factorial $m!$ and $(n+m)!$ terms. 2. Care is needed with the factorial because this function grows rapidly with increasing argument, which can cause integer overflows in languages like C and Fortran. In Python 3, this will not be a problem because the number of bytes in the integer grows as needed. If the factorial were a float, it could overflow the double precision representation. (c) Recursion. 1. The only problem here would be the growth of rounding error as the values of $n$ increase. (d) Integral method. 1. The only issue here is assuring that the integral is evaluated accurately enough.
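As a quick cross-check of the three formulas above (our own sketch; the assigned Part (2) below uses scipy.special.jv), the truncated series gives $J_0$ and $J_1$, the recursion then gives $J_2$, and numerical quadrature of the integral should agree:
###Code
# truncated series, recursion and quadrature cross-check at x = 2 (illustration only)
import math
from scipy.integrate import quad

def jn_series(n, x, terms=30):
    """Bessel function of the first kind from the truncated series expansion."""
    return sum((-1)**m / (math.factorial(m) * math.factorial(n + m)) * (x / 2)**(n + 2 * m)
               for m in range(terms))

x = 2.0
j0, j1 = jn_series(0, x), jn_series(1, x)
j2_rec = (2 * 1 / x) * j1 - j0           # J_{n+1}(x) = (2n/x) J_n(x) - J_{n-1}(x) with n = 1
j2_int = quad(lambda th: math.cos(x * math.sin(th) - 2 * th), 0, math.pi)[0] / math.pi
print(round(j0, 6), round(j1, 6), round(j2_rec, 6), round(j2_int, 6))
# compare with the x = 2.0 row of the scipy table below: 0.223891, 0.576725, 0.352834
###Output
_____no_output_____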
###Code
## PART (2): Use Python special function jv. Its derivative jvp will also be
# needed for initial conditions below.
import numpy as np
from scipy.special import jv, jvp
from tabulate import tabulate
# Set the X-values from 0 to 10 in increments of 0.05 for plotting.
# Use these same values in later parts of the PSET.
dx = 0.05
xt = np.arange(0.,10.+dx,dx)
jv_table = np.zeros((np.size(xt),7));
k = 0;
for x in xt:
jv_table[k] = [x,jv(0,x),jv(1,x),jv(2,x),jv(3,x),jv(4,x),jv(5,x)]
k += 1
N = int(0.5/dx) # Spacing to have 0.5 output interval
fancy_headers = ["x", "J_0(x)","J_1(x)","J_2(x)0","J_3(x)","J_4(x)","J_5(x)"]
print('\nBessel Function First Kind: scipy.special.')
print(tabulate(jv_table[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
print('\nPIPE: Paste into Markdown\n\nBessel function of First kind\n')
MD_headers = ["x", "$J_0(x)$","$J_1(x)$","$J_2(x)$","$J_3(x)$","$J_4(x)$","$J_5(x)$"]
print(tabulate(jv_table[0::N], MD_headers, tablefmt="pipe",\
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
###Output
Bessel Function First Kind: scipy.special.
╒══════╤═══════════╤═══════════╤═══════════╤═══════════╤═══════════╤═══════════╕
│ x │ J_0(x) │ J_1(x) │ J_2(x)0 │ J_3(x) │ J_4(x) │ J_5(x) │
╞══════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╪═══════════╡
│ 0.0 │ 1.000000 │ 0.000000 │ 0.000000 │ 0.000000 │ 0.000000 │ 0.000000 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 0.5 │ 0.938470 │ 0.242268 │ 0.030604 │ 0.002564 │ 0.000161 │ 0.000008 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 1.0 │ 0.765198 │ 0.440051 │ 0.114903 │ 0.019563 │ 0.002477 │ 0.000250 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 1.5 │ 0.511828 │ 0.557937 │ 0.232088 │ 0.060964 │ 0.011768 │ 0.001799 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 2.0 │ 0.223891 │ 0.576725 │ 0.352834 │ 0.128943 │ 0.033996 │ 0.007040 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 2.5 │ -0.048384 │ 0.497094 │ 0.446059 │ 0.216600 │ 0.073782 │ 0.019502 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 3.0 │ -0.260052 │ 0.339059 │ 0.486091 │ 0.309063 │ 0.132034 │ 0.043028 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 3.5 │ -0.380128 │ 0.137378 │ 0.458629 │ 0.386770 │ 0.204405 │ 0.080442 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 4.0 │ -0.397150 │ -0.066043 │ 0.364128 │ 0.430171 │ 0.281129 │ 0.132087 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 4.5 │ -0.320543 │ -0.231060 │ 0.217849 │ 0.424704 │ 0.348423 │ 0.194715 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 5.0 │ -0.177597 │ -0.327579 │ 0.046565 │ 0.364831 │ 0.391232 │ 0.261141 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 5.5 │ -0.006844 │ -0.341438 │ -0.117315 │ 0.256118 │ 0.396717 │ 0.320925 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 6.0 │ 0.150645 │ -0.276684 │ -0.242873 │ 0.114768 │ 0.357642 │ 0.362087 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 6.5 │ 0.260095 │ -0.153841 │ -0.307430 │ -0.035347 │ 0.274803 │ 0.373565 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 7.0 │ 0.300079 │ -0.004683 │ -0.301417 │ -0.167556 │ 0.157798 │ 0.347896 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 7.5 │ 0.266340 │ 0.135248 │ -0.230273 │ -0.258061 │ 0.023825 │ 0.283474 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 8.0 │ 0.171651 │ 0.234636 │ -0.112992 │ -0.291132 │ -0.105357 │ 0.185775 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 8.5 │ 0.041939 │ 0.273122 │ 0.022325 │ -0.262616 │ -0.207701 │ 0.067133 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 9.0 │ -0.090334 │ 0.245312 │ 0.144847 │ -0.180935 │ -0.265471 │ -0.055039 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 9.5 │ -0.193929 │ 0.161264 │ 0.227879 │ -0.065315 │ -0.269131 │ -0.161321 │
├──────┼───────────┼───────────┼───────────┼───────────┼───────────┼───────────┤
│ 10.0 │ -0.245936 │ 0.043473 │ 0.254630 │ 0.058379 │ -0.219603 │ -0.234062 │
╘══════╧═══════════╧═══════════╧═══════════╧═══════════╧═══════════╧═══════════╛
PIPE: Paste into Markdown
Bessel function of First kind
| x | $J_0(x)$ | $J_1(x)$ | $J_2(x)$ | $J_3(x)$ | $J_4(x)$ | $J_5(x)$ |
|-----:|-----------:|-----------:|-----------:|-----------:|-----------:|-----------:|
| 0.0 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 0.5 | 0.938470 | 0.242268 | 0.030604 | 0.002564 | 0.000161 | 0.000008 |
| 1.0 | 0.765198 | 0.440051 | 0.114903 | 0.019563 | 0.002477 | 0.000250 |
| 1.5 | 0.511828 | 0.557937 | 0.232088 | 0.060964 | 0.011768 | 0.001799 |
| 2.0 | 0.223891 | 0.576725 | 0.352834 | 0.128943 | 0.033996 | 0.007040 |
| 2.5 | -0.048384 | 0.497094 | 0.446059 | 0.216600 | 0.073782 | 0.019502 |
| 3.0 | -0.260052 | 0.339059 | 0.486091 | 0.309063 | 0.132034 | 0.043028 |
| 3.5 | -0.380128 | 0.137378 | 0.458629 | 0.386770 | 0.204405 | 0.080442 |
| 4.0 | -0.397150 | -0.066043 | 0.364128 | 0.430171 | 0.281129 | 0.132087 |
| 4.5 | -0.320543 | -0.231060 | 0.217849 | 0.424704 | 0.348423 | 0.194715 |
| 5.0 | -0.177597 | -0.327579 | 0.046565 | 0.364831 | 0.391232 | 0.261141 |
| 5.5 | -0.006844 | -0.341438 | -0.117315 | 0.256118 | 0.396717 | 0.320925 |
| 6.0 | 0.150645 | -0.276684 | -0.242873 | 0.114768 | 0.357642 | 0.362087 |
| 6.5 | 0.260095 | -0.153841 | -0.307430 | -0.035347 | 0.274803 | 0.373565 |
| 7.0 | 0.300079 | -0.004683 | -0.301417 | -0.167556 | 0.157798 | 0.347896 |
| 7.5 | 0.266340 | 0.135248 | -0.230273 | -0.258061 | 0.023825 | 0.283474 |
| 8.0 | 0.171651 | 0.234636 | -0.112992 | -0.291132 | -0.105357 | 0.185775 |
| 8.5 | 0.041939 | 0.273122 | 0.022325 | -0.262616 | -0.207701 | 0.067133 |
| 9.0 | -0.090334 | 0.245312 | 0.144847 | -0.180935 | -0.265471 | -0.055039 |
| 9.5 | -0.193929 | 0.161264 | 0.227879 | -0.065315 | -0.269131 | -0.161321 |
| 10.0 | -0.245936 | 0.043473 | 0.254630 | 0.058379 | -0.219603 | -0.234062 |
###Markdown
Tables pasted from output above into markdown cell.PIPE: Paste into MarkdownBessel function of First kind| x | $J_0(x)$ | $J_1(x)$ | $J_2(x)$ | $J_3(x)$ | $J_4(x)$ | $J_5(x)$ ||-----:|-----------:|-----------:|-----------:|-----------:|-----------:|-----------:|| 0.0 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 || 0.5 | 0.938470 | 0.242268 | 0.030604 | 0.002564 | 0.000161 | 0.000008 || 1.0 | 0.765198 | 0.440051 | 0.114903 | 0.019563 | 0.002477 | 0.000250 || 1.5 | 0.511828 | 0.557937 | 0.232088 | 0.060964 | 0.011768 | 0.001799 || 2.0 | 0.223891 | 0.576725 | 0.352834 | 0.128943 | 0.033996 | 0.007040 || 2.5 | -0.048384 | 0.497094 | 0.446059 | 0.216600 | 0.073782 | 0.019502 || 3.0 | -0.260052 | 0.339059 | 0.486091 | 0.309063 | 0.132034 | 0.043028 || 3.5 | -0.380128 | 0.137378 | 0.458629 | 0.386770 | 0.204405 | 0.080442 || 4.0 | -0.397150 | -0.066043 | 0.364128 | 0.430171 | 0.281129 | 0.132087 || 4.5 | -0.320543 | -0.231060 | 0.217849 | 0.424704 | 0.348423 | 0.194715 || 5.0 | -0.177597 | -0.327579 | 0.046565 | 0.364831 | 0.391232 | 0.261141 || 5.5 | -0.006844 | -0.341438 | -0.117315 | 0.256118 | 0.396717 | 0.320925 || 6.0 | 0.150645 | -0.276684 | -0.242873 | 0.114768 | 0.357642 | 0.362087 || 6.5 | 0.260095 | -0.153841 | -0.307430 | -0.035347 | 0.274803 | 0.373565 || 7.0 | 0.300079 | -0.004683 | -0.301417 | -0.167556 | 0.157798 | 0.347896 || 7.5 | 0.266340 | 0.135248 | -0.230273 | -0.258061 | 0.023825 | 0.283474 || 8.0 | 0.171651 | 0.234636 | -0.112992 | -0.291132 | -0.105357 | 0.185775 || 8.5 | 0.041939 | 0.273122 | 0.022325 | -0.262616 | -0.207701 | 0.067133 || 9.0 | -0.090334 | 0.245312 | 0.144847 | -0.180935 | -0.265471 | -0.055039 || 9.5 | -0.193929 | 0.161264 | 0.227879 | -0.065315 | -0.269131 | -0.161321 || 10.0 | -0.245936 | 0.043473 | 0.254630 | 0.058379 | -0.219603 | -0.234062 |
###Code
# Now generate plot values
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [11, 8.5]
plt.rcParams['lines.linewidth'] = 1
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['lines.markersize'] = 2
plt.rcParams['legend.fontsize'] = 16
plt.rcParams['axes.titlesize'] = 18
fig = plt.figure()
ax = plt.subplot(111)
for n in range(0,6):
legnd = "$J_"+str(n)+"(x)$"
plt.plot(xt,jv(n,xt),label=legnd)
ax.autoscale(enable=True, axis='both', tight=True)
plt.xlabel('x')
plt.ylabel('Bessel function $1^{st}$ Kind')
plt.legend();
plt.title('Bessel Function: Special function jv');
## Part (3)
################################################
# Series expansion evaluation of Bessel function
################################################
import math
# dx = 0.05 # Due to difference from special.jv table we need to keep this value the same.
eps = 1e-7 # Sum terms until the series contribution is less than eps.
xt = np.arange(0.0,10.0+dx,dx); nxt = np.size(xt)
sum_table = np.zeros((nxt,7));
sum_table[:,0] = xt
sum_diff = np.zeros((nxt,7));
sum_diff[:,0] = xt
maxm = 0
fig = plt.figure()
ax = plt.subplot(111)
for n in np.arange(0,6,1):
Jn = np.zeros(nxt)
for m in range(30):
dJn = (-1)**m/(math.factorial(m)*math.factorial(n+m))*(xt/2)**(n+2*m)
Jn = Jn + dJn
maxm = m if m > maxm else maxm # Save max m needed in summations
if abs(dJn[-1]) < eps: # Only add as many terms as needed.
break
legnd = "$J_"+str(n)+"(x)$"
plt.plot(xt,Jn,label=legnd)
sum_table[:,n+1] = Jn
sum_diff[:,n+1] = sum_table[:,n+1]-jv_table[:,n+1]
plt.xlabel('x')
plt.ylabel('Besselfunction $1^{st}$ Kind')
plt.legend();
plt.title('Bessel Function: Summation');
plt.show()
# Output table
fancy_headers = ["x", "J_0(x)","J_1(x)","J_2(x)","J_3(x)","J_4(x)","J_5(x)"]
N = int(0.5/dx) # Spacing to have 0.5 output interval
print('\nBessel Function First Kind: Summation')
print(tabulate(sum_table[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
print('\nDifference Summation solution - special.jv: ϵ ',eps,' Max m',maxm)
print(tabulate(sum_diff[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".3e",".3e",".3e",".3e",".3e",".3e")))
## PART 3 Continued: Solution using differential equation solution.
################################################
# ODE solution for Bessel functions:
################################################
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
from tabulate import tabulate
# Equations are:
# x^2 d^2y/dx^2 + x dy/dx + (x^2-n^2) y = 0
# Create pair of first order equations
# z = dy/dx
# x^2 dz/dx + x dy/dx + (x^2-n^2) y = 0
# dz/dx = (-x dy/dx - (x^2-n^2) y)/x^2
# = -1/x dy/dx - (x^2-n^2)/x^2 y
#
# [dz/dx ; dy/dx ] = [ -1/x -(x^2-n^2)/x^2 ; 1 0 ] [dy/dx y]
#
# The code below is adapted from:
# https://pythonnumericalmethods.berkeley.edu/notebooks/chapter22.06-Python-ODE-Solvers.html
#
x0 = 1.0 # Initial location to start integration (x=0 has problems
         # due to division by zero)
# dx = 0.05 # Due to difference from special.jv table we need to keep this value the same.
eps = 1e-7 # Set relative and absolute error tolerance. Looking at the differences below,
           # eps=1e-8 is needed to retain 6 significant digits at x=10 and n=5.
# We will integrate up and down from the initial value chosen. (We could have
# started at dx and only integrated up, see test code at the bottom of the notebook, but
# the technique used here is likely more accurate.)
xu = np.arange(x0,10.0+dx,dx); nxu = np.size(xu)
xd = np.arange(x0, 0.0 ,-dx); nxd = np.size(xd)
xt = np.arange(0.0,10.0+dx,dx); nxt = np.size(xt) # This is sorted final table.
ode_table = np.zeros((nxt,7));
ode_table[:,0] = xt
ode_diff = np.zeros((nxt,7));
ode_diff[:,0] = xt
# lambda is a quick way to define a function that has only one-line of code.
# Here this is matrix multiply. s is [y';y] and output in [y'';y']
F = lambda x, s: np.dot(np.array([[-1/x, -(x**2-n**2)/x**2], [1, 0]]), s)
fig = plt.figure()
ax = plt.subplot(111)
for n in np.arange(0,6,1):
IV = 0.0 if n else 1.0 # Initial value at x=0 (this value not computed due to 1/x terms)
    IC = [jvp(n,x0),jv(n,x0)] # Initial conditions (use the special function, but we could have
                              # used the summation formula. There is a derivative summation
                              # formula as well. If x0 << 1 then only a few terms are needed in
                              # the summation)
solu = solve_ivp(F, [xu[0],xu[-1]], IC, t_eval=xu, \
                     rtol = eps, atol = eps) # Solution integrating up towards 10.
sold = solve_ivp(F, [xd[0],xd[-1]], IC, t_eval=xd, \
rtol = eps, atol = eps) # Solution integrating down towards zero. Notice
# that x=0 is not included due to divide by zero
# problem.
    downJ = np.flip(sold.y.T[:,1]) # Reverse the downward integration so that x increases.
    upJ = solu.y.T[1:,1]           # Up solution; start at index one because the IC is already
                                   # in the downward array.
    allJ = np.concatenate((np.array([IV]),downJ, upJ)) # Create the whole xt array.
legnd = "$J_"+str(n)+"(x)$"
plt.plot(xt,allJ,label=legnd)
    ode_table[:,n+1] = allJ # Add column to table.
ode_diff[:,n+1] = ode_table[:,n+1]-jv_table[:,n+1] # Compute difference from special.jv
plt.xlabel('x')
plt.ylabel('Besselfunction $1^{st}$ Kind')
plt.legend();
plt.title('Bessel Function: ODE solution');
plt.show()
# Output table
fancy_headers = ["x", "J_0(x)","J_1(x)","J_2(x)","J_3(x)","J_4(x)","J_5(x)"]
N = int(0.5/dx) # Spacing to have 0.5 output interval
print('\nBessel Function First Kind: ODE solution')
print(tabulate(ode_table[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
print('\nDifference ODE solution - special.jv: ϵ ',eps)
print(tabulate(ode_diff[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".3e",".3e",".3e",".3e",".3e",".3e")))
#################
## Pandas dataframe display highlighting values that exceed 10*eps tolerance
#################
import pandas as pd
df = pd.DataFrame(ode_diff[0::N], \
columns=['x', '$J_0(x)$', '$J_1(x)$','$J_2(x)$','$J_3(x)$','$J_4(x)$','$J_5(x)$',])
def abs_err(v, props=''):
return props if abs(v) > eps*10 and v<0.1 else None
mapper = {'x': '{0:.1f}',
'$J_0(x)$' : '{0:.3e}',
'$J_1(x)$' : '{0:.3e}',
'$J_2(x)$' : '{0:.3e}',
'$J_3(x)$' : '{0:.3e}',
'$J_4(x)$' : '{0:.3e}',
'$J_5(x)$' : '{0:.3e}'}
df.style.hide_index().applymap(abs_err, props='color:red;').format(mapper)\
.set_caption("Error in ODE Bessel Functions")
## PART 4: Use recursion algorithm.
# use special.jv to generate J_0 and J_1 to start the recursion
# No plots in this case.
########################################
# Recursive solution
########################################
# J_{n+1} = 2n/x J_n(x)-J_{n-1}(x)
# x = 0 will be an issue
xt = np.arange(0.,10.+dx,dx); nxt = np.size(xt)
rec_table = np.zeros((np.size(xt),7));
rec_table[:,0] = xt
rec_diff = np.zeros((np.size(xt),7));
rec_diff[:,0] = xt
k = 0;
# Use the special jv function to generate the J_0 and J_1 values
rec_table[:,1] = jv(0,xt) ; rec_table[:,2] = jv(1,xt)
rec_diff[:,1] = 0 ; rec_diff[:,2] = 0
for n in range(1,5) :
# Generate n+1 term (Saved into n+2 slot because xt in first column)
# Start at second element so that x=0 is not in the calculation.
# All terms are zero except J_0
rec_table[1:,n+2] = 2*n*rec_table[1:,n+1]/xt[1:] - rec_table[1:,n]
rec_diff[:,n+2] = rec_table[:,n+2] - jv_table[:,n+2]
fancy_headers = ["x", "J_0(x)","J_1(x)","J_2(x)","J_3(x)","J_4(x)","J_5(x)"]
N = int(0.5/dx) # Spacing to have 0.5 output interval
print('\nBessel Function First Kind: Recursion solution')
print(tabulate(rec_table[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
print('\nDifference recursion solution - special.jv')
print(tabulate(rec_diff[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".3e",".3e",".3e",".3e",".3e",".3e")))
## PART 5: Use the θ integral approach
# Just tabulate values, no plot since all look the same.
################################################
# Integral solution for Bessel functions:
################################################
import scipy.integrate as integrate
import scipy.special as special
eps = 1e10 # Used to set accuracy of quad integration. Works well; all errors are at
           # double precision, so not sure epsabs and epsrel do anything. Setting
           # eps to 1e-1 changes errors slightly but they are still of order 1e-14
xt = np.arange(0.,10.+dx,dx) # This array has to match jv calculation so that
# differences can be computed.
int_table = np.zeros((np.size(xt),7));
int_table[:,0] = xt
int_diff = np.zeros((np.size(xt),7));
int_diff[:,0] = xt
k = 0;
for n in range(6) :
k = 0
for x in xt:
        result = integrate.quad(lambda θ : np.cos(x*np.sin(θ)-n*θ)/np.pi, 0, np.pi, \
                                epsabs = eps, epsrel = eps)
int_table[k,n+1] = result[0]
int_diff[k,n+1] = result[0] - jv_table[k,n+1]
k += 1
fancy_headers = ["x", "J_0(x)","J_1(x)","J_2(x)","J_3(x)","J_4(x)","J_5(x)"]
N = int(0.5/dx) # Spacing to have 0.5 output interval
print('\nBessel Function First Kind: integral solution')
print(tabulate(int_table[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".6f",".6f",".6f",".6f",".6f",".6f")))
print('\nDifference integral solution - special.jv: ϵ ',eps)
print(tabulate(int_diff[0::N], fancy_headers, tablefmt="fancy_grid", \
floatfmt=(".1f", ".3e",".3e",".3e",".3e",".3e",".3e")))
## TEST CODE for ODE solution exploring initial conditions and starting
# value of x0
################################################
# ODE solution for Bessel functions: Testing code
################################################
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
# Equations are:
# x^2 d^2y/dx^2 + x dy/dx + (x^2-n^2) y = 0
# Create pair of first order equations
# z = dy/dx
# x^2 dz/dx + x dy/dx + (x^2-n^2) y = 0
# dz/dx = (-x dy/dx - (x^2-n^2) y)/x^2
# = -1/x dy/dx - (x^2-n^2)/x^2 y
#
# [dz/dx ; dy/dx ] = [ -1/x -(x^2-n^2)/x^2 ; 1 0 ] [dy/dx y]
n = 5 ; # Try different orders
x0 = 1.e-8 # Try reducing x0 to see what happens; keep an eye on the scale.
           # The shape will be OK but the amplitude will decrease if x0 < 1e-2.
IC = [jvp(n,x0),jv(n,x0)]
F = lambda x, s: np.dot(np.array([[-1/x, -(x**2-n**2)/x**2], [1, 0]]), s)
t_eval = np.arange(x0, 10.0+2*x0, 0.01)
sol = solve_ivp(F, [x0, 10+2*x0], IC, t_eval=t_eval, \
rtol = 1e-13, atol = 1e-15)
plt.figure(figsize = (12, 8))
plt.plot(sol.t, sol.y.T[:, 0], sol.t, sol.y.T[:, 1])
plt.xlabel('x')
plt.ylabel('Bessel and Derivative')
plt.legend(['Deriv','B_n'])
plt.show()
###Output
_____no_output_____ |
testUnet1pctCO2.ipynb | ###Markdown
Multi-region analysis In this notebook we analyse the ability of a model trained on one region, say region A, to infer the subgrid forcing on a different region, say region B.
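The comparison below comes down to a normalized error map: the time-mean squared error of the predicted forcing divided by the time-mean square of the true forcing, so values below 1 indicate that the network retains some skill on the new region. A minimal sketch of that metric (assuming an xarray dataset with the `S_x` and `S_xpred` variables used later in this notebook):
```python
import xarray as xr

def normalized_error(ds: xr.Dataset, true_var: str = 'S_x', pred_var: str = 'S_xpred'):
    """MSE of the prediction normalized by the mean square of the truth, per grid point."""
    err = (ds[pred_var] - ds[true_var]) ** 2
    return err.mean(dim='time') / (ds[true_var] ** 2).mean(dim='time')
```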
###Code
import mlflow
import xarray as xr
import matplotlib.pyplot as plt
from analysis.utils import select_run
plt.rcParams["figure.figsize"] = (15, 10)
def plot_dataset(dataset : xr.Dataset, plot_type = None, *args, **kargs):
"""Calls the plot function of each variable in the dataset"""
plt.figure(figsize = (20, 5 * int(len(dataset) / 2)))
kargs_ = [dict() for i in range(len(dataset))]
def process_list_of_args(name: str):
if name in kargs:
if isinstance(kargs[name], list):
for i, arg_value in enumerate(kargs[name]):
kargs_[i][name] = arg_value
else:
for i in range(len(dataset)):
kargs_[i][name] = kargs[name]
kargs.pop(name)
process_list_of_args('vmin')
process_list_of_args('vmax')
for i, variable in enumerate(dataset):
plt.subplot(int(len(dataset) / 2), 2, i + 1)
if plot_type is None:
try:
# By default we set the cmap to coolwarm
kargs.setdefault('cmap', 'coolwarm')
dataset[variable].plot(*args, **kargs_[i], **kargs)
except AttributeError as e:
kargs.pop('cmap', None)
dataset[variable].plot(*args, **kargs)
else:
plt_func = getattr(dataset[variable].plot, plot_type)
plt_func(*args, **kargs)
import matplotlib.animation as animation
def dataset_to_movie(dataset : xr.Dataset, interval : int = 50,
*args, **kargs):
"""Generates animations for all the variables in the dataset"""
fig = plt.figure(figsize = (20, 5 * int(len(dataset) / 2)))
axes = list()
ims = list()
for i, variable in enumerate(dataset.keys()):
axes.append(fig.add_subplot(int(len(dataset) / 2), 2, i + 1))
for i, t in enumerate(dataset['time']):
im = list()
for axis, variable in zip(axes, dataset.keys()):
plt.sca(axis)
img = dataset[variable].isel(time=i).plot(vmin=-2, vmax=2,
cmap='coolwarm')
cb = img.colorbar
cb.remove()
im.append(img)
ims.append(im)
ani = animation.ArtistAnimation(fig, ims,
interval=interval, blit=True,
repeat_delay=1000)
return ani
client = mlflow.tracking.MlflowClient()
client.list_experiments()
import sys  # needed for sys.exit() below
import pandas as pd
def select_run(limit=1000, sort_by=None, cols=None, merge=None, *args, **kargs):
"""Allows to select a run from the tracking store interactively"""
mlflow_runs = mlflow.search_runs(*args, **kargs)
if cols is None:
cols = list()
cols = ['run_id', 'experiment_id' ] + cols
mlflow_runs = mlflow_runs.iloc[:limit]
# Remove possible duplicate columns
new_cols = list()
for e in cols:
if e not in new_cols:
new_cols.append(e)
cols = new_cols
print(len(mlflow_runs))
if merge is not None:
cols[cols.index('run_id')] = 'run_id_x'
cols[cols.index('experiment_id')] = 'experiment_id_x'
for name, key_left, key_right in merge:
experiment = mlflow.get_experiment_by_name(name)
df2 = mlflow.search_runs(experiment_ids=experiment.experiment_id)
mlflow_runs = pd.merge(mlflow_runs, df2, left_on=key_left,
right_on=key_right)
print(len(mlflow_runs))
if len(mlflow_runs) == 0:
raise Exception('No data found. Check that you correctly set \
the store')
if sort_by is not None:
mlflow_runs = mlflow_runs.sort_values(by=sort_by, ascending=False)
cols.append(sort_by)
print(mlflow_runs[cols])
id_ = int(input('Run id?'))
if id_ < 0:
sys.exit()
return mlflow_runs.loc[id_, :]
cols = ['start_time_x','params.model_cls_name', 'metrics.test loss', 'params.lat_min',
'params.lat_max', 'params.long_min', 'params.long_max', 'params.n_epochs_x', 'params.model_run_id']
run = select_run(sort_by='start_time_x', cols=cols, merge=[('Unet', 'params.model_run_id', 'run_id'),
('forcingdata1pct', 'params.data_run_id', 'run_id')], experiment_ids = ['11',])
for k,v in run.items():
print(f'{k}: {v}')
data_run_id = run['params.data_run_id']
data_run = client.get_run(data_run_id)
from analysis.base import get_test_datasets
test_datasets = get_test_datasets(run['run_id_x'])
id = 0
error = (test_datasets[id]['S_xpred'] - test_datasets[id]['S_x'])
error0 = test_datasets[id]['S_x']
((error**2).mean(dim='time') / (error0**2).mean(dim='time')).plot(vmin=0.2, vmax=1)
import numpy as np
model_output = test_datasets[id]
model_output['S_xscale'] = 1/(model_output['S_xscale'])
model_output['S_yscale'] = 1/(model_output['S_yscale'])
model_output['err_S_x'] = (model_output['S_x'] - model_output['S_xpred'])**2
model_output['err_S_y'] = (model_output['S_y'] - model_output['S_ypred'])**2
model_output['time_index'] = xr.DataArray(np.arange(len(model_output.coords['time'])),
dims = ('time',),
coords = {'time' : model_output['time']})
model_output = model_output.swap_dims({'time' : 'time_index'})
from random import randint
n_times = len(model_output['time'])
random_time = randint(0, n_times)
print(random_time)
plot_dataset(model_output.isel(time_index=random_time)[['u_surf', 'v_surf', 'S_x', 'S_y', 'S_xpred', 'S_ypred',
'S_xscale', 'S_yscale', 'err_S_x', 'err_S_y']],
vmin = [-2]*6 + [0., 0., 0., 0.], vmax = [2]*6 + [1, 1,1,1])
(model_output['err_S_x']).mean(dim='time_index').plot(vmax=1)
fig = plt.figure(figsize=(30, 30))
long = -45
lat = 44
plt.subplot(2, 1, 1)
time = slice(0, 600)
model_output['S_x'].isel(time_index=time).sel(longitude=long, latitude=lat, method='nearest').plot(linewidth=3)
model_output['S_xpred'].isel(time_index=time).sel(longitude=long, latitude=lat, method='nearest').plot(linewidth=3)
uB = model_output['S_xpred'] + 1.96 * model_output['S_xscale']
lB = model_output['S_xpred'] - 1.96 * model_output['S_xscale']
uB.isel(time_index=time).sel(longitude=long, latitude=lat, method='nearest').plot(linestyle='--',color='gray')
lB.isel(time_index=time).sel(longitude=long, latitude=lat, method='nearest').plot(linestyle='--',color='gray')
plt.legend(('Sx', 'Sx_pred'))
correlations = (model_output['S_y'] * model_output['S_ypred']).mean(dim='time_index') / np.sqrt((model_output['S_y']**2).mean(dim='time_index') * (model_output['S_ypred']**2).mean(dim='time_index'))
correlations.plot(vmin=0.1, vmax=1)
###Output
/home/ag7531/miniconda3/envs/mlflow-2428cb47d93b4486f90b5f8bb1835e697d4a2328/lib/python3.7/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide
x = np.divide(x1, x2, out)
|
Process.ipynb | ###Markdown
Data extraction from csv file
###Code
# Imports needed by this and the following cells (csv/string/re are standard library;
# the stop-word list is assumed to come from NLTK -- adjust if a different list was used)
import csv
import re
import string
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
author = []
content = []
date = []
title = []
with open('data.csv', 'r', encoding='utf8') as file:
reader = csv.reader(file)
#fields = next(reader)
for row in reader:
author.append(row[0])
content.append(row[1])
date.append(row[2])
title.append(row[3])
###Output
_____no_output_____
###Markdown
Data cleaning
###Code
def cleaning(rawcontent):
#rawcontent is known to be a list of strings
content=[]
textcontent=[]
alphabets = set(string.ascii_letters)
endings = set([' ', '!', '?', '.'])
for text in rawcontent:
# for each string "text" in rawcontent
clean = ""
        # Keep only alphabetic characters (lower-cased) and sentence-ending characters
for letter in text:
if letter in alphabets:
clean+=letter.lower()
elif letter in endings:
clean+=letter
        # remove stop words (high occurrence but no use)
clean = clean.split()
clean=[word for word in clean if word not in stop_words]
content.append(set(clean))
textcontent.append(' '.join(clean))
return (content, textcontent)
titles, texttitles = cleaning(title)
contents, textcontents = cleaning(content)
bjp = ['bjp', 'amit shah', 'narendra modi', 'aadityanath yogi']
congress = ['congress', 'inc', 'ncp', 'rahul gandhi', 'sonia gandhi']
socialmedia = ['facebook', 'wechat', 'whatsapp', 'instagram', 'twitter', 'youtube', 'qq', 'tumblr', 'reddit', 'tiktok', 'linkedin', 'snapchat', 'pinterest']
news = ['aaj tak', 'abp news', 'cnbc', 'ndtv', 'india tv', 'republic bharat', 'news18 india', 'zee news', 'cnn', 'dd india', 'times now', 'bbc']
def count(trends):
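    # Count articles whose cleaned title or content mentions at least one of the given trends;
    # multi-word trends are matched as whole phrases via a word-boundary regex.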
counter = 0
for x in range(len(contents)):
for trend in trends:
if ' ' not in trend:
                if trend in titles[x] or trend in contents[x]:  # single-word trend: match the word itself
counter+=1
break
else:
if all(word in titles[x] or word in contents[x] for word in trend.split()):
rgx = '\\b' + trend + '\\b'
if re.search(rgx, textcontents[x]) or re.search(rgx, texttitles[x]):
counter+=1
break
return counter
print("Social media: " + str(count(socialmedia)))
print("BJP: " + str(count(bjp)))
print("Congress: " + str(count(congress)))
print("News networks: " + str(count(news)))
###Output
Social media: 125
BJP: 403
Congress: 265
News networks: 233
###Markdown
FE
###Code
from sklearn.neighbors import KNeighborsClassifier
def apply_fe(df, ss=None, mm=None, knn_dict={}):
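    # Feature-engineering hook: currently returns a plain copy of the input; the commented-out
    # blocks below add KNN-probability and sigma-count features when enabled.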
cols = [c for c in df.columns if 'var_' in c]
df_out = df.copy(deep=True)
if ss:
df_ss = ss.transform(df[cols])
# if not knn_dict:
# for i in [50]:
# t = time.time()
# knn_dict[i] = KNeighborsClassifier(i, n_jobs=4).fit(df_ss, df['target'])
# print('Trained KNN{} in {:.2f} sec'.format(i, time.time()-t))
# for name,knn in knn_dict.items():
# t = time.time()
# df_out['knn_{}'.format(name)] = knn.predict_proba(df_ss)[:,1]
# print('Applied KNN{} in {:.2f} sec'.format(name, time.time()-t))
# for i in range(1,5):
# df_out['n_sigma{}'.format(i)] = (df_ss >= i).sum(axis=1)
# df_out['sum_sigmas'] = df_ss.sum(axis=1)
del df_ss
# if mm:
# df_mm = mm.transform(df[cols])
# df_out['sum_mm'] = df_mm.sum(axis=1)
# del df_mm
return df_out, knn_dict
from sklearn.preprocessing import RobustScaler, StandardScaler, MinMaxScaler
ss = StandardScaler().fit(df_trn.iloc[:,2:])
mm = MinMaxScaler().fit(df_trn.iloc[:,2:])
df_trn, knn = apply_fe(df_trn.iloc[:,:], ss=ss, mm=mm, knn_dict={})
df_tst, _ = apply_fe(df_tst, ss=ss, mm=mm, knn_dict=knn)
df_trn.head()
###Output
_____no_output_____
###Markdown
Training
###Code
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
from sklearn.base import clone, ClassifierMixin, RegressorMixin
import lightgbm as lgb
from sklearn.metrics import mean_squared_error, roc_auc_score
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
import time
def train_single_model(clf_, X_, y_, random_state_=314, opt_parameters_={}, fit_params_={}):
'''
A wrapper to train a model with particular parameters
'''
c = clone(clf_)
c.set_params(**opt_parameters_)
c.set_params(random_state=random_state_)
return c.fit(X_, y_, **fit_params_)
def train_model_in_CV(model, X, y, metric, metric_args={},
model_name='xmodel',
seed=31416, n=5,
opt_parameters_={}, fit_params_={},
verbose=True,
groups=None, y_eval=None,
mlf=None, mlf_metric_name=None
):
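    '''
    Train `model` in an n-fold CV loop (StratifiedKFold, or GroupKFold when groups are given),
    returning the fitted fold models, a performance summary, out-of-fold predictions
    and per-fold feature importances.
    '''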
    # the list of classifiers for the voting ensemble
clfs = []
# performance
perf_eval = {'score_i_oof': 0,
'score_i_ave': 0,
'score_i_std': 0,
'score_i': [],
'fit_time': []
}
# full-sample oof prediction
y_full_oof = pd.Series(np.zeros(shape=(y.shape[0],)),
index=y.index)
sample_weight=None
if 'sample_weight' in metric_args:
sample_weight=metric_args['sample_weight']
index_weight=None
if 'index_weight' in metric_args:
index_weight=metric_args['index_weight']
del metric_args['index_weight']
doSqrt=False
if 'sqrt' in metric_args:
doSqrt=True
del metric_args['sqrt']
if groups is None:
cv = StratifiedKFold(n, shuffle=True, random_state=seed) #Stratified
else:
cv = GroupKFold(n)
# The out-of-fold (oof) prediction for the k-1 sample in the outer CV loop
y_oof = pd.Series(np.zeros(shape=(X.shape[0],)),
index=X.index)
scores = []
clfs = []
feature_importances = []
for n_fold, (trn_idx, val_idx) in enumerate(cv.split(X, (y!=0).astype(np.int8), groups=groups)):
X_trn, y_trn = X.iloc[trn_idx], y.iloc[trn_idx]
X_val, y_val = X.iloc[val_idx], y.iloc[val_idx]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
G_trn, G_val = groups.iloc[trn_idx], groups.iloc[val_idx]
if fit_params_:
# use _stp data for early stopping
fit_params_["eval_set"] = [(X_trn,y_trn), (X_val,y_val)]
fit_params_['verbose'] = verbose
if index_weight is not None:
fit_params_["sample_weight"] = y_trn.index.map(index_weight).values
fit_params_["eval_sample_weight"] = [None, y_val.index.map(index_weight).values]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
fit_params_['group'] = G_trn.groupby(G_trn, sort=False).count()
fit_params_['eval_group'] = [G_trn.groupby(G_trn, sort=False).count(),
G_val.groupby(G_val, sort=False).count()]
#display(y_trn.head())
t = time.time()
clf = train_single_model(model, X_trn, y_trn, 314+n_fold, opt_parameters_, fit_params_)
perf_eval['fit_time'].append(time.time()-t)
clfs.append(('{}{}'.format(model_name,n_fold), clf))
# oof predictions
if isinstance(clf, RegressorMixin):
y_oof.iloc[val_idx] = clf.predict(X_val)
elif isinstance(clf, ClassifierMixin):
y_oof.iloc[val_idx] = clf.predict_proba(X_val)[:,1]
else:
y_oof.iloc[val_idx] = clf.predict(X_val)
# prepare weights for evaluation
if sample_weight is not None:
metric_args['sample_weight'] = y_val.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_val.index.map(index_weight).values
# prepare target values
y_true_tmp = y_val if 'LGBMRanker' not in type(model).__name__ and y_eval is None else y_eval.iloc[val_idx]
y_pred_tmp = y_oof.iloc[val_idx] if y_eval is None else y_oof.iloc[val_idx]
#store evaluated metric
metric_value = metric(y_true_tmp, y_pred_tmp, **metric_args)
scores.append(metric_value)
if mlf is not None:
mlf.log_metric("{}_Fold{}".format(mlf_metric_name, n_fold), metric_value)
#
fi_tmp = pd.DataFrame()
fi_tmp["feature"] = X.columns
if hasattr(clf, 'feature_importances_'):
fi_tmp["importance"] = clf.feature_importances_
fi_tmp["fold"] = n_fold + 1
feature_importances.append(fi_tmp)
#cleanup
del X_trn, y_trn, X_val, y_val, y_true_tmp, y_pred_tmp
# Store performance info for this CV
if sample_weight is not None:
metric_args['sample_weight'] = y_oof.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_oof.index.map(index_weight).values
perf_eval['score_i_oof'] = metric(y, y_oof, **metric_args)
perf_eval['score_i'] = scores
if doSqrt:
for k in perf_eval.keys():
if 'score' in k:
perf_eval[k] = np.sqrt(perf_eval[k])
scores = np.sqrt(scores)
perf_eval['score_i_ave'] = np.mean(scores)
perf_eval['score_i_std'] = np.std(scores)
return clfs, perf_eval, y_oof, pd.concat(feature_importances, axis=0)
def print_perf_clf(name, perf_eval):
print('Performance of the model:')
print('Mean(Val) score inner {} Classifier: {:.4f}+-{:.4f}'.format(name,
perf_eval['score_i_ave'],
perf_eval['score_i_std']
))
print('Min/max scores on folds: {:.4f} / {:.4f}'.format(np.min(perf_eval['score_i']),
np.max(perf_eval['score_i'])))
print('OOF score inner {} Classifier: {:.4f}'.format(name, perf_eval['score_i_oof']))
print('Scores in individual folds: {}'.format(['{:.4f}'.format(x) for x in perf_eval['score_i']]))
def learning_rate_decay_power(current_iter):
'''
    The function defines learning rate decay for LGBM
'''
base_learning_rate = 1e-1
min_lr = 1e-2
lr = base_learning_rate * np.power(.95, current_iter)
return lr if lr > min_lr else min_lr
def learning_rate_steps_5k(current_iter):
'''
    The function defines a stepped learning rate decay (every 5000 iterations) for LGBM
'''
period = 5000
return 1e-2 / (1 + current_iter//period)
learning_rate_decay_power(1000)
mdl_inputs = {
'lgbm_base': (
lgb.LGBMClassifier(
max_depth=-1,
min_child_samples=40,
random_state=314,
silent=True,
metric='None',
n_jobs=4,
n_estimators=25000,
importance_type='gain'),
{
'colsample_bytree': 0.02,
'subsample': 0.05,
'min_child_weight': 100.0,
'min_child_samples': 100,
'learning_rate': 0.005,
'num_leaves': 25,
# 'class_weight': 'balanced'
},
# {
# 'colsample_bytree': 0.05,
# 'subsample': 0.05,
# 'min_child_weight': 100.0,
# 'min_child_samples': 100,
# 'learning_rate': 0.01,
# 'num_leaves': 15,
# # 'scale_pos_weight': 0.3,
# # 'reg_alpha': 0,
# # 'reg_lambda': 50,
# # 'class_weight': 'balanced'
# },
{
"early_stopping_rounds": 1000,
"eval_metric": 'auc',
# 'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_steps_5k)],
# 'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_decay_power)],
},
df_trn['target'],
None),
# 'lgbm_rf': (lgb.LGBMClassifier(boosting_type='rf', reg_lambda=0, #min_child_samples=400,
# max_depth=-1, random_state=314, silent=True, metric='None',
# n_jobs=4, bagging_freq=1, importance_type='gain'),
# {'colsample_bytree': 0.05, 'min_child_samples': 300, 'min_child_weight': 100,
# 'num_leaves': 100, 'subsample': 0.2, 'n_estimators':1500,},
# {},
# df_trn['target'],
# None),
}
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_uniform
param_test = {
'num_leaves': [7, 15, 25, 70],#sp_randint(6, 100),
# 'min_child_samples': [100,300,500,700,900],#sp_randint(100, 1000),
'min_child_weight': [10,100],#[1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],
'subsample': [0.1, 0.15, 0.2, 0.4],#sp_uniform(loc=0.5, scale=0.5),
'colsample_bytree': [0.02,0.05,0.1],#[0.35,0.50,0.65,0.75,0.85,0.95],#sp_uniform(loc=0.5, scale=0.5),
# 'reg_alpha': [0, 1e-1],
# 'reg_lambda': [0, 1e-3, 1e-1, 1]
}
from sklearn.model_selection import ParameterSampler, ParameterGrid
par_sampler = ParameterSampler(param_test, n_iter=2, random_state=31416)
par_grid = ParameterGrid([#{'colsample_bytree':[0.035,0.065]},
#{'subsample': [0.1, 0.15, 0.25, 0.3, 0.4]}
{'n_estimators': [4000, 8000]}
])
features_not2use = ['ID_code', 'target']
# for v in ['var_89', 'var_139', 'var_12', 'var_81']:
# kg.plot_var_for2classes(df_trn, v, bins=50,
# target_name='target')
# plt.title(v)
###Output
_____no_output_____
###Markdown
Hyper-parameter optimisation
###Code
feats = [c for c in df_trn.columns if c not in features_not2use]
n_trn = None
seed_cv = 31416
mdls = {}
results = {}
y_oofs = {}
fis = {} # feature importances
mlflow.set_experiment('TMP')
for name, (mdl, mdl_pars, fit_pars, y_, g_) in mdl_inputs.items():
for ps in par_sampler: # par_sampler
mdl_pars_ = copy.deepcopy(mdl_pars)
mdl_pars_.update(ps)
with mlflow.start_run(source_type=SourceType.NOTEBOOK,
source_version=kg.get_last_git_commit()):
print('--------------- {} -----------'.format(name))
mlflow.set_tag('model_name', name)
# Logging
for k, v in mdl_pars_.items():
mlflow.log_param(k, v)
if n_trn is not None:
mlflow.log_param('n_trn', n_trn)
mlflow.log_param('seed_cv', seed_cv)
mdl_, perf_eval_, y_oof_, fi_ = train_model_in_CV(
mdl,
df_trn[feats].iloc[:n_trn, :],
y_.iloc[:n_trn],
roc_auc_score,
metric_args={},
model_name=name,
opt_parameters_=mdl_pars_,
fit_params_=fit_pars,
n=n_cv,
seed=seed_cv,
verbose=2000, groups=g_,
mlf=mlflow, mlf_metric_name='AUC'
)
results[name] = perf_eval_
mdls[name] = mdl_
y_oofs[name] = y_oof_
fis[name] = fi_
print_perf_clf(name, perf_eval_)
# metrics
mlflow.log_metric("AUC", perf_eval_['score_i_ave'])
mlflow.log_metric("AUC_STD", perf_eval_['score_i_std'])
# for i in range(n_cv):
# m = mdls['lgbm_base'][i][1]
# for j in range(m.best_iteration_):
# mlflow.log_metric("AUC_Fold{}".format(i), m.evals_result_['valid_1']['auc'][j])
fit_time_ave = np.mean(perf_eval_['fit_time'])
if hasattr(mdl_[0][1],'n_estimators'):
n_trees_ave = np.nanmean([m[1].n_estimators for m in mdl_])
if getattr(mdl_[0][1], 'best_iteration_', None) is not None:
n_trees_ave = np.nanmean([m[1].best_iteration_ for m in mdl_])
mlflow.log_metric('Time_sec', fit_time_ave)
mlflow.log_metric('N_trees', n_trees_ave)
mlflow.log_metric('Time_per_tree_msec', fit_time_ave / n_trees_ave * 1000)
# artifacts
# OOF
y_oof_.to_csv('out/oof.csv', index=False)
mlflow.log_artifact(os.getcwd()+'/out/oof.csv')
# Submission
sub = pd.DataFrame(index=df_tst['ID_code'])
sub['target'] = 0
for n, m in mdl_:
sub['target'] += m.predict_proba(df_tst[feats])[:,1]/n_cv
sub.to_csv('out/sub.csv', index=False)
mlflow.log_artifact(os.getcwd()+'/out/sub.csv')
#
for v in fis:
kg.display_importances(fis[v], n_feat=20, fout_name='out/fi.png')
mlflow.log_artifact(os.getcwd()+'/out/fi.png')
###Output
--------------- lgbm_base -----------
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.893961 valid_1's auc: 0.874072
[4000] training's auc: 0.917147 valid_1's auc: 0.88836
[6000] training's auc: 0.929889 valid_1's auc: 0.894898
[8000] training's auc: 0.93826 valid_1's auc: 0.89824
[10000] training's auc: 0.944614 valid_1's auc: 0.899907
[12000] training's auc: 0.950071 valid_1's auc: 0.900662
[14000] training's auc: 0.95505 valid_1's auc: 0.901063
[16000] training's auc: 0.959597 valid_1's auc: 0.90119
[18000] training's auc: 0.963753 valid_1's auc: 0.90127
[20000] training's auc: 0.967584 valid_1's auc: 0.901323
[22000] training's auc: 0.971056 valid_1's auc: 0.90134
Early stopping, best iteration is:
[21270] training's auc: 0.969818 valid_1's auc: 0.901365
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.894929 valid_1's auc: 0.871847
[4000] training's auc: 0.91747 valid_1's auc: 0.886087
[6000] training's auc: 0.930292 valid_1's auc: 0.892348
[8000] training's auc: 0.938604 valid_1's auc: 0.895209
[10000] training's auc: 0.9449 valid_1's auc: 0.896517
[12000] training's auc: 0.950355 valid_1's auc: 0.897083
[14000] training's auc: 0.955324 valid_1's auc: 0.897296
[16000] training's auc: 0.959826 valid_1's auc: 0.897449
Early stopping, best iteration is:
[16392] training's auc: 0.960661 valid_1's auc: 0.897475
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.894431 valid_1's auc: 0.866799
[4000] training's auc: 0.916982 valid_1's auc: 0.883079
[6000] training's auc: 0.929691 valid_1's auc: 0.891188
[8000] training's auc: 0.938019 valid_1's auc: 0.895246
[10000] training's auc: 0.944432 valid_1's auc: 0.897379
[12000] training's auc: 0.949947 valid_1's auc: 0.898481
[14000] training's auc: 0.95492 valid_1's auc: 0.899012
[16000] training's auc: 0.959495 valid_1's auc: 0.899255
[18000] training's auc: 0.963659 valid_1's auc: 0.899384
Early stopping, best iteration is:
[18581] training's auc: 0.964798 valid_1's auc: 0.89943
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.893642 valid_1's auc: 0.873265
[4000] training's auc: 0.916329 valid_1's auc: 0.888682
[6000] training's auc: 0.929177 valid_1's auc: 0.895484
[8000] training's auc: 0.937557 valid_1's auc: 0.898933
[10000] training's auc: 0.944004 valid_1's auc: 0.900649
[12000] training's auc: 0.949543 valid_1's auc: 0.901519
[14000] training's auc: 0.95455 valid_1's auc: 0.901917
[16000] training's auc: 0.959187 valid_1's auc: 0.902105
[18000] training's auc: 0.963424 valid_1's auc: 0.902195
Early stopping, best iteration is:
[18127] training's auc: 0.963685 valid_1's auc: 0.902208
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.89576 valid_1's auc: 0.863383
[4000] training's auc: 0.91819 valid_1's auc: 0.879039
[6000] training's auc: 0.930794 valid_1's auc: 0.886453
[8000] training's auc: 0.93907 valid_1's auc: 0.890065
[10000] training's auc: 0.94543 valid_1's auc: 0.891983
[12000] training's auc: 0.950807 valid_1's auc: 0.893062
[14000] training's auc: 0.955689 valid_1's auc: 0.89355
[16000] training's auc: 0.960191 valid_1's auc: 0.893776
[18000] training's auc: 0.964311 valid_1's auc: 0.893866
[20000] training's auc: 0.968089 valid_1's auc: 0.893977
Early stopping, best iteration is:
[20089] training's auc: 0.968243 valid_1's auc: 0.893991
Performance of the model:
Mean(Val) score inner lgbm_base Classifier: 0.8989+-0.0029
Min/max scores on folds: 0.8940 / 0.9022
OOF score inner lgbm_base Classifier: 0.8988
Scores in individual folds: ['0.9014', '0.8975', '0.8994', '0.9022', '0.8940']
The list of features with 0 importance:
[]
--------------- lgbm_base -----------
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.97108 valid_1's auc: 0.899269
[4000] training's auc: 0.979587 valid_1's auc: 0.901006
[6000] training's auc: 0.98588 valid_1's auc: 0.901505
[8000] training's auc: 0.990515 valid_1's auc: 0.901637
Early stopping, best iteration is:
[8479] training's auc: 0.991466 valid_1's auc: 0.901688
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.971919 valid_1's auc: 0.89464
[4000] training's auc: 0.980029 valid_1's auc: 0.89692
[6000] training's auc: 0.986173 valid_1's auc: 0.897472
[8000] training's auc: 0.990764 valid_1's auc: 0.897738
Early stopping, best iteration is:
[8026] training's auc: 0.99082 valid_1's auc: 0.897748
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.970306 valid_1's auc: 0.89693
[4000] training's auc: 0.979002 valid_1's auc: 0.899201
[6000] training's auc: 0.985571 valid_1's auc: 0.899793
[8000] training's auc: 0.990391 valid_1's auc: 0.900065
Early stopping, best iteration is:
[8804] training's auc: 0.991943 valid_1's auc: 0.900171
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.970738 valid_1's auc: 0.899868
[4000] training's auc: 0.979203 valid_1's auc: 0.901959
[6000] training's auc: 0.985564 valid_1's auc: 0.902623
Early stopping, best iteration is:
[6827] training's auc: 0.987756 valid_1's auc: 0.902754
Training until validation scores don't improve for 1000 rounds.
[2000] training's auc: 0.971641 valid_1's auc: 0.890678
[4000] training's auc: 0.979995 valid_1's auc: 0.892969
[6000] training's auc: 0.986019 valid_1's auc: 0.893625
[8000] training's auc: 0.990589 valid_1's auc: 0.893581
Early stopping, best iteration is:
[7517] training's auc: 0.989627 valid_1's auc: 0.893729
Performance of the model:
Mean(Val) score inner lgbm_base Classifier: 0.8992+-0.0032
Min/max scores on folds: 0.8937 / 0.9028
OOF score inner lgbm_base Classifier: 0.8990
Scores in individual folds: ['0.9017', '0.8977', '0.9002', '0.9028', '0.8937']
The list of features with 0 importance:
[]
###Markdown
Single model run
###Code
feats = [c for c in df_trn.columns if c not in features_not2use]
n_trn = None#-100000
n_ho = None
seed_cv = 31416
mdls = {}
results = {}
y_oofs = {}
fis = {} # feature importances
mlflow.set_experiment('TMP')
for name, (mdl, mdl_pars, fit_pars, y_, g_) in mdl_inputs.items():
mdl_pars_ = copy.deepcopy(mdl_pars)
with mlflow.start_run(source_type=SourceType.NOTEBOOK,
source_version=kg.get_last_git_commit()):
print('--------------- {} -----------'.format(name))
mlflow.set_tag('model_name', name)
# Logging
for k, v in mdl_pars_.items():
mlflow.log_param(k, v)
if n_trn is not None:
mlflow.log_param('n_trn', n_trn)
mlflow.log_param('seed_cv', seed_cv)
mdl_, perf_eval_, y_oof_, fi_ = train_model_in_CV(mdl,
df_trn[feats].iloc[:n_trn, :], y_.iloc[:n_trn],
roc_auc_score,
metric_args={},
model_name=name,
opt_parameters_=mdl_pars_,
fit_params_=fit_pars,
n=n_cv, seed=seed_cv,
verbose=500, groups=g_,
mlf=mlflow, mlf_metric_name='AUC')
results[name] = perf_eval_
mdls[name] = mdl_
y_oofs[name] = y_oof_
fis[name] = fi_
print_perf_clf(name, perf_eval_)
# metrics
mlflow.log_metric("AUC", perf_eval_['score_i_ave'])
mlflow.log_metric("AUC_STD", perf_eval_['score_i_std'])
# for i in range(n_cv):
# m = mdls['lgbm_base'][i][1]
# for j in range(m.best_iteration_):
# mlflow.log_metric("AUC_Fold{}".format(i), m.evals_result_['valid_1']['auc'][j])
fit_time_ave = np.mean(perf_eval_['fit_time'])
if hasattr(mdl_[0][1],'n_estimators'):
n_trees_ave = np.nanmean([m[1].n_estimators for m in mdl_])
if getattr(mdl_[0][1], 'best_iteration_', None) is not None:
n_trees_ave = np.nanmean([m[1].best_iteration_ for m in mdl_])
mlflow.log_metric('Time_sec', fit_time_ave)
mlflow.log_metric('N_trees', n_trees_ave)
        mlflow.log_metric('Time_per_tree_msec', fit_time_ave / n_trees_ave * 1000)
# # artifacts
# # OOF
# y_oof_.to_csv('out/oof.csv', index=False)
# mlflow.log_artifact(os.getcwd()+'/out/oof.csv')
# # Submission
# sub = pd.DataFrame(index=df_tst['ID_code'])
# sub['target'] = 0
# for n, m in mdl_:
# sub['target'] += m.predict_proba(df_tst[feats])[:,1]/n_cv
# sub.to_csv('out/sub.csv', index=False)
# mlflow.log_artifact(os.getcwd()+'/out/sub.csv')
# #
# kg.display_importances(fis['lgbm_base'], n_feat=20, fout_name='out/fi.png')
# mlflow.log_artifact(os.getcwd()+'/out/fi.png')
# oof's for different targets
oof_0 = y_oofs['lgbm_base'][df_trn['target']==0]
oof_1 = y_oofs['lgbm_base'][df_trn['target']==1]
# ranked oof's for different targets
oof_rnk_0 = y_oofs['lgbm_base'].rank(pct=True)[df_trn['target']==0]
oof_rnk_1 = y_oofs['lgbm_base'].rank(pct=True)[df_trn['target']==1]
plt.figure(figsize=(12,6))
oof_0.plot.hist(bins=20, alpha=0.6)
oof_1.plot.hist(bins=20, alpha=0.6)
plt.yscale('log')
plt.figure(figsize=(12,6))
oof_rnk_0.plot.hist(bins=20, alpha=0.6)
oof_rnk_1.plot.hist(bins=20, alpha=0.6)
#plt.yscale('log')
oof_rnk_0.nlargest(10)
oof_rnk_1.nlargest(10)
df_trn.loc[oof_rnk_0.nlargest(10).index]
df_trn.loc[oof_rnk_0.nlargest(10).index]
oof_0.shape
oof_1.shape
###Output
_____no_output_____
###Markdown
Importances SHAP
###Code
import shap
shap.initjs()
n_shap=10000
explainer = shap.TreeExplainer(mdls['lgbm_base'][0][1])
shap_values = explainer.shap_values(df_trn[feats].iloc[:n_shap,:])
shap.summary_plot(shap_values, df_trn[feats].iloc[:n_shap,:], plot_type="bar")
###Output
_____no_output_____
###Markdown
Define Time Structure
###Code
import json
import random
import re
import copy
with open('relations.json', 'r') as f:
relations = json.load(f)
assert isinstance(relations, dict)
mapping = {1: 'Jan', 2: 'Feb', 3: "Mar", 4: "Apr", 5: "May",
6: 'Jun', 7: 'Jul', 8: "Aug", 9: "Sep", 10: 'Oct',
11: "Nov", 12: 'Dec'}
imapping = {v: k for k, v in mapping.items()}
imapping.update({
'January': 1, 'February': 2, 'March': 3, 'April': 4, 'May': 5, 'June': 6,'July': 7, 'August': 8, 'September': 9, 'October': 10,
'November': 11, 'December': 12
})
class Time(object):
def __init__(self, time_str):
splits = [int(_) for _ in time_str.split('-')]
self.year = max(splits[0], 1)
self.month = splits[1]
self.date = splits[2]
if self.month == 1 and self.date == 1:
self.month = 0
self.date = 0
elif self.month == 0 or self.date == 0:
self.month = 0
self.date = 0
assert self.year > 0
def __gt__(self, other):
assert isinstance(other, Time)
if self.year > other.year:
return True
elif self.year < other.year:
return False
else:
if self.month > other.month:
return True
elif self.month < other.month:
return False
else:
if self.date > other.date:
return True
else:
return False
def __eq__(self, other):
assert isinstance(other, Time), other
return self.year == other.year and self.month == other.month and self.date == other.date
def __lt__(self, other):
assert isinstance(other, Time)
if self.year < other.year:
return True
elif self.year > other.year:
return False
else:
if self.month < other.month:
return True
elif self.month > other.month:
return False
else:
if self.date < other.date:
return True
else:
return False
def __repr__(self):
if self.month == 0:
return '{}'.format(self.year)
else:
return '{} {}'.format(mapping[self.month], str(self.year))
def __str__(self):
return self.__repr__()
@classmethod
def parse(cls, time):
assert isinstance(time, str)
if ' ' not in time:
return cls(f'{time}-0-0')
else:
month, year = time.split(' ')
month = month.lower().capitalize()
month = imapping[month]
return cls(f'{year}-{month}-1')
@classmethod
def minus_one_year(cls, time):
return cls('{}-{}-{}'.format(time.year - 1, time.month, time.date))
@classmethod
def minus_k_year(cls, time, k):
return cls('{}-{}-{}'.format(max(time.year - k, 2), time.month, time.date))
@classmethod
def add_one_year(cls, time):
return cls('{}-{}-{}'.format(time.year + 1, time.month, time.date))
@classmethod
def add_k_year(cls, time, k):
return cls('{}-{}-{}'.format(time.year + k, time.month, time.date))
@classmethod
def add_one_month(cls, time):
new_time = copy.deepcopy(time)
if new_time.month < 12:
new_time.month += 1
return new_time
else:
new_time.month = 1
new_time.year += 1
return new_time
def random_pop(time_range):
cur = time_range[0]
end = time_range[1]
candidates = []
cur = Time.add_one_month(cur)
while cur < end or cur == end:
candidates.append(cur)
cur = Time.add_one_month(cur)
if candidates:
return random.choice(candidates)
else:
return random.choice(time_range)
def too_close(time1, time2):
delta = (time2.year - time1.year) * 12
delta += time2.month - time1.month
return delta <= 2
def prop(time, first_last=None, difficulty='easy'):
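    # Render a time constraint as a natural-language specifier. A single Time becomes "in <time>";
    # for a (start, end) range, 'easy' keeps the full range ("from X to Y"), while 'hard' samples
    # a harder variant ("in ...", "between ... and ...", "before ...", "after ...") depending on
    # whether the range is the first/last one; ranges closer than ~2 months collapse to "in X".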
if isinstance(time, tuple) or isinstance(time, list):
assert len(time) == 2, time
assert isinstance(time[0], Time) and isinstance(time[1], Time)
if too_close(time[0], time[1]):
return 'in {}'.format(str(time[0]))
else:
if difficulty == 'easy':
option = random.choice(['between'])
elif difficulty == 'hard':
if first_last == 'first':
option = random.choice(['in', 'between-subset', 'before'])
elif first_last == 'last':
option = random.choice(['in', 'between-subset', 'after'])
elif first_last is None:
option = random.choice(['in', 'between-subset'])
else:
raise ValueError()
else:
raise ValueError()
if option == 'in':
options = ['in {}'.format(str(random_pop(time)))]
if time[1].year // 10 > time[0].year // 10:
if time[1].year % 10 >= 3:
options.append('in early {}s'.format(time[1].year // 10 * 10))
if time[0].year % 10 <= 7:
options.append('in late {}s'.format(time[0].year // 10 * 10))
return random.choice(options)
elif option == 'between':
return 'from {} to {}'.format(str(time[0]), str(time[1]))
elif option == 'between-subset':
x1 = random_pop(time)
x2 = random_pop((x1, time[1]))
return 'between {} and {}'.format(str(x1), str(x2))
elif option == 'before':
x = random_pop(time)
return 'before {}'.format(str(x))
elif option == 'after':
x = random_pop(time)
return 'after {}'.format(str(x))
else:
raise ValueError('Not Existing')
else:
return 'in {}'.format(str(time))
def link_2_name(string):
string = string.replace('/wiki/', '')
string = string.replace('_', ' ')
return string
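# Minimal usage sketch (illustrative values only): with difficulty='easy' the full range is
# kept, so the specifier below is deterministic.
assert prop((Time.parse('March 2001'), Time.parse('2005')), difficulty='easy') == 'from Mar 2001 to 2005'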
###Output
_____no_output_____
###Markdown
Generating Train/Test Dataset
###Code
from tqdm import tqdm
import gzip
import json
def enc(string, split):
string = json.dumps(string)
if split == 'train':
return (string + '\n').encode()
else:
return string + '\n'
def split_paragraphs(paras):
    # Process the data: split a flat list of paragraphs into retrieval contexts of roughly
    # <= 100 tokens; a short capitalized paragraph (<= 4 words) is treated as a new section title.
ctxs = []
buffer = {"title": paras[0], "text": ""}
for para in paras[1:]:
if para[0].isupper() and len(para.split(' ')) <= 4:
if len(buffer["text"].split(' ')) > 15:
ctxs.append(buffer)
buffer = {"title": para.strip(' .'), "text": ""}
else:
if len(buffer['text'].split(' ')) + len(para.split(' ')) > 100:
if len(buffer['text'].split(' ')) > 15:
ctxs.append(buffer)
buffer = {"title": ctxs[-1]['title'], "text": ""}
tokens = para.split(' ')
for j in range(0, len(tokens), 100):
buffer['text'] = ' '.join(tokens[j: j + 100])
ctxs.append(buffer)
buffer = {"title": ctxs[-1]['title'], "text": ""}
else:
buffer['text'] += ' ' + para
if buffer['text']:
ctxs.append(buffer)
ctxs = ctxs[:100]
return ctxs
###Output
_____no_output_____
###Markdown
Standard Train/Dev/Test
###Code
splits = ['train', 'dev', 'test']
difficulties = ['easy', 'hard']
for split in splits:
with open(f'dataset/annotated_{split}.json', 'r') as f:
data = json.load(f)
for difficulty in difficulties:
if split == 'train':
file = gzip.open(f'dataset/{split}.{difficulty}.json.gzip', 'wb')
else:
file = open(f'dataset/{split}.{difficulty}.json', 'w')
for d in tqdm(data, desc=f'{split}-{difficulty}'):
assert isinstance(d['type'], str)
paragraphs = split_paragraphs(d['paras'])
assert isinstance(paragraphs, list)
templates = relations[d['type']]['template']
template = random.choice(templates)
template = template.replace('$1', link_2_name(d['link']))
qas = []
for i, entry in enumerate(d['questions']):
assert len(re.findall('\?$', template)) == 1, template
time_step = [Time.parse(entry[0][0]), Time.parse(entry[0][1])]
assert isinstance(entry[1], list), entry[1]
assert isinstance(entry[1][0], dict), entry[1]
if i == 0:
specifier = prop(time_step, 'first', difficulty)
elif i == len(d['questions']) - 1:
specifier = prop(time_step, 'last', difficulty)
else:
specifier = prop(time_step, None, difficulty)
if '$4' in template:
question = template.replace('$4', specifier)
elif '$2' in template:
question = template.replace('$2', specifier)
else:
                    raise ValueError("It's not a template")
qas.append((question, entry[1]))
while len(qas) < 3 and difficulty == 'hard' and relations[d['type']]['mode'] == 'accumulate':
start_ = Time.parse(d['questions'][0][0][0])
end_ = Time.parse(d['questions'][-1][0][1])
options = [(Time.minus_k_year(start_, 10), start_)]
recent = Time('2020-0-0')
if end_ < recent:
options.append((end_, min(recent, Time.add_k_year(end_, 10))))
choice = random.choice(range(len(options)))
if choice == 0:
specifier = prop(options[0], 'first', difficulty)
else:
specifier = prop(options[1], 'last', difficulty)
assert '$4' in template
question = template.replace('$4', specifier)
qas.append((question, [{'para': 0, 'from': 0, 'end': 0, 'answer': ''}]))
for q_index, qs in enumerate(qas):
answers = [_['answer'] for _ in qs[1]]
if split in ['dev', 'test']:
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'paragraphs': paragraphs}
q_index += 1
file.write(enc(tmp, split))
else:
from_ = []
end_ = []
offset = [0]
for para in d['paras'][:-1]:
offset.append(len(para) + 1 + offset[-1])
for ans in qs[1]:
from_.append(offset[ans['para']] + ans['from'])
end_.append(offset[ans['para']] + ans['end'])
passage = ' '.join(d['paras'])
assert passage[from_[0]: end_[0]] == qs[1][0]['answer'], passage[from_[0]: end_[0]] + ' # ' + qs[1][0]['answer']
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'from': from_, 'end': end_, 'paragraphs': paragraphs}
file.write(enc(tmp, split))
file.close()
###Output
_____no_output_____
###Markdown
Generating easy/hard human-rewritten questions
###Code
string = 'What racing sport did Sergio Perez play from May 2005 to 2006?'
re.search(r'(from|From) (\S+|\S+ \d+) to', string).group(2)
import json
import re
def extract_time_span(string):
if 'from ' in string.lower() and ' to ' in string.lower():
#print(string)
match = re.search(r'(from|From) (\S+|\S+ \d+) to', string)
start = match.group(2)
match = re.search(r'(from|From) (\S+|\S+ \d+) to (\S+ \d+|\d+)\b', string)
end = match.group(3)
template = re.sub(r'(from|From) (\S+|\S+ \d+) to (\S+ \d+|\d+)\b', '$4', string)
assert start is not None, string
assert end is not None, string
return start, end, template
else:
return None, None, string
splits = ['train', 'test']
difficulties = ['easy', 'hard']
extract_succ, extract_fail = 0, 0
for split in splits:
with open(f'dataset/human_annotated_{split}.json', 'r') as f:
data = json.load(f)
for difficulty in difficulties:
if split == 'train':
file = gzip.open(f'dataset/human_{split}.{difficulty}.json.gzip', 'wb')
else:
file = open(f'dataset/human_{split}.{difficulty}.json', 'w')
for d in tqdm(data, desc=f'{split}-{difficulty}'):
assert isinstance(d['type'], str)
paragraphs = split_paragraphs(d['paras'])
assert isinstance(paragraphs, list)
qas = []
for i, entry in enumerate(d['questions']):
start, end, question = extract_time_span(entry[0])
if start and end:
extract_succ += 1
time_step = [Time.parse(start), Time.parse(end)]
if i == 0:
specifier = prop(time_step, 'first', difficulty)
elif i == len(d['questions']) - 1:
specifier = prop(time_step, 'last', difficulty)
else:
specifier = prop(time_step, None, difficulty)
question = question.replace('$4', specifier)
else:
extract_fail += 1
assert isinstance(entry[1], list), entry[1]
assert isinstance(entry[1][0], dict), entry[1]
qas.append((question, entry[1]))
for q_index, qs in enumerate(qas):
answers = [_['answer'] for _ in qs[1]]
if split in ['dev', 'test']:
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'paragraphs': paragraphs}
q_index += 1
file.write(enc(tmp, split))
else:
from_ = []
end_ = []
offset = [0]
for para in d['paras'][:-1]:
offset.append(len(para) + 1 + offset[-1])
for ans in qs[1]:
from_.append(offset[ans['para']] + ans['from'])
end_.append(offset[ans['para']] + ans['end'])
passage = ' '.join(d['paras'])
assert passage[from_[0]: end_[0]] == qs[1][0]['answer'], passage[from_[0]: end_[0]] + ' # ' + qs[1][0]['answer']
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'from': from_, 'end': end_, 'paragraphs': paragraphs}
file.write(enc(tmp, split))
file.close()
print('succ', extract_succ, 'fail', extract_fail)
###Output
train-easy: 100%|██████████| 320/320 [00:01<00:00, 172.65it/s]
train-hard: 100%|██████████| 320/320 [00:04<00:00, 72.41it/s]
test-easy: 100%|██████████| 257/257 [00:00<00:00, 747.30it/s]
test-hard: 100%|██████████| 257/257 [00:02<00:00, 120.17it/s]
###Markdown
Sensitivity Analysis
###Code
splits = ['dev']
difficulties = ['hard']
repeat = 3
sample_size = 500
for split in splits:
with open(f'dataset/annotated_{split}.json', 'r') as f:
data = json.load(f)
for difficulty in difficulties:
file = open(f'dataset/{split}.{difficulty}.repeat.json', 'w')
for d in tqdm(data[:sample_size], desc=f'{split}-{difficulty}'):
assert isinstance(d['type'], str)
paragraphs = split_paragraphs(d['paras'])
assert isinstance(paragraphs, list)
templates = relations[d['type']]['template']
template = random.choice(templates)
template = template.replace('$1', link_2_name(d['link']))
qas = []
for i, entry in enumerate(d['questions']):
                assert len(re.findall(r'\?$', template)) == 1, template
time_step = [Time.parse(entry[0][0]), Time.parse(entry[0][1])]
assert isinstance(entry[1], list), entry[1]
assert isinstance(entry[1][0], dict), entry[1]
if '$4' in template:
                    for _ in range(repeat):
specifier = prop(time_step, None, difficulty)
question = template.replace('$4', specifier)
qas.append((question, entry[1]))
for q_index, qs in enumerate(qas):
answers = [_['answer'] for _ in qs[1]]
if split in ['dev', 'test']:
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'paragraphs': paragraphs}
q_index += 1
file.write(enc(tmp, split))
else:
from_ = []
end_ = []
offset = [0]
for para in d['paras'][:-1]:
offset.append(len(para) + 1 + offset[-1])
for ans in qs[1]:
from_.append(offset[ans['para']] + ans['from'])
end_.append(offset[ans['para']] + ans['end'])
passage = ' '.join(d['paras'])
assert passage[from_[0]: end_[0]] == qs[1][0]['answer'], passage[from_[0]: end_[0]] + ' # ' + qs[1][0]['answer']
tmp = {'idx': d['index'] + '#' + str(q_index), 'question': qs[0], 'context': ' '.join(d['paras']),
'targets': answers, 'from': from_, 'end': end_, 'paragraphs': paragraphs}
file.write(enc(tmp, split))
file.close()
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-12/23-30-48/output.json') as f:
predictions = json.load(f)
consistent, inconsistent = 0, 0
answers = []
for k in predictions:
if len(answers) < 3:
answers.append(predictions[k])
else:
if len(set(answers)) == 1:
consistent += 1
else:
inconsistent += 1
answers = [predictions[k]]
print(consistent, inconsistent, consistent / (consistent + inconsistent))
###Output
1278 632 0.669109947643979
###Markdown
Data Analysis
###Code
import json
data = []
with open('dataset/dev.hard.json', 'r') as f:
for line in f:
data.append(json.loads(line))
answerable, na = 0, 0
for d in data:
if d['targets'] == ['']:
na += 1
else:
answerable += 1
print(na, answerable)
import json
from utils import get_raw_scores
data = []
with open('dataset/dev.easy.json', 'r') as f:
for line in f:
data.append(json.loads(line))
print(len(data))
reference = {}
context = {}
for entry in data:
reference[entry['idx']] = entry['targets']
context[entry['idx']] = (entry['question'], entry['context'])
print('FiD-easy')
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/13-49-48/output.json') as f:
pred = json.load(f)
print(get_raw_scores(pred, reference))
from yattag import Doc
doc, tag, text = Doc(
defaults = {
'title': 'TSQA Prediction Visualization',
'contact_message': 'Prediction Visualization!'
},
).tagtext()
css = """
.collapsible {
background-color: #777;
color: white;
cursor: pointer;
padding: 18px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
}
.active, .collapsible:hover {
background-color: #555;
}
.content {
padding: 0 18px;
display: none;
overflow: hidden;
background-color: #f1f1f1;
}
"""
with doc.tag('style', type='text/css'):
doc.asis(css)
done = set()
results = {'context': '', 'question': [], 'pred': [], 'groundtruth': []}
with tag('html'):
with tag('body'):
for k in list(reference.keys())[:500]:
name = k.split('#')[0]
if name not in done:
if results['context']:
with tag('button', type='button', klass='collapsible'):
text(f'open {name}')
with tag('div', klass='content'):
with tag('p', id = 'main'):
text(results['context'])
for q, p, g in zip(results['question'], results['pred'], results['groundtruth']):
with tag('p', id = 'main'):
text(q)
with tag('p', id = 'main'):
text(p)
with tag('p', id = 'main'):
text(g)
results = {'context': '', 'question': [], 'pred': [], 'groundtruth': []}
results['context'] = context[k][1]
done.add(name)
if pred[k] not in reference[k]:
results['question'].append(context[k][0])
results['pred'].append(f'Pred: {pred[k]}')
ref = ' ; '.join(reference[k])
results['groundtruth'].append(f'Reference: {ref}')
script = """
var coll = document.getElementsByClassName("collapsible");
var i;
for (i = 0; i < coll.length; i++) {
coll[i].addEventListener("click", function() {
this.classList.toggle("active");
var content = this.nextElementSibling;
if (content.style.display === "block") {
content.style.display = "none";
} else {
content.style.display = "block";
}
});
}
"""
with tag('script'):
doc.asis(script)
result = doc.getvalue()
#print(result)
with open('main.html', 'w') as f:
f.write(result)
#print(get_raw_scores(pred, reference))
def print_only_one(pred):
from collections import defaultdict
outputs = defaultdict(set)
for k in pred:
name = re.sub(r'#[0-9]$', '', k)
outputs[name].add(pred[k])
average = []
only_one = 0
for k in outputs:
if len(outputs[k]) == 1:
only_one += 1
average.append(len(outputs[k]))
print(only_one, len(outputs), sum(average) / len(average))
# Untrained NQ prediction on TimeQA-Hard
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-12/16-04-34/output.json') as f:
pred = json.load(f)
print('NQ hard')
print_only_one(pred)
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/15-20-21/output.json') as f:
pred = json.load(f)
print('FiD hard')
print_only_one(pred)
import json
from utils import get_raw_scores
import re
data = []
with open('dataset/dev.easy.json', 'r') as f:
for line in f:
data.append(json.loads(line))
reference = {}
context = {}
for entry in data:
reference[entry['idx']] = entry['targets']
context[entry['idx']] = (entry['question'], entry['context'])
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/13-49-48/output.json') as f:
pred = json.load(f)
mentioned = []
partial_mentioned = []
not_mentioned = []
for k in reference:
question, document = context[k]
time = extract_year(question)
#if isinstance(time, str):
# if time in document:
# mentioned.append(get_raw_scores({k: pred[k]}, {k: reference[k]})['exact'])
# else:
# not_mentioned.append(get_raw_scores({k: pred[k]}, {k: reference[k]})['exact'])
#else:
#assert len(time) == 2
if all([t in document for t in time]):
mentioned.append(get_raw_scores({k: pred[k]}, {k: reference[k]})['exact'])
#elif time[0] in document or time[1] in document:
# partial_mentioned.append(get_raw_scores({k: pred[k]}, {k: reference[k]})['exact'])
else:
not_mentioned.append(get_raw_scores({k: pred[k]}, {k: reference[k]})['exact'])
print(sum(mentioned) / len(mentioned), len(mentioned))
#print(sum(partial_mentioned) / len(partial_mentioned), len(partial_mentioned))
print(sum(not_mentioned) / len(not_mentioned), len(not_mentioned))
import re
def extract_year(string):
if re.search(r'from ([0-9]+|\S+ [0-9]+) to ([0-9]+|\S+ [0-9]+)', string):
match = re.search(r'from ([0-9]+|\S+ [0-9]+) to ([0-9]+|\S+ [0-9]+)', string)
outputs = []
outputs.extend(match.group(1).split(' '))
outputs.extend(match.group(2).split(' '))
return outputs
elif re.search(r'in ([0-9]+|\S+ [0-9]+)', string):
match = re.search(r'in ([0-9]+|\S+ [0-9]+)', string).group(1)
return match.split(' ')
else:
raise ValueError(string)
return None
import json
from utils import get_raw_scores
data = []
with open('dataset/dev.hard.json', 'r') as f:
for line in f:
data.append(json.loads(line))
print(len(data))
answerable = {}
unanswerable = {}
for entry in data:
if entry['targets'] == ['']:
unanswerable[entry['idx']] = entry['targets']
else:
answerable[entry['idx']] = entry['targets']
print('BigBird-hard')
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/12-35-13/output.json') as f:
pred = json.load(f)
print('-answerable')
print(get_raw_scores(pred, answerable))
print('-unanswerable')
print(get_raw_scores(pred, unanswerable))
print()
print('FiD-hard')
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/15-20-21/output.json') as f:
pred = json.load(f)
print('-answerable')
print(get_raw_scores(pred, answerable))
print('-unanswerable')
print(get_raw_scores(pred, unanswerable))
print()
data = []
with open('dataset/dev.easy.json', 'r') as f:
for line in f:
data.append(json.loads(line))
print(len(data))
answerable = {}
unanswerable = {}
for entry in data:
if entry['targets'] == ['']:
unanswerable[entry['idx']] = entry['targets']
else:
answerable[entry['idx']] = entry['targets']
print('BigBird-Easy')
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/13-28-41/output.json') as f:
pred = json.load(f)
print('answerable')
print(get_raw_scores(pred, answerable))
print('unanswerable')
print(get_raw_scores(pred, unanswerable))
print()
print('FiD-Easy')
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/13-49-48/output.json') as f:
pred = json.load(f)
print('answerable')
print(get_raw_scores(pred, answerable))
print('unanswerable')
print(get_raw_scores(pred, unanswerable))
import datasets
import json
from utils import get_raw_scores
data = []
with open('dataset/dev.hard.json', 'r') as f:
for line in f:
data.append(json.loads(line))
print(len(data))
reference = {}
context = {}
for entry in data:
reference[entry['idx']] = entry['targets']
context[entry['idx']] = entry['context']
def print_graph(name, pred, reference, context):
from collections import defaultdict
score = defaultdict(list)
width = 400
for k in pred:
tok_length = len(context[k].split(' '))
em = get_raw_scores({k: pred[k]}, {k: reference[k]})['exact']
assert isinstance(em, float)
score[tok_length // width].append(em)
for k in score:
score[k] = sum(score[k]) / len(score[k])
ranked_score = sorted(score.items(), key=lambda x:x[0])
print(ranked_score)
import matplotlib.pyplot as plt
#print(get_raw_scores(pred, reference))
x = [str(_[0] * width) for _ in ranked_score]
y = [_[1] for _ in ranked_score]
if 'BigBird' in name:
plt.bar(x, y, color='lemonchiffon', edgecolor = "black")
else:
plt.bar(x, y, color='lightgreen', edgecolor = "black")
plt.xlabel('Input Token length', fontsize=12)
plt.ylabel('EM')
plt.xticks(rotation=90)
plt.title(f'{name} EM score vs. Document Length', fontsize=15)
plt.tight_layout()
plt.savefig(name + '.jpg', dpi=200)
plt.show()
plt.clf()
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/12-35-13/output.json') as f:
pred = json.load(f)
print_graph('BigBird-hard', pred, reference, context)
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/15-20-21/output.json') as f:
pred = json.load(f)
print_graph('FiD-hard', pred, reference, context)
import datasets
import json
from utils import get_raw_scores
import matplotlib.pyplot as plt
data = []
with open('dataset/dev.hard.json', 'r') as f:
for line in f:
data.append(json.loads(line))
print(len(data))
reference = {}
context = {}
for entry in data:
reference[entry['idx']] = entry['targets']
context[entry['idx']] = entry['idx'].split('#')[1]
def print_graph(name, pred, reference, context):
from collections import defaultdict
score = defaultdict(list)
for k in pred:
em = get_raw_scores({k: pred[k]}, {k: reference[k]})['exact']
assert isinstance(em, float)
score[context[k]].append(em)
new_score = {}
for k in score:
if len(score[k]) > 20:
new_score[k] = sum(score[k]) / len(score[k])
print(new_score)
x = new_score.keys()
y = new_score.values()
if 'BigBird' in name:
plt.bar(x, y, color='lemonchiffon', edgecolor = "black")
else:
plt.bar(x, y, color='lightgreen', edgecolor = "black")
plt.xlabel('Relations (high frequency -> low frequency)', fontsize=12)
plt.ylabel('EM')
plt.xticks(rotation=90)
plt.title(f'{name} EM score vs. Relations', fontsize=15)
plt.tight_layout()
plt.savefig(name + '.relation.jpg', dpi=200)
plt.show()
plt.clf()
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/12-35-13/output.json') as f:
pred = json.load(f)
print_graph('BigBird-hard', pred, reference, context)
with open('/data2/wenhu/Time-Sensitive-QA/outputs/2021-08-08/15-20-21/output.json') as f:
pred = json.load(f)
print_graph('FiD-hard', pred, reference, context)
###Output
3087
{'P39': 33.65231259968102, 'P54': 51.17370892018779, 'P108': 38.40304182509506, 'P6': 45.833333333333336, 'P69': 39.39393939393939, 'P26': 37.77777777777778, 'P937': 29.78723404255319, 'P488': 54.205607476635514, 'P551': 22.641509433962263, 'P1435': 40.74074074074074, 'P102': 36.36363636363637, 'P371': 28.571428571428573, 'P463': 46.3768115942029, 'P276': 36.666666666666664, 'P6087': 57.4468085106383, 'P17': 37.5, 'P127': 32.35294117647059, 'P137': 36.666666666666664, 'P4791': 24.0, 'P2962': 61.76470588235294, 'P1448': 37.80487804878049, 'P1037': 57.89473684210526}
###Markdown
Human Annotation
###Code
import json
with open('dataset/dev.hard.json') as f:
for line in f:
entry = json.loads(line)
print(entry['question'])
print(entry['idx'])
print(entry['targets'])
print()
annotated = []
with open('dataset/annotated_train.json') as f:
annotated.extend(json.load(f))
with open('dataset/annotated_dev.json') as f:
annotated.extend(json.load(f))
with open('dataset/annotated_test.json') as f:
annotated.extend(json.load(f))
print(len(annotated))
###Output
5061
|
topicModeling/shipper_cleaning.ipynb | ###Markdown
Entity Matching
###Code
shipper_matching = pd.read_csv('../shipper_matching/Enigma_Enigma_6countries.csv')
shipper_matching.shape
shipper_matching[((shipper_matching['name_score']>0.9) & (shipper_matching['address_score']>0.6)) | (shipper_matching['address_score']>0.9)].shape
shipper_matching[((shipper_matching['name_score']>0.9) & (shipper_matching['address_score']>0.6))].shape
shipper_matching = shipper_matching[(shipper_matching['name_score']>0.9) | (shipper_matching['address_score']>0.6)]
def shipper_frozenset(col1,col2):
return frozenset([col1,col2])
def shipper_set(col1,col2):
return {col1,col2}
shipper_frozenset_vectorize = np.vectorize(shipper_frozenset)
shipper_matching['cl_shipper_frozenset'] = shipper_frozenset_vectorize(shipper_matching['cl_shipper_party_name_left'],shipper_matching['cl_shipper_party_name_right'])
# eliminate left/right mirror duplicates
shipper_matching = shipper_matching.drop_duplicates(subset='cl_shipper_frozenset')
shipper_matching['cl_shipper_frozenset'].shape
###Output
_____no_output_____
###Markdown
shipper_matching['cl_shipper_left_index'] = shipper_matching['cl_shipper_party_name_left'].astype('category').cat.codes
shipper_matching['cl_shipper_right_index'] = shipper_matching['cl_shipper_party_name_right'].astype('category').cat.codes
###Code
le = preprocessing.LabelEncoder()
le.fit(shipper_matching['cl_shipper_party_name_left'])
shipper_matching['cl_shipper_left_index'] = le.transform(shipper_matching['cl_shipper_party_name_left'].values)
shipper_matching['cl_shipper_right_index'] = le.transform(shipper_matching['cl_shipper_party_name_right'].values)
#inverse_transform = le.inverse_transform(shipper_matching['cl_shipper_party_name_right'].values)
shipper_matching.head()
if not {'a','b'}.isdisjoint({'b','d'}):
a = {'a','b'}.union({'b','d'})
else:
a = None
a
str(frozenset(a))[10:-1]
def create_set_master(master_left,master_right):
if not master_left.isdisjoint(master_right):
new_master = master_left.union(master_right)
else:
new_master = master_left
return new_master
def multiprocess_set(df,step):
    create_set_master_vectorize = np.vectorize(create_set_master)
dfL_join_dfR = pd.DataFrame(np.roll(df,step,axis=0),columns=df.columns).join(df['cl_shipper_set'],lsuffix='_left',rsuffix='_right')
    dfL_join_dfR['master_set'] = create_set_master_vectorize(dfL_join_dfR['cl_shipper_set_left'].values,dfL_join_dfR['cl_shipper_set_right'].values)
dfL_join_dfR = dfL_join_dfR[dfL_join_dfR['name_score']>=0.80]
#result = result.loc[(result['score'] < 0.80) & (result['score']>=0.65)]
#print('shift step {} done'.format(step))
return dfL_join_dfR
shipper_set_vectorize = np.vectorize(shipper_set)
create_set_master_vectorize = np.vectorize(create_set_master)
shipper_matching['cl_shipper_set'] = shipper_set_vectorize(shipper_matching['cl_shipper_party_name_left'].values,shipper_matching['cl_shipper_party_name_right'].values)
shipper_matching['cl_shipper_set_master'] = shipper_matching['cl_shipper_set'].copy()
shipper_matching_copy = shipper_matching.copy()
shift_steps = [i for i in range(len(shipper_matching))]
for step in shift_steps:
#shipper_matching = pd.DataFrame(np.roll(shipper_matching,step,axis=0),columns=shipper_matching.columns).join(shipper_matching_copy['cl_shipper_set'],rsuffix='_right')
shipper_matching['cl_shipper_set_right'] = np.roll(shipper_matching_copy['cl_shipper_set'],step)
shipper_matching['cl_shipper_set_master'] = create_set_master_vectorize(shipper_matching['cl_shipper_set_master'].values,shipper_matching['cl_shipper_set_right'].values)
#print('step {} done'.format(step))
shipper_matching['cl_shipper_set_master'].iloc[3]
###Output
_____no_output_____
###Markdown
Logistics Company
###Code
data = read_data()
data = process_data(data)
data.columns
data.shape
data['shipper_party_name'].count()
data['harmonized_number'].count()
data['shipper_party_name'].str.contains(pat='logistic|logistics|forward|dhl|transport|shipping',case=False).sum()
###Output
_____no_output_____ |
cs231n/assignment2/BatchNormalization.ipynb | ###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by ReducingInternal Covariate Shift", ICML 2015.
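Concretely, for a minibatch $\{x_1,\dots,x_N\}$ the layer computes, independently for each feature dimension (a standard formulation, with a small constant $\epsilon$ for numerical stability): $\mu_B = \frac{1}{N}\sum_i x_i$, $\sigma_B^2 = \frac{1}{N}\sum_i (x_i-\mu_B)^2$, $\hat{x}_i = \frac{x_i-\mu_B}{\sqrt{\sigma_B^2+\epsilon}}$, and finally $y_i = \gamma\,\hat{x}_i + \beta$, where $\gamma$ and $\beta$ are the learnable scale and shift parameters.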
###Code
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
###Markdown
Batch normalization: ForwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
After batch normalization (test-time):
means: [-0.03927353 -0.04349151 -0.10452686]
stds: [ 1.01531399 1.01238345 0.97819961]
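###Markdown
For reference, a minimal sketch of how such a forward pass can be organized (illustrative only, not necessarily identical to what you write in `cs231n/layers.py`; the function name, the `eps`/`momentum` defaults, and the cache layout are assumptions made here):
###Code
def batchnorm_forward_sketch(x, gamma, beta, bn_param):
    # Illustrative only: normalize each feature over the minibatch at train
    # time, and with the running statistics at test time.
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)
    D = x.shape[1]
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))
    if mode == 'train':
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        out = gamma * x_hat + beta
        cache = (x, x_hat, mu, var, eps, gamma)
        # keep running averages so test time needs no minibatch statistics
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    elif mode == 'test':
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
        out = gamma * x_hat + beta
        cache = None
    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var
    return out, cache
###Output
_____no_output_____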
###Markdown
Batch Normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
dx error: 1.70292696859e-09
dgamma error: 7.42041421625e-13
dbeta error: 2.87950576558e-12
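###Markdown
A possible staged (computation-graph style) backward pass, written against the cache layout assumed in the forward sketch above rather than against whatever you cache in your own implementation:
###Code
def batchnorm_backward_sketch(dout, cache):
    # Backprop through the scale/shift, then through the normalization graph,
    # summing gradients over the branches that flow through mu and var.
    x, x_hat, mu, var, eps, gamma = cache
    N = x.shape[0]
    std = np.sqrt(var + eps)
    dbeta = dout.sum(axis=0)              # out = gamma * x_hat + beta
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    dvar = np.sum(dx_hat * (x - mu), axis=0) * -0.5 * (var + eps) ** -1.5
    dmu = np.sum(-dx_hat / std, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta
###Output
_____no_output_____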
###Markdown
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
dx difference: 0.0
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 1.22x
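###Markdown
For comparison, the staged gradients above can be collapsed on paper into a single expression for `dx`. One common simplified form (again only a sketch, using the same assumed cache layout) is:
###Code
def batchnorm_backward_alt_sketch(dout, cache):
    # dx = (1 / (N * std)) * (N * dx_hat - sum(dx_hat) - x_hat * sum(dx_hat * x_hat))
    x, x_hat, mu, var, eps, gamma = cache
    std = np.sqrt(var + eps)
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    dx = (dx_hat - dx_hat.mean(axis=0) - x_hat * (dx_hat * x_hat).mean(axis=0)) / std
    return dx, dgamma, dbeta
###Output
_____no_output_____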
###Markdown
Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization. Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if name.startswith('beta') or name.startswith('gamma'):
# print(grad_num, grads[name].sum())
pass
if reg == 0: print()
###Output
Running check with reg = 0
Initial loss: 2.33098020386
W1 relative error: 1.12e-05
W2 relative error: 3.91e-09
W3 relative error: 2.83e-07
b1 relative error: 4.67e-06
b2 relative error: 1.76e-09
b3 relative error: 1.40e-10
beta1 relative error: 2.35e-07
beta2 relative error: 4.91e-08
gamma1 relative error: 2.26e-07
gamma2 relative error: 7.91e-09
Running check with reg = 3.14
Initial loss: 7.05999987021
W1 relative error: 3.30e-07
W2 relative error: 4.18e-08
W3 relative error: 5.48e-08
b1 relative error: 1.28e-04
b2 relative error: 1.66e-08
b3 relative error: 2.09e-10
beta1 relative error: 8.95e-07
beta2 relative error: 1.41e-08
gamma1 relative error: 3.28e-07
gamma2 relative error: 7.71e-07
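###Markdown
One way to follow the hint above is an `affine -> batchnorm -> relu` convenience layer. The sketch below is illustrative (the function names are not part of the assignment API) and assumes the `affine_*` and `relu_*` layer functions from `cs231n/layers.py` together with your `batchnorm_forward`/`batchnorm_backward`:
###Code
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    # Convenience layer: affine transform, then batch normalization, then ReLU.
    a, fc_cache = affine_forward(x, w, b)
    a_bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(a_bn)
    return out, (fc_cache, bn_cache, relu_cache)
def affine_bn_relu_backward(dout, cache):
    # Backprop in reverse order: ReLU, then batchnorm, then the affine layer.
    fc_cache, bn_cache, relu_cache = cache
    da_bn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(da_bn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta
###Output
_____no_output_____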
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
###Output
(Iteration 1 / 200) loss: 2.307831
(Epoch 0 / 10) train acc: 0.101000; val_acc: 0.105000
(Epoch 1 / 10) train acc: 0.293000; val_acc: 0.222000
(Epoch 2 / 10) train acc: 0.351000; val_acc: 0.222000
(Epoch 3 / 10) train acc: 0.414000; val_acc: 0.287000
(Epoch 4 / 10) train acc: 0.485000; val_acc: 0.304000
(Epoch 5 / 10) train acc: 0.557000; val_acc: 0.309000
(Epoch 6 / 10) train acc: 0.572000; val_acc: 0.325000
(Epoch 7 / 10) train acc: 0.622000; val_acc: 0.331000
(Epoch 8 / 10) train acc: 0.661000; val_acc: 0.313000
(Epoch 9 / 10) train acc: 0.736000; val_acc: 0.321000
(Epoch 10 / 10) train acc: 0.739000; val_acc: 0.307000
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.123000; val_acc: 0.130000
(Epoch 1 / 10) train acc: 0.260000; val_acc: 0.215000
(Epoch 2 / 10) train acc: 0.319000; val_acc: 0.273000
(Epoch 3 / 10) train acc: 0.336000; val_acc: 0.280000
(Epoch 4 / 10) train acc: 0.386000; val_acc: 0.299000
(Epoch 5 / 10) train acc: 0.455000; val_acc: 0.325000
(Epoch 6 / 10) train acc: 0.492000; val_acc: 0.320000
(Epoch 7 / 10) train acc: 0.497000; val_acc: 0.294000
(Epoch 8 / 10) train acc: 0.567000; val_acc: 0.307000
(Epoch 9 / 10) train acc: 0.611000; val_acc: 0.326000
(Epoch 10 / 10) train acc: 0.663000; val_acc: 0.333000
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by ReducingInternal Covariate Shift", ICML 2015.
###Code
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
_____no_output_____
###Markdown
Batch normalization: ForwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
_____no_output_____
###Markdown
Batch Normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
_____no_output_____
###Markdown
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
_____no_output_____
###Markdown
Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization. Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
###Output
_____no_output_____
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
###Output
_____no_output_____
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by ReducingInternal Covariate Shift", ICML 2015.
###Code
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
###Markdown
Batch normalization: ForwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
After batch normalization (test-time):
means: [-0.03927354 -0.04349152 -0.10452688]
stds: [ 1.01531428 1.01238373 0.97819988]
###Markdown
Batch Normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
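One way to organize this is to cache every intermediate of the forward computation (the centered inputs, the variance, the inverse standard deviation, the normalized values) and backprop through them node by node, summing gradients wherever a value feeds more than one branch. The staged sketch below shows that structure; the cache layout and variable names are assumptions, and your own `batchnorm_backward` only needs to match whatever your forward pass stored.

```python
import numpy as np

def bn_backward_graph_sketch(dout, cache):
    """Staged backward pass; cache = (x_hat, x_centered, inv_std, gamma)."""
    x_hat, x_centered, inv_std, gamma = cache
    N = dout.shape[0]

    dbeta = dout.sum(axis=0)                    # out = gamma * x_hat + beta
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma

    # x_hat = x_centered * inv_std: two branches meet again at x_centered below
    dinv_std = (dx_hat * x_centered).sum(axis=0)
    dx_centered = dx_hat * inv_std

    # inv_std = (var + eps) ** -0.5
    dvar = -0.5 * dinv_std * inv_std ** 3
    # var = mean(x_centered ** 2): spread dvar back over the batch
    dx_centered += (2.0 / N) * x_centered * dvar

    # x_centered = x - mu with mu = mean(x): sum the branch that goes through mu
    dmu = -dx_centered.sum(axis=0)
    dx = dx_centered + dmu / N
    return dx, dgamma, dbeta
```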
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print(dx_num)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
[[-0.00310319 0.00305468 -0.00156246 0.17251307 0.01388029]
[ 0.01147762 -0.10800884 -0.01112564 -0.02021632 -0.02098085]
[-0.01682492 -0.01106847 -0.00384286 0.13581055 -0.04108612]
[ 0.00845049 0.11602263 0.01653096 -0.2881073 0.04818669]]
dx error: 1.70292810437e-09
dgamma error: 7.41722504069e-13
dbeta error: 2.87950576558e-12
###Markdown
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
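For reference, the kind of closed form such a paper derivation typically arrives at is shown below, written in terms of the batch size $N$, the normalized values $\hat{x}_i$ and the upstream gradients $\partial L/\partial y_i$ (with $y_i = \gamma\hat{x}_i + \beta$ for one feature dimension). Treat it as a target to check your own simplification against; the exact grouping of terms may differ.

```latex
% One simplified form of the batch-norm input gradient for a single feature column:
\frac{\partial L}{\partial x_i}
  = \frac{\gamma}{N\sqrt{\sigma^2 + \epsilon}}
    \left( N\,\frac{\partial L}{\partial y_i}
           - \sum_{j=1}^{N} \frac{\partial L}{\partial y_j}
           - \hat{x}_i \sum_{j=1}^{N} \frac{\partial L}{\partial y_j}\,\hat{x}_j \right)
```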
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
dx difference: 0.0
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 2.51x
###Markdown
Fully Connected Nets with Batch NormalizationNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
###Output
Running check with reg = 0
Initial loss: 2.26119551013
W1 relative error: 1.10e-04
W2 relative error: 2.85e-06
W3 relative error: 3.92e-10
b1 relative error: 1.11e-08
b2 relative error: 2.22e-08
b3 relative error: 1.01e-10
beta1 relative error: 7.85e-09
beta2 relative error: 1.07e-09
beta3 relative error: 0.00e+00
gamma1 relative error: 6.44e-09
gamma2 relative error: 1.56e-09
gamma3 relative error: 0.00e+00
Running check with reg = 3.14
Initial loss: 6.99653322011
W1 relative error: 1.98e-06
W2 relative error: 2.28e-06
W3 relative error: 1.11e-08
b1 relative error: 1.67e-08
b2 relative error: 2.22e-08
b3 relative error: 2.23e-10
beta1 relative error: 6.65e-09
beta2 relative error: 5.69e-09
beta3 relative error: 0.00e+00
gamma1 relative error: 5.94e-09
gamma2 relative error: 4.14e-09
gamma3 relative error: 0.00e+00
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
###Output
(Iteration 1 / 200) loss: 2.340974
(Epoch 0 / 10) train acc: 0.101000; val_acc: 0.115000
(Epoch 1 / 10) train acc: 0.246000; val_acc: 0.216000
(Epoch 2 / 10) train acc: 0.342000; val_acc: 0.290000
(Epoch 3 / 10) train acc: 0.407000; val_acc: 0.298000
(Epoch 4 / 10) train acc: 0.435000; val_acc: 0.301000
(Epoch 5 / 10) train acc: 0.465000; val_acc: 0.298000
(Epoch 6 / 10) train acc: 0.533000; val_acc: 0.319000
(Epoch 7 / 10) train acc: 0.597000; val_acc: 0.314000
(Epoch 8 / 10) train acc: 0.636000; val_acc: 0.319000
(Epoch 9 / 10) train acc: 0.699000; val_acc: 0.339000
(Epoch 10 / 10) train acc: 0.682000; val_acc: 0.328000
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.155000; val_acc: 0.140000
(Epoch 1 / 10) train acc: 0.182000; val_acc: 0.160000
(Epoch 2 / 10) train acc: 0.213000; val_acc: 0.179000
(Epoch 3 / 10) train acc: 0.227000; val_acc: 0.241000
(Epoch 4 / 10) train acc: 0.260000; val_acc: 0.223000
(Epoch 5 / 10) train acc: 0.301000; val_acc: 0.248000
(Epoch 6 / 10) train acc: 0.333000; val_acc: 0.260000
(Epoch 7 / 10) train acc: 0.347000; val_acc: 0.255000
(Epoch 8 / 10) train acc: 0.393000; val_acc: 0.257000
(Epoch 9 / 10) train acc: 0.403000; val_acc: 0.269000
(Epoch 10 / 10) train acc: 0.435000; val_acc: 0.295000
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
/Users/rolight/anaconda3/envs/ML/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:106: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
warnings.warn(message, mplDeprecation, stacklevel=1)
###Markdown
Batch normalization and initializationWe will now run a small experiment to study the interaction of batch normalization and weight initialization.The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was proposed by [1] in 2015.The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by ReducingInternal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
###Code
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def print_mean_std(x,axis=0):
print(' means: ', x.mean(axis=axis))
print(' stds: ', x.std(axis=axis))
print()
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
###Markdown
Batch normalization: forwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.Referencing the paper linked to above in [1] may be helpful!
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print_mean_std(a,axis=0)
gamma = np.ones((D3,))
beta = np.zeros((D3,))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
# Now means should be close to beta and stds close to gamma
print('After batch normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print_mean_std(a_norm,axis=0)
###Output
After batch normalization (test-time):
means: [-0.03927354 -0.04349152 -0.10452688]
stds: [1.01531428 1.01238373 0.97819988]
###Markdown
Batch normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
#You should expect to see relative errors between 1e-13 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
dx error: 1.7029281043741803e-09
dgamma error: 7.420414216247087e-13
dbeta error: 2.8795057655839487e-12
###Markdown
Batch normalization: alternative backwardIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! In the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$, we first calculate the mean $\mu$ and variance $v$.With $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$.The equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\begin{align}& \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\& \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma}\end{align} The meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$.The unknown/hard part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving step-by-step our local gradients at $\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$,$\frac{\partial \sigma}{\partial v}$, $\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$,and then using the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$.If it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$. You should make sure each of the intermediary gradient derivations is as simplified as possible, for ease of implementation. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
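Once the per-element derivation has been simplified, the whole backward pass collapses into a few vectorized NumPy lines. The sketch below is one possible shape of the result, assuming the forward pass cached the normalized values, the standard deviation $\sigma=\sqrt{v+\epsilon}$ and $\gamma$; it is a reference to compare your own `batchnorm_backward_alt` against, not a required implementation.

```python
import numpy as np

def bn_backward_alt_sketch(dout, cache):
    """Vectorized backward pass from the simplified formula.

    cache = (y_hat, sigma, gamma), with y_hat = (x - mu) / sigma and
    sigma = sqrt(var + eps) saved during the forward pass.
    """
    y_hat, sigma, gamma = cache
    N = dout.shape[0]

    dbeta = dout.sum(axis=0)
    dgamma = (dout * y_hat).sum(axis=0)
    # dL/dx = gamma/(N*sigma) * (N*dout - sum(dout) - y_hat * sum(dout * y_hat))
    dx = (gamma / (N * sigma)) * (N * dout
                                  - dout.sum(axis=0)
                                  - y_hat * (dout * y_hat).sum(axis=0))
    return dx, dgamma, dbeta
```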
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
dx difference: 1.050338323703225e-10
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 1.01x
###Markdown
Fully Connected Nets with Batch NormalizationNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.Concretely, when the `normalization` flag is set to `"batchnorm"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
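Following the hint, one convenient way to wire batch normalization into `FullyConnectedNet` is a small sandwich helper that composes layers you already have. The sketch below assumes the `affine_forward`/`relu_forward` style interfaces used elsewhere in the assignment together with your new `batchnorm_forward`/`batchnorm_backward`; the helper names themselves are made up and can live in `fc_net.py` as suggested.

```python
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    """affine -> batch norm -> ReLU, returning a combined cache."""
    a, fc_cache = affine_forward(x, w, b)
    a_bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(a_bn)
    return out, (fc_cache, bn_cache, relu_cache)


def affine_bn_relu_backward(dout, cache):
    """Backward pass for the affine -> batch norm -> ReLU sandwich."""
    fc_cache, bn_cache, relu_cache = cache
    da_bn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(da_bn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta
```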
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
# You should expect losses between 1e-4~1e-10 for W,
# losses between 1e-08~1e-10 for b,
# and losses between 1e-08~1e-09 for beta and gammas.
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
normalization='batchnorm')
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
###Output
Running check with reg = 0
Initial loss: 2.2611955101340957
W1 relative error: 1.10e-04
W2 relative error: 2.85e-06
W3 relative error: 3.92e-10
b1 relative error: 2.22e-08
b2 relative error: 2.22e-08
b3 relative error: 4.78e-11
beta1 relative error: 7.33e-09
beta2 relative error: 1.89e-09
gamma1 relative error: 7.57e-09
gamma2 relative error: 1.96e-09
Running check with reg = 3.14
Initial loss: 6.996533220108303
W1 relative error: 1.98e-06
W2 relative error: 2.28e-06
W3 relative error: 1.11e-08
b1 relative error: 2.78e-09
b2 relative error: 2.22e-08
b3 relative error: 2.23e-10
beta1 relative error: 6.65e-09
beta2 relative error: 3.48e-09
gamma1 relative error: 5.94e-09
gamma2 relative error: 4.14e-09
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
print('Solver with batch norm:')
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True,print_every=20)
bn_solver.train()
print('\nSolver without batch norm:')
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
###Output
Solver with batch norm:
(Iteration 1 / 200) loss: 2.340974
(Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000
(Epoch 1 / 10) train acc: 0.314000; val_acc: 0.266000
(Iteration 21 / 200) loss: 2.039365
(Epoch 2 / 10) train acc: 0.390000; val_acc: 0.278000
(Iteration 41 / 200) loss: 2.036710
(Epoch 3 / 10) train acc: 0.497000; val_acc: 0.315000
(Iteration 61 / 200) loss: 1.769536
(Epoch 4 / 10) train acc: 0.528000; val_acc: 0.319000
(Iteration 81 / 200) loss: 1.265761
(Epoch 5 / 10) train acc: 0.589000; val_acc: 0.317000
(Iteration 101 / 200) loss: 1.256780
(Epoch 6 / 10) train acc: 0.637000; val_acc: 0.320000
(Iteration 121 / 200) loss: 1.115820
(Epoch 7 / 10) train acc: 0.688000; val_acc: 0.321000
(Iteration 141 / 200) loss: 1.146838
(Epoch 8 / 10) train acc: 0.711000; val_acc: 0.311000
(Iteration 161 / 200) loss: 0.829313
(Epoch 9 / 10) train acc: 0.776000; val_acc: 0.314000
(Iteration 181 / 200) loss: 0.916865
(Epoch 10 / 10) train acc: 0.789000; val_acc: 0.322000
Solver without batch norm:
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000
(Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000
(Iteration 21 / 200) loss: 2.041970
(Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000
(Iteration 41 / 200) loss: 1.900473
(Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000
(Iteration 61 / 200) loss: 1.713156
(Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000
(Iteration 81 / 200) loss: 1.662209
(Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000
(Iteration 101 / 200) loss: 1.696062
(Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000
(Iteration 121 / 200) loss: 1.550785
(Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000
(Iteration 141 / 200) loss: 1.436308
(Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000
(Iteration 161 / 200) loss: 1.000868
(Epoch 9 / 10) train acc: 0.654000; val_acc: 0.328000
(Iteration 181 / 200) loss: 0.925457
(Epoch 10 / 10) train acc: 0.728000; val_acc: 0.335000
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
def plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):
"""utility function for plotting training history"""
plt.title(title)
plt.xlabel(label)
bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]
bl_plot = plot_fn(baseline)
num_bn = len(bn_plots)
for i in range(num_bn):
label='with_norm'
if labels is not None:
label += str(labels[i])
plt.plot(bn_plots[i], bn_marker, label=label)
label='baseline'
if labels is not None:
label += str(labels[0])
plt.plot(bl_plot, bl_marker, label=label)
plt.legend(loc='lower center', ncol=num_bn+1)
plt.subplot(3, 1, 1)
plot_training_history('Training loss','Iteration', solver, [bn_solver], \
lambda x: x.loss_history, bl_marker='o', bn_marker='o')
plt.subplot(3, 1, 2)
plot_training_history('Training accuracy','Epoch', solver, [bn_solver], \
lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')
plt.subplot(3, 1, 3)
plot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \
lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch normalization and initializationWe will now run a small experiment to study the interaction of batch normalization and weight initialization.The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers_ws = {}
solvers_ws = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers_ws[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers_ws[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers_ws[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))
best_val_accs.append(max(solvers_ws[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))
final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Inline Question 1:Describe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why? Answer:*The model without batch normalization is far more sensitive to the weight scale: when the initial weights are too small the activations and gradients shrink layer by layer, and when they are too large they blow up, so the baseline only trains well in a narrow band of scales. With batch normalization each layer's inputs are re-normalized to zero mean and unit variance regardless of the weight magnitude, so accuracy and loss vary much more smoothly across the whole range of initialization scales.* Batch normalization and batch sizeWe will now run a small experiment to study the interaction of batch normalization and batch size.The first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.
###Code
def run_batchsize_experiments(normalization_mode):
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
n_epochs=10
weight_scale = 2e-2
batch_sizes = [5,10,50]
lr = 10**(-3.5)
solver_bsize = batch_sizes[0]
print('No normalization: batch size = ',solver_bsize)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
solver = Solver(model, small_data,
num_epochs=n_epochs, batch_size=solver_bsize,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
solver.train()
bn_solvers = []
for i in range(len(batch_sizes)):
b_size=batch_sizes[i]
print('Normalization: batch size = ',b_size)
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)
bn_solver = Solver(bn_model, small_data,
num_epochs=n_epochs, batch_size=b_size,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
bn_solver.train()
bn_solvers.append(bn_solver)
return bn_solvers, solver, batch_sizes
batch_sizes = [5,10,50]
bn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
###Output
_____no_output_____
###Markdown
Inline Question 2:Describe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed? Answer:*Batch normalization helps noticeably with a reasonably large batch size, but its advantage shrinks (and can even hurt) as the batch size gets very small. The reason is that the layer's mean and variance are estimated from the current minibatch: with only a handful of examples those estimates are noisy, so the normalization itself injects noise into training, while larger batches give more accurate statistics and restore the benefit.* Layer NormalizationBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. Several alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf) Inline Question 3:Which of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. 3. Subtracting the mean image of the dataset from each image in the dataset.4. Setting all RGB values to either 0 or 1 depending on a given threshold. Answer:Batch normalization: 3 (the statistics are computed across datapoints for each pixel).Layer normalization: 2 (the statistics are computed within a single datapoint, across all of its features).Options 1 and 4 act on each image (or each pixel) independently of both the batch and the full feature vector, so they are analogous to neither. Layer Normalization: ImplementationNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.Here's what you need to do:* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. Run the cell below to check your results.* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. Run the second cell below to check your results.* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `"layernorm"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. Run the third cell below to run the batch size experiment on layer normalization.
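Because layer normalization only changes the axis being normalized (per datapoint instead of per feature) and has no train/test distinction, a compact sketch of both passes looks as follows. The cached tuple and names are assumptions; your `layernorm_forward`/`layernorm_backward` just need to be consistent with each other.

```python
import numpy as np

def layernorm_forward_sketch(x, gamma, beta, eps=1e-5):
    """Normalize each row of x (one datapoint) over its D features."""
    mu = x.mean(axis=1, keepdims=True)           # (N, 1) per-datapoint mean
    var = x.var(axis=1, keepdims=True)           # (N, 1) per-datapoint variance
    sigma = np.sqrt(var + eps)
    x_hat = (x - mu) / sigma
    out = gamma * x_hat + beta                   # gamma, beta are still (D,)
    return out, (x_hat, sigma, gamma)

def layernorm_backward_sketch(dout, cache):
    """Same algebra as batch norm, but sums run over the feature axis."""
    x_hat, sigma, gamma = cache
    D = dout.shape[1]
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    dx = (1.0 / (D * sigma)) * (D * dx_hat
                                - dx_hat.sum(axis=1, keepdims=True)
                                - x_hat * (dx_hat * x_hat).sum(axis=1, keepdims=True))
    return dx, dgamma, dbeta
```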
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after layer normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 =4, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before layer normalization:')
print_mean_std(a,axis=1)
gamma = np.ones(D3)
beta = np.zeros(D3)
# Means should be close to zero and stds close to one
print('After layer normalization (gamma=1, beta=0)')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
gamma = np.asarray([3.0,3.0,3.0])
beta = np.asarray([5.0,5.0,5.0])
# Now means should be close to beta and stds close to gamma
print('After layer normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
ln_param = {}
fx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]
fg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]
fb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = layernorm_forward(x, gamma, beta, ln_param)
dx, dgamma, dbeta = layernorm_backward(dout, cache)
#You should expect to see relative errors between 1e-12 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
dx error: 1.433615657860454e-09
dgamma error: 4.519489546032799e-12
dbeta error: 2.276445013433725e-12
###Markdown
Layer Normalization and batch sizeWe will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!
###Code
ln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
###Output
No normalization: batch size = 5
Normalization: batch size = 5
Normalization: batch size = 10
Normalization: batch size = 50
###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by ReducingInternal Covariate Shift", ICML 2015.
###Code
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
###Markdown
Batch normalization: ForwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
After batch normalization (test-time):
means: [-0.03927354 -0.04349152 -0.10452688]
stds: [ 1.01531428 1.01238373 0.97819988]
###Markdown
Batch Normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
dx error: 1.70292583282e-09
dgamma error: 7.42041421625e-13
dbeta error: 2.87950576558e-12
###Markdown
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
dx difference: 5.1350543434e-13
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 1.19x
###Markdown
Fully Connected Nets with Batch NormalizationNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
###Output
Running check with reg = 0
Initial loss: 2.26119551013
W1 relative error: 1.10e-04
W2 relative error: 2.85e-06
W3 relative error: 3.92e-10
b1 relative error: 5.55e-09
b2 relative error: 2.22e-08
b3 relative error: 4.78e-11
beta1 relative error: 7.33e-09
beta2 relative error: 1.89e-09
gamma1 relative error: 7.57e-09
gamma2 relative error: 1.96e-09
Running check with reg = 3.14
Initial loss: 6.99653322011
W1 relative error: 1.98e-06
W2 relative error: 2.29e-06
W3 relative error: 1.11e-08
b1 relative error: 5.55e-09
b2 relative error: 5.55e-09
b3 relative error: 2.23e-10
beta1 relative error: 6.65e-09
beta2 relative error: 3.48e-09
gamma1 relative error: 5.94e-09
gamma2 relative error: 4.14e-09
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
###Output
(Iteration 1 / 200) loss: 2.340974
(Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000
(Epoch 1 / 10) train acc: 0.320000; val_acc: 0.276000
(Epoch 2 / 10) train acc: 0.403000; val_acc: 0.304000
(Epoch 3 / 10) train acc: 0.477000; val_acc: 0.309000
(Epoch 4 / 10) train acc: 0.498000; val_acc: 0.302000
(Epoch 5 / 10) train acc: 0.570000; val_acc: 0.299000
(Epoch 6 / 10) train acc: 0.626000; val_acc: 0.327000
(Epoch 7 / 10) train acc: 0.707000; val_acc: 0.352000
(Epoch 8 / 10) train acc: 0.739000; val_acc: 0.342000
(Epoch 9 / 10) train acc: 0.782000; val_acc: 0.315000
(Epoch 10 / 10) train acc: 0.824000; val_acc: 0.322000
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000
(Epoch 1 / 10) train acc: 0.245000; val_acc: 0.212000
(Epoch 2 / 10) train acc: 0.318000; val_acc: 0.270000
(Epoch 3 / 10) train acc: 0.347000; val_acc: 0.262000
(Epoch 4 / 10) train acc: 0.384000; val_acc: 0.286000
(Epoch 5 / 10) train acc: 0.420000; val_acc: 0.308000
(Epoch 6 / 10) train acc: 0.471000; val_acc: 0.325000
(Epoch 7 / 10) train acc: 0.484000; val_acc: 0.300000
(Epoch 8 / 10) train acc: 0.555000; val_acc: 0.303000
(Epoch 9 / 10) train acc: 0.595000; val_acc: 0.322000
(Epoch 10 / 10) train acc: 0.610000; val_acc: 0.315000
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch normalization and initializationWe will now run a small experiment to study the interaction of batch normalization and weight initialization.The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
###Output
_____no_output_____ |
notebooks/Avance.ipynb | ###Markdown
Reading the data files
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv('../datasets/owid-covid-data.csv', parse_dates=['date'])
df.info()
df.head()
df.columns
len(df.columns)
df.drop(['new_cases_smoothed','new_deaths_smoothed','new_cases_smoothed_per_million','new_deaths_smoothed_per_million','new_vaccinations_smoothed','new_vaccinations_smoothed_per_million','cardiovasc_death_rate','diabetes_prevalence','female_smokers','male_smokers'],axis=1,inplace=True)
df_copia = df.copy()
df_copia['location'].unique()
countries_of_interest = ['Ecuador', 'Peru', 'Colombia', 'Spain', 'Russia', 'Panama', 'China', 'United States']  # the OWID 'location' column uses English country names
df_countries = df.loc[df['location'].isin(countries_of_interest)]
df_countries
df_countries.columns
len(df_countries.columns)
df_countries.shape
df_countries.describe()
df_copia.head()
df_latin_america_vacc = df_copia.loc[df_copia['continent'] == 'South America', ['iso_code','location','total_vaccinations','people_vaccinated', 'people_fully_vaccinated', 'new_vaccinations','population']]
df_latin_america_vacc = df_latin_america_vacc.groupby(['location']).max()  # these columns are cumulative (and population is constant), so take the latest value per country instead of summing over all dates
df_latin_america_vacc.reset_index(inplace=True)
df_latin_america_vacc['restantes_vacunar'] = df_latin_america_vacc['population'] - df_latin_america_vacc['people_vaccinated']
df_latin_america_vacc
df_latin_america_vacc['restantes_vacunar'] = ((df_latin_america_vacc['restantes_vacunar']) / df_latin_america_vacc['population']) * 100
df_latin_america_vacc
plt.figure(figsize=(15,8))
ax = sns.barplot(x="location", y="restantes_vacunar", data=df_latin_america_vacc).set_title('Vacunación en LATAM')
plt.figure(figsize=(15,8))
ax = sns.barplot(x="location", y="people_vaccinated", data=df_latin_america_vacc).set_title('Vacunación en LATAM')
###Output
_____no_output_____ |
make_zarr/.ipynb_checkpoints/Make_OISST_zarr-checkpoint.ipynb | ###Markdown
create NOAA OISST zarr data files
###Code
%%time
from glob import glob
import numpy as np
import xarray as xr

adir = 'F:/data/sat_data/sst/noaa_oisst/www.ncei.noaa.gov/data/sea-surface-temperature-optimum-interpolation/v2.1/access/'
dir_pattern_zarr = adir + 'avhrr_zarr2/'
dir_pattern = adir + 'avhrr/'
pattern = dir_pattern + '/*/*.nc'
files = glob(pattern)
print('number of files:',len(files))
%%time
#open dataset, this will take a while
ds=xr.open_mfdataset(files,combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False)
ds.close()
ds_all = ds.isel(zlev=0)
ds_all
#remove any duplicates
_, index = np.unique(ds_all['time'], return_index=True)
ds_all=ds_all.isel(time=index)
#rechunck data #data in int16 = 2 bytes
ds_all = ds_all.chunk({'time':1000,'lat':300,'lon':300})
ds_all
#output data to zarr format
#ds_all.to_zarr(dir_pattern_zarr, consolidated=True)
ds_all
###Output
_____no_output_____
###Markdown
Append new data to the original data store without overwriting the entire store. There is a trick here (worth raising an issue on GitHub to document), because the documentation isn't clear about two things: 1. you are appending, so you need to make sure you only append the new portion of the data; 2. you need to decode the new data before appending. I struggled to get this to work at first because I tried just reading the data the same way and then appending it. That did not work: I got errors about removing scale_factor, add_offset and _FillValue, which are all encoding attributes. The error message suggests you simply remove these attributes, and that is true: if you remove them, the append will work, but then the appended data isn't encoded correctly and it is a mess. So, actually, you have to apply the decoding, and then everything works. First calculate where the old data ends, then read in the new data, decode it, and append.
###Code
ds_old = xr.open_zarr(dir_pattern_zarr, consolidated=True,decode_cf=False)
ds_old.close()
lasttime=ds_old.time[-1] + 1#+np.timedelta64(1, 'D')
print(lasttime.data)
ds_old
ds=xr.open_mfdataset(files[-100:],combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False)
ds.close()
ds_all = ds.isel(zlev=0)
#remove any duplicates
_, index = np.unique(ds_all['time'], return_index=True)
ds_all=ds_all.isel(time=index)
#rechunck data #data in int16 = 2 bytes
ds_all = ds_all.chunk({'time':1000,'lat':300,'lon':300})
ds_new = ds_all.sel(time=slice(lasttime,9999999))
print(ds_new.time[0:3].data)
ds_new = xr.decode_cf(ds_new)
ds_new.to_zarr(dir_pattern_zarr, mode='a',append_dim='time', consolidated=True) #
###Output
_____no_output_____ |
lec5/lec5.ipynb | ###Markdown
DDPG: Deep Deterministic Policy Gradient 1. Setup environment
###Code
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from collections import deque
from itertools import count
from PIL import Image
from copy import deepcopy
import time
import cartenv
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
env = cartenv.ContinuousCartPoleEnv()
state = env.reset()
print("initial state: ", state)
action = env.action_space.sample()
print("sample action: ", action)
print(env.action_space)
print(env.observation_space)
n_action = env.action_space.shape[0]
n_state = env.observation_space.shape[0]
print("#state: ", n_state)
print("#action: ", n_action)
print(env.observation_space.high)
print(env.observation_space.low)
print(env.action_space.high)
print(env.action_space.low)
###Output
_____no_output_____
###Markdown
2. Experience Pool
###Code
Experience = namedtuple('Experience', ('state', 'action', 'reward', 'next_state', 'terminal'))
class ReplayMemory(object):
def __init__(self, capacity):
self.memory = deque(maxlen=capacity)
def push(self, *args):
self.memory.append(Experience(*args)) ## append a new experience
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self): ## len(experience)
return len(self.memory)
experience_pool = ReplayMemory(int(1e6)) #initialize memory pool
###Output
_____no_output_____
###Markdown
3. Hyperparameters
###Code
EPOCHS = 6001
EPOCH_STEPS = 200
BATCH_SIZE = 128 #batch-train
WARM_UP_SIZE = BATCH_SIZE
GAMMA = 0.99 #reward-discount: 0.99 vs 0.999???
EXPLORE_NOISE = 0.05 #the best choice?
UPDATE_WEIGHT = 0.999 #0.99 vs 0.999???
LEARN_RATE = 1e-3
###Output
_____no_output_____
###Markdown
4. Policy-Network & Q-Network
###Code
policy_net = nn.Sequential(
nn.Linear(n_state, 100),
nn.ReLU(),
nn.Linear(100, n_action),
nn.Tanh()) #tanh
q_net = nn.Sequential(
nn.Linear(n_state + n_action, 100),
nn.ReLU(),
nn.Linear(100, 1))
target_p_net = deepcopy(policy_net)
target_q_net = deepcopy(q_net)
def enable_gradient(network):
for p in network.parameters():
p.requires_grad = True
def disable_gradient(network):
for p in network.parameters():
p.requires_grad = False
disable_gradient(target_p_net)
disable_gradient(target_q_net)
def copy_net(source_net, target_net):
with torch.no_grad():
for p, p_targ in zip(source_net.parameters(), target_net.parameters()):
p_targ.data.mul_(UPDATE_WEIGHT)
p_targ.data.add_((1 - UPDATE_WEIGHT) * p.data)
###Output
_____no_output_____
###Markdown
5. Exploration
###Code
def policy_action(state): # state is tensor
with torch.no_grad():
action = env.action_space.high[0] * policy_net(state)
return action
def explore_action(state):
with torch.no_grad():
action = env.action_space.high[0] * policy_net(state)
action = torch.normal(action, EXPLORE_NOISE)
action = torch.clamp(action, min=env.action_space.low[0], max=env.action_space.high[0])
return action
def target_action(state):
return env.action_space.high[0] * target_p_net(state)
def explore_one_step(state):
action = explore_action(state) # a
obs, r, done, _ = env.step(action.item())
reward = torch.tensor(r, dtype=torch.float) # r
next_state = torch.tensor(obs, dtype=torch.float) # s'
terminal = torch.tensor(int(done) * 1.0, dtype=torch.float) # t
# Store the transition in experience pool
experience_pool.push(state, action, reward, next_state, terminal) #(s,a,r,s',t), tensors
return done, next_state, r
###Output
_____no_output_____
###Markdown
6. Optimize
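The two update functions below implement the standard DDPG objectives: the critic is regressed onto the target $y = r + \gamma\,(1 - d)\,Q_{\text{targ}}\big(s', \mu_{\text{targ}}(s')\big)$ with a mean-squared error, and the actor is trained to maximize the critic's value at its own actions, i.e. to minimize $-\frac{1}{N}\sum_s Q\big(s, \mu(s)\big)$.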
###Code
# optimizer_p = optim.SGD(policy_net.parameters(), lr=LEARN_RATE)
# optimizer_q = optim.SGD(q_net.parameters(), lr=LEARN_RATE)
optimizer_p = optim.Adam(policy_net.parameters(), lr=LEARN_RATE)
optimizer_q = optim.Adam(q_net.parameters(), lr=LEARN_RATE)
loss_fn = torch.nn.MSELoss()
def sample_batch():
experiences = experience_pool.sample(BATCH_SIZE)
experiences_batch = Experience(*zip(*experiences)) #experiences of batches, unpack twice
state_batch = torch.stack(experiences_batch.state)
action_batch = torch.stack(experiences_batch.action)
reward_batch = torch.stack(experiences_batch.reward)
next_state_batch = torch.stack(experiences_batch.next_state)
terminal_batch = torch.stack(experiences_batch.terminal)
state_action_batch = torch.cat((state_batch, action_batch), dim=1)
return state_batch, action_batch, reward_batch, next_state_batch, terminal_batch, state_action_batch
def update_q_net(r, ns, d, sa):
curr_q_value = q_net(sa).squeeze()
next_action = target_p_net(ns)
next_sa = torch.cat((ns, next_action), dim=1)
target_next_q_value = target_q_net(next_sa).squeeze()
target_q_value = r + GAMMA * target_next_q_value * (1 - d)
# mean square loss
loss = loss_fn(curr_q_value, target_q_value)
# Optimize the model
optimizer_q.zero_grad()
loss.backward()
optimizer_q.step()
return loss.item()
def update_policy_net(s):
curr_action = policy_net(s)
curr_sa = torch.cat((s, curr_action), dim=1)
## using q network
disable_gradient(q_net)
loss = -1.0 * torch.mean(q_net(curr_sa))
# Optimize the model
optimizer_p.zero_grad()
loss.backward()
optimizer_p.step()
enable_gradient(q_net)
return loss.item()
###Output
_____no_output_____
###Markdown
7. Train Loop
###Code
def evaluate():
state = torch.tensor(env.reset(), dtype=torch.float)
while True:
env.render()
action = policy_action(state).item()
next_state, _, done, _ = env.step(action)
state = torch.tensor(next_state, dtype=torch.float)
if done:
break # one episode
def train_loop():
for epoch in range(EPOCHS):
explore_steps = 0
reward = 0
# Initialize the environment and state
state = torch.tensor(env.reset(), dtype=torch.float) # s
while explore_steps < EPOCH_STEPS:
explore_steps += 1
# generate experience
done, next_state, r = explore_one_step(state)
state = next_state
reward += r
# Perform one step of the optimization
if len(experience_pool) > WARM_UP_SIZE:
s, _, r, ns, d, sa = sample_batch()
loss_q = update_q_net(r,ns,d,sa)
loss_p = update_policy_net(s)
copy_net(policy_net, target_p_net)
copy_net(q_net, target_q_net)
if done:
break # one episode
if epoch % 50 == 0 and len(experience_pool) > WARM_UP_SIZE:
evaluate()
print("epoch: ", epoch, "reward: ", reward, "loss_policy: ", loss_p, "loss_q: ", loss_q)
train_loop()
###Output
_____no_output_____
###Markdown
8. Load Saved Model
###Code
#torch.save(policy_net.state_dict(), 'policy.pt')
#policy_net.load_state_dict(torch.load('policy.pt'))
#evaluate()
###Output
_____no_output_____ |
AIML Assignment Day 31.ipynb | ###Markdown
Question : Use IRIS dataset from Sklearn and perform KNN. Compare with logistic regression
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
iris = datasets.load_iris()
dir(iris)
X = iris.data
y = iris.target
print(X.shape, y.shape)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=49)
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)
scaler = StandardScaler()
scaler.fit(X_train)
scaled_train_x = scaler.transform(X_train)
scaled_test_x = scaler.transform(X_test)  # reuse the scaler fit on the training data; do not refit on the test set
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaled_train_x, y_train)
ypred = knn.predict(scaled_test_x)
accuracy_score(y_test,ypred)
cm = confusion_matrix(y_test, ypred)
cm
sns.heatmap(cm, annot=True)
plt.show()
log_reg = LogisticRegression()
log_reg.fit(scaled_train_x,y_train)
ypred1 = log_reg.predict(scaled_test_x)
accuracy_score(y_test, ypred1)
cm1 = confusion_matrix(y_test, ypred1)
cm1
sns.heatmap(cm1, annot=True)
plt.show()
###Output
_____no_output_____ |
tutorials/lecture_05/solution_tutorial_05.ipynb | ###Markdown
Convolutional Neural Networks
* Today we will look at convolutional neural networks (CNNs), a type of neural network that is especially well suited for dealing with images. In the previous tutorial we saw that multilayer perceptrons have strong limitations when dealing with high-dimensional inputs; in this tutorial we will see how CNNs overcome these limitations.
* The goal of this notebook is to make you familiar with CNNs and to keep building your familiarity with PyTorch's automatic differentiation.
* We will be training a CNN on the Hymenoptera dataset, which you can download from [here](https://download.pytorch.org/tutorial/hymenoptera_data.zip).
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import torch.nn.functional as F
import time
import os
import copy
from matplotlib import pyplot as plt
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
1. Load the dataset
* As we saw last week, we will load the dataset in a similar fashion by making use of PyTorch's dataloaders. To keep things as simple as possible for now, we would like the data to have the same dimensionality as the CIFAR-10 data we loaded last week.
Visualize the data: if the dataset is loaded properly, you should be able to see some random images of bees and ants, which are the two classes we will be classifying today.
###Code
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(32),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(32),
transforms.CenterCrop(32),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
trainloader = dataloaders['train']
valloader = dataloaders['val']
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001)
inputs, classes = next(iter(dataloaders['train']))
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
###Output
_____no_output_____
###Markdown
Computation Graph
You are free to define any kind of convolutional neural network that you think is reasonable for today's problem. Remember that convolutional neural networks are usually a combination of the following building blocks:
* Convolutional layers
* Pooling layers
* Linear layers
It is your task today to arrange these components into a reasonable architecture.
###Code
import torch.nn.functional as F
class ConvNet(nn.Module):
def __init__(self, output_dim):
super(ConvNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, 3, 2, 1), #in_channels, out_channels, kernel_size, stride, padding
nn.ReLU(inplace=True),
nn.MaxPool2d(2), #kernel_size
nn.Conv2d(64, 192, 3, padding = 1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(192, 384, 3, padding = 1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, 3, padding = 1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, 3, padding = 1),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 2 * 2, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, output_dim),
)
def forward(self, x):
x = self.features(x)
x = x.view(x.shape[0], -1)
x = self.classifier(x)
return x
###Output
_____no_output_____
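With 32×32 inputs, the feature extractor above shrinks the spatial size to (32 − 3 + 2·1)/2 + 1 = 16 after the first strided convolution, then 16 → 8 → 4 → 2 through the three max-pooling layers, which is why the classifier's first linear layer expects 256 · 2 · 2 input features.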
###Markdown
Training Set-up: just like last week, once your computation graph is defined you will have to define your experimental set-up. This includes the construction of the network, the definition of an objective function to minimize, and an optimization algorithm.
###Code
net = ConvNet(2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Train the network: once everything is ready you will have to train the network and measure its performance. Discuss the performance of your model.
###Code
def calculate_accuracy(x, y):
preds = x.argmax(1, keepdim=True)
correct = preds.eq(y.view_as(preds)).sum()
acc = correct.float()/preds.shape[0]
return acc
def train(model, optimizer):
model.train()
for epoch in range(3):
training_loss = 0
training_acc = 0
validation_loss = 0
validation_acc = 0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
acc = calculate_accuracy(outputs, labels)
loss.backward()
optimizer.step()
training_loss += loss.item()
training_acc += acc.item()
with torch.no_grad():
for i, data in enumerate(valloader, 0):
val_inputs, val_labels = data
                outputs = model(val_inputs)
                val_loss = criterion(outputs, val_labels)
                acc = calculate_accuracy(outputs, val_labels)
                validation_loss += val_loss.item()  # track validation loss so the second print is meaningful
                validation_acc += acc.item()
print(training_loss / len(trainloader), training_acc / len(trainloader))
print(validation_loss / len(valloader), validation_acc / len(valloader))
print('Finished Training')
train(net, optimizer)
###Output
(0.694684446835127, 0.430327868852459)
(0, 29.807692307692307)
(0.6945546894777016, 0.39344262295081966)
(0, 31.576923076923077)
###Markdown
Use a pre-trained modelYou might have noticed that training a CNN from scratch can be very slow and that the overall performance might not be satisfying. To overcome this we can use a pre-trained network and fine-tune it on our Hymenoptera dataset.You are free to choose any pre-trained model that the PyTorch library offers evaluate and compare its performance to the CNN you have built yourself.
###Code
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
train(model_ft, optimizer_ft)
###Output
(0.5604771497796793, 0.7131147540983607)
(0, 43.5)
|
challenge-2.ipynb | ###Markdown
IBM Quantum Challenge Fall 2021 Challenge 2: Calculate bandgap of OLED molecules We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience. IntroductionOrganic Light Emitting Diodes or OLEDs have become increasingly popular in recent years as the basis for fabrication of thin, flexible TV and mobile phone displays that emit light upon application of an electric current. Recent studies ([**Gao et al., 2021**](https://www.nature.com/articles/s41524-021-00540-6)) have been looking at electronic transitions of high energy states in phenylsulfonyl-carbazole (PSPCz) molecules, which could be useful thermally activated delayed fluorescence (TADF) emitters for OLED technology. TADF emitters could potentially produce OLEDs that perform with 100 percent internal quantum efficiency (IQE), i.e the fraction of the charge carriers in a circuit or system that emit absorbed photons, compared with conventional fluorophores currently used to make OLEDs whose quantum efficiencies are limited to 25 percent. That large boost in efficiency means manufacturers could produce OLEDs for use in devices requiring low-power consumption, such as cell phones, which could in turn lead to future developments where virtually any surface can be converted into a cheap and energy-efficient lighting source covering vast areas of homes, offices, museums and more! Why quantum?Quantum computers could be invaluable tools for studying the electronic structure and dynamical properties of complex molecules and materials as it makes more sense to model quantum mechanical systems on a quantum device than on a classical computer. A recent joint research project by IBM Quantum and partners was successful in developing methods to improve accuracy for the calculation of excited TADF states for efficient OLEDs, making it the world's first research case of applying quantum computers to the calculation of excited states of commercial materials (see paper linked above for reference). With this background information, we are interested in describing quantum computations of the “excited states,” or high energy states, of industrial chemical compounds that could potentially be used in the fabrication of efficient OLED devices. Challenge**Goal**The goal of this challenge is to use quantum algorithms to reliably predict the excited states energies of these TADF materials. Along the way, this challenge introduces state-of-the-art hybrid classical-quantum embedded chemistry modelling allowing the splitting of the work-load between classical approximations and more accurate quantum calculations. 1. **Challenge 2a & 2b**: Understanding the atomic orbitals (AO), molecular orbitals (MO) and how to reduce the number of orbitals using active space transformation.2. **Challenge 2c & 2d**: Calculating ground state energy of PSPCz molecule using NumPy and Variational Quantum Eigensolver (VQE).3. **Challenge 2e**: Calculating excited state energy of PSPCz module using quantum Equation-of-Motion (QEOM) algorithm.4. **Challenge 2f**: Running VQE on the cloud (simulator or real quantum system) using Qiskit Runtime.Before you begin, we recommend watching the [**Qiskit Nature Demo Session with Max Rossmannek**](https://youtu.be/UtMVoGXlz04?t=38) and check out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-nature) to learn how to define electronic structure calculations. 1. 
Driver: the interfaces to the classical chemistry codes that are available in Qiskit are called drivers; for example, `PSI4Driver`, `PyQuanteDriver` and `PySCFDriver` are available. By running a driver (a Hartree-Fock calculation for a given basis set and molecular geometry) in the cell below, we obtain all the necessary information about our molecule so that we can then apply a quantum algorithm.
###Code
from qiskit_nature.drivers import Molecule
from qiskit_nature.drivers.second_quantization import ElectronicStructureDriverType, ElectronicStructureMoleculeDriver
# PSPCz molecule
geometry = [['C', [ -0.2316640, 1.1348450, 0.6956120]],
['C', [ -0.8886300, 0.3253780, -0.2344140]],
['C', [ -0.1842470, -0.1935670, -1.3239330]],
['C', [ 1.1662930, 0.0801450, -1.4737160]],
['C', [ 1.8089230, 0.8832220, -0.5383540]],
['C', [ 1.1155860, 1.4218050, 0.5392780]],
['S', [ 3.5450920, 1.2449890, -0.7349240]],
['O', [ 3.8606900, 1.0881590, -2.1541690]],
['C', [ 4.3889120, -0.0620730, 0.1436780]],
['O', [ 3.8088290, 2.4916780, -0.0174650]],
['C', [ 4.6830900, 0.1064460, 1.4918230]],
['C', [ 5.3364470, -0.9144080, 2.1705280]],
['C', [ 5.6895490, -2.0818670, 1.5007820]],
['C', [ 5.4000540, -2.2323130, 0.1481350]],
['C', [ 4.7467230, -1.2180160, -0.5404770]],
['N', [ -2.2589180, 0.0399120, -0.0793330]],
['C', [ -2.8394600, -1.2343990, -0.1494160]],
['C', [ -4.2635450, -1.0769890, 0.0660760]],
['C', [ -4.5212550, 0.2638010, 0.2662190]],
['C', [ -3.2669630, 0.9823890, 0.1722720]],
['C', [ -2.2678900, -2.4598950, -0.3287380]],
['C', [ -3.1299420, -3.6058560, -0.3236210]],
['C', [ -4.5179520, -3.4797390, -0.1395160]],
['C', [ -5.1056310, -2.2512990, 0.0536940]],
['C', [ -5.7352450, 1.0074800, 0.5140960]],
['C', [ -5.6563790, 2.3761270, 0.6274610]],
['C', [ -4.4287740, 3.0501460, 0.5083650]],
['C', [ -3.2040560, 2.3409470, 0.2746950]],
['H', [ -0.7813570, 1.5286610, 1.5426490]],
['H', [ -0.7079140, -0.7911480, -2.0611600]],
['H', [ 1.7161320, -0.2933710, -2.3302930]],
['H', [ 1.6308220, 2.0660550, 1.2427990]],
['H', [ 4.4214900, 1.0345500, 1.9875450]],
['H', [ 5.5773000, -0.7951290, 3.2218590]],
['H', [ 6.2017810, -2.8762260, 2.0345740]],
['H', [ 5.6906680, -3.1381740, -0.3739110]],
['H', [ 4.5337010, -1.3031330, -1.6001680]],
['H', [ -1.1998460, -2.5827750, -0.4596910]],
['H', [ -2.6937370, -4.5881470, -0.4657540]],
['H', [ -5.1332290, -4.3740010, -0.1501080]],
['H', [ -6.1752900, -2.1516170, 0.1987120]],
['H', [ -6.6812260, 0.4853900, 0.6017680]],
['H', [ -6.5574610, 2.9529350, 0.8109620]],
['H', [ -4.3980410, 4.1305040, 0.5929440]],
['H', [ -2.2726630, 2.8838620, 0.1712760]]]
molecule = Molecule(geometry=geometry, charge=0, multiplicity=1)
driver = ElectronicStructureMoleculeDriver(molecule=molecule,
basis='631g*',
driver_type=ElectronicStructureDriverType.PYSCF)
###Output
/opt/conda/lib/python3.8/site-packages/pyscf/lib/misc.py:47: H5pyDeprecationWarning: Using default_file_mode other than 'r' is deprecated. Pass the mode to h5py.File() instead.
h5py.get_config().default_file_mode = 'a'
###Markdown
**Challenge 2a** Question: Find out these numbers for the PSPCz molecule.
1. What is the number of C, H, N, O, S atoms?
2. What is the total number of atoms?
3. What is the total number of atomic orbitals (AO)?
4. What is the total number of molecular orbitals (MO)?
**How to count atomic orbitals?** The number depends on the basis. The numbers below are specific to the `631g*` basis, which we will use for this challenge.
- C: 1s, 2s2p, 3s3p3d = 1+4+9 = 14
- H: 1s, 2s = 1+1 = 2
- N: 1s, 2s2p, 3s3p3d = 1+4+9 = 14
- O: 1s, 2s2p, 3s3p3d = 1+4+9 = 14
- S: 1s, 2s2p, 3s3p3d, 4s4p = 1+4+9+4 = 18
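As a quick sanity check of these counting rules, a hypothetical benzene molecule (C6H6) in the same `631g*` basis would have 6 × 14 + 6 × 2 = 96 atomic orbitals; PSPCz uses the same per-atom counts, just over many more atoms.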
###Code
num_ao = {
'C': 14,
'H': 2,
'N': 14,
'O': 14,
'S': 18,
}
##############################
# Provide your code here
num_C_atom = sum(x.count('C') for x in geometry)
num_H_atom = sum(x.count('H') for x in geometry)
num_N_atom = sum(x.count('N') for x in geometry)
num_O_atom = sum(x.count('O') for x in geometry)
num_S_atom = sum(x.count('S') for x in geometry)
num_atoms_total = num_C_atom + num_H_atom + num_N_atom + num_O_atom + num_S_atom
num_AO_total = num_ao['C']*num_C_atom + num_ao['H']*num_H_atom+num_ao['N']*num_N_atom+num_ao['O']*num_O_atom+num_ao['S']*num_S_atom
num_MO_total = num_AO_total
##############################
answer_ex2a ={
'C': num_C_atom,
'H': num_H_atom,
'N': num_N_atom,
'O': num_O_atom,
'S': num_S_atom,
'atoms': num_atoms_total,
'AOs': num_AO_total,
'MOs': num_MO_total
}
print(answer_ex2a)
# Check your answer and submit using the following code
from qc_grader import grade_ex2a
grade_ex2a(answer_ex2a)
###Output
Submitting your answer for 2a. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
As you found out yourself in the exercise above, PSPCz is a large molecule, consisting of many atoms and many atomic orbitals. Direct calculation of a large molecule is out of reach for current quantum systems. However, since we are only interested in the bandgap, calculating the energy of the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO) is sufficient: $$E_g = E_{LUMO} - E_{HOMO}$$ Here we applied a technique called active space transformation to reduce the number of molecular orbitals to only 2 (HOMO and LUMO). In the accompanying orbital diagram, each circle represents an electron in an orbital; when light or energy of a high enough frequency is absorbed by an electron in the HOMO, it jumps to the LUMO. For PSPCz molecules, we limit this excited state to just the first singlet and triplet states. In a singlet state, all electrons in a system are spin paired, giving them only one possible orientation in space. A singlet or triplet excited state can form by exciting one of the two electrons to a higher energy level. The excited electron retains the same spin orientation in a singlet excited state, whereas in a triplet excited state, the excited electron has the same spin orientation as the ground state electron. Spin in the ground and excited states: one set of electron spins is unpaired in a triplet state, meaning there are three possible orientations in space with respect to the axis. The accompanying figures show the LUMO (a-c) and HOMO (d-f) orbitals of the triplet-state optimized structures of PSPCz (a, d) and its variants 2F-PSPCz (b, e) and 4F-PSPCz (c, f). By using the active space transformer method, we manage to exclude non-core electronic states by restricting the calculation to the singlet and triplet, i.e. the smallest possible active space, and we can compute this energy with a small number of qubits while keeping a high-quality description of the system.
###Code
from qiskit_nature.drivers.second_quantization import HDF5Driver
driver_reduced = HDF5Driver("resources/PSPCz_reduced.hdf5")
properties = driver_reduced.run()
from qiskit_nature.properties.second_quantization.electronic import ElectronicEnergy
electronic_energy = properties.get_property(ElectronicEnergy)
print(electronic_energy)
###Output
ElectronicEnergy
(AO) 1-Body Terms:
Alpha
<(430, 430) matrix with 184900 non-zero entries>
[0, 0] = -11.481107571585675
[0, 1] = -2.6982522446048134
[0, 2] = -2.237143188610541
[0, 3] = 0.0017433998087159669
[0, 4] = 0.0007741436199762753
... skipping 184895 entries
Beta
<(430, 430) matrix with 184900 non-zero entries>
[0, 0] = -11.481107571585675
[0, 1] = -2.6982522446048134
[0, 2] = -2.237143188610541
[0, 3] = 0.0017433998087159669
[0, 4] = 0.0007741436199762753
... skipping 184895 entries
(MO) 1-Body Terms:
Alpha
<(2, 2) matrix with 4 non-zero entries>
[0, 0] = -0.4968112637934733
[0, 1] = 0.00027750088691888997
[1, 0] = 0.00027750088691825913
[1, 1] = -0.1843594001763901
Beta
<(2, 2) matrix with 4 non-zero entries>
[0, 0] = -0.4968112637934733
[0, 1] = 0.00027750088691888997
[1, 0] = 0.00027750088691825913
[1, 1] = -0.1843594001763901
(MO) 2-Body Terms:
Alpha-Alpha
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Beta-Alpha
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Beta-Beta
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Alpha-Beta
<(2, 2, 2, 2) matrix with 16 non-zero entries>
[0, 0, 0, 0] = 0.22795982746869856
[0, 0, 0, 1] = -0.00027753808830176344
[0, 0, 1, 0] = -0.00027753808830176615
[0, 0, 1, 1] = 0.13689436105642472
[0, 1, 0, 0] = -0.0002775380883017597
... skipping 11 entries
Energy Shifts:
ActiveSpaceTransformer = -4042.866322560092
###Markdown
You can see that `(AO) 1-Body Terms` contains a (430 x 430) matrix which describes the original molecule with 430 atomic orbitals, which translate to 430 molecular orbitals. After the `ActiveSpaceTransformation` (pre-calculated), the number of molecular orbitals in `(MO) 1-Body Terms` is reduced to a (2x2) matrix.
**Challenge 2b** Question: Use the property framework to find the answers to the questions below.
1. What is the number of electrons in the system after active space transformation?
2. What is the number of molecular orbitals (MO)?
3. What is the number of spin orbitals (SO)?
4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?
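For context, a reduction like this could be produced with Qiskit Nature's `ActiveSpaceTransformer`; the exact settings used to pre-compute `PSPCz_reduced.hdf5` are not given here, so the 2-electron/2-orbital choice below is an assumption for illustration only:
```python
# Sketch only (assumed settings): select an active space of 2 electrons in 2 orbitals (HOMO and LUMO)
from qiskit_nature.transformers.second_quantization.electronic import ActiveSpaceTransformer

active_space = ActiveSpaceTransformer(num_electrons=2, num_molecular_orbitals=2)
# reduced_properties = active_space.transform(driver.run())  # expensive: runs the full 430-orbital calculation first
```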
###Code
from qiskit_nature.properties.second_quantization.electronic import ParticleNumber
##############################
# Provide your code here
particle_number = properties.get_property(ParticleNumber)
num_electron = sum(particle_number.num_particles)
num_MO = particle_number.num_spin_orbitals//2
num_SO = particle_number.num_spin_orbitals
num_qubits = num_SO
#print(particle_number)
##############################
answer_ex2b = {
'electrons': num_electron,
'MOs': num_MO,
'SOs': num_SO,
'qubits': num_qubits
}
print(answer_ex2b)
# Check your answer and submit using the following code
from qc_grader import grade_ex2b
grade_ex2b(answer_ex2b)
###Output
Submitting your answer for 2b. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
2. Electronic structure problem: you can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings). This is the first step in defining your molecular system in its ground state. You can read more about solving for the ground state in [**this tutorial**](https://qiskit.org/documentation/nature/tutorials/03_ground_state_solvers.html).
###Code
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
##############################
# Provide your code here
es_problem = ElectronicStructureProblem(driver_reduced)
##############################
second_q_op = es_problem.second_q_ops()
print(second_q_op[0])
###Output
Fermionic Operator
register length=4, number terms=26
(0.01572205126528473+0j) * ( +_0 -_1 +_2 -_3 )
+ (-0.01572205126528473+0j) * ( +_0 -_1 -_2 +_3 )
+ (0.00027750088691888997+0j) * ( +_0 -_1 )
+ (0.0003149147870892302+0j) * ( +_0 -_1 +_3 -_3 )
+ ...
###Markdown
3. QubitConverter: this allows you to define the qubit mapping that you will use in the simulation.
###Code
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import JordanWignerMapper, ParityMapper, BravyiKitaevMapper
##############################
# Provide your code here
qubit_converter = QubitConverter(JordanWignerMapper())
##############################
qubit_op = qubit_converter.convert(second_q_op[0])
print(qubit_op)
###Output
-0.45781773131305903 * IIII
- 0.009666607989543467 * ZIII
+ 0.12689900731767084 * IZII
+ 0.030293077447785 * ZZII
- 0.009666607989543479 * IIZI
+ 0.03732964036584735 * ZIZI
+ 0.034223590264106186 * IZZI
+ 0.12689900731767084 * IIIZ
+ 0.034223590264106186 * ZIIZ
+ 0.05698995686717464 * IZIZ
+ 0.030293077447785 * IIZZ
+ 0.00014809461815615455 * XXII
+ 0.00014809461815615455 * YYII
- 7.872869677230731e-05 * XXZI
- 7.872869677230731e-05 * YYZI
+ 6.938452207544002e-05 * XXIZ
+ 6.938452207544002e-05 * YYIZ
+ 0.00014809461815615455 * IIXX
- 7.872869677230731e-05 * ZIXX
+ 6.938452207544002e-05 * IZXX
+ 0.00014809461815615455 * IIYY
- 7.872869677230731e-05 * ZIYY
+ 6.938452207544002e-05 * IZYY
+ 0.003930512816321183 * XXXX
+ 0.003930512816321183 * YYXX
+ 0.003930512816321183 * XXYY
+ 0.003930512816321183 * YYYY
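The Jordan-Wigner operator printed above acts on 4 qubits. As an alternative sketch (the same idea is suggested later for the real-device exercise), a parity mapping with two-qubit reduction would need only 2 qubits:
```python
# Sketch: parity mapping with two-qubit reduction (2 qubits instead of 4)
parity_converter = QubitConverter(ParityMapper(), two_qubit_reduction=True)
qubit_op_parity = parity_converter.convert(second_q_op[0], num_particles=particle_number.num_particles)
print(qubit_op_parity)
```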
###Markdown
4. Initial state: a good initial state in chemistry is the Hartree-Fock state. We can initialize it as follows:
###Code
from qiskit_nature.circuit.library import HartreeFock
##############################
# Provide your code here
init_state = HartreeFock(num_spin_orbitals = num_SO, num_particles = particle_number.num_particles, qubit_converter = qubit_converter)
##############################
init_state.draw()
###Output
_____no_output_____
###Markdown
5. Ansatz: one of the most important choices is the quantum circuit that you choose to approximate your ground state. The Qiskit circuit library, imported below, contains many possibilities for making your own circuit.
###Code
from qiskit.circuit.library import EfficientSU2, TwoLocal, NLocal, PauliTwoDesign
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
##############################
# Provide your code here
ansatz = EfficientSU2(num_qubits)
##############################
ansatz.decompose().draw()
###Output
/opt/conda/lib/python3.8/site-packages/sympy/core/expr.py:3949: SymPyDeprecationWarning:
expr_free_symbols method has been deprecated since SymPy 1.9. See
https://github.com/sympy/sympy/issues/21494 for more info.
SymPyDeprecationWarning(feature="expr_free_symbols method",
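If you would like to try a lighter circuit, a `TwoLocal` ansatz is one alternative; the rotation blocks, entangler and depth below are illustrative choices, not the challenge's reference answer:
```python
# Sketch only: a shallower heuristic ansatz as an alternative to EfficientSU2
alt_ansatz = TwoLocal(num_qubits, rotation_blocks='ry', entanglement_blocks='cx', entanglement='linear', reps=2)
alt_ansatz.decompose().draw()
```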
###Markdown
Ground state energy calculation: calculation using NumPy. For learning purposes, we can solve the problem exactly by diagonalizing the Hamiltonian matrix, so we know where to aim with VQE. Of course, the dimensions of this matrix scale exponentially in the number of molecular orbitals, so you can try doing this for a large molecule of your choice and see how slow this becomes. For very large systems you would run out of memory trying to store their wavefunctions.
###Code
from qiskit.algorithms import NumPyMinimumEigensolver
from qiskit_nature.algorithms import GroundStateEigensolver
##############################
# Provide your code here
numpy_solver = NumPyMinimumEigensolver()
numpy_ground_state_solver = GroundStateEigensolver(qubit_converter, numpy_solver)
numpy_results = numpy_ground_state_solver.solve(es_problem)
##############################
exact_energy = numpy_results.computed_energies[0]
print(f"Exact electronic energy: {exact_energy:.6f} Hartree\n")
print(numpy_results)
# Check your answer and submit using the following code
from qc_grader import grade_ex2c
grade_ex2c(numpy_results)
###Output
Submitting your answer for 2c. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
Calculation using VQE: the next step is to use VQE to calculate this ground state energy; with that, you will have found the solution to one half of your electronic structure problem!
###Code
from qiskit.providers.aer import StatevectorSimulator, QasmSimulator
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
##############################
# Provide your code here
backend = StatevectorSimulator(precision='single')
optimizer = SLSQP()
##############################
from qiskit.algorithms import VQE
from qiskit_nature.algorithms import VQEUCCFactory, GroundStateEigensolver
from jupyterplot import ProgressPlot
import numpy as np
error_threshold = 10 # mHartree
np.random.seed(5) # fix seed for reproducibility
initial_point = np.random.random(ansatz.num_parameters)
# for live plotting
pp = ProgressPlot(plot_names=['Energy'],
line_names=['Runtime VQE', f'Target + {error_threshold}mH', 'Target'])
intermediate_info = {
'nfev': [],
'parameters': [],
'energy': [],
'stddev': []
}
def callback(nfev, parameters, energy, stddev):
intermediate_info['nfev'].append(nfev)
intermediate_info['parameters'].append(parameters)
intermediate_info['energy'].append(energy)
intermediate_info['stddev'].append(stddev)
pp.update([[energy, exact_energy+error_threshold/1000, exact_energy]])
##############################
# Provide your code here
# One possible completion (a sketch): build the VQE from the ansatz, optimizer,
# initial point and callback defined above, and run it on the statevector backend.
vqe_solver = VQE(ansatz=ansatz,
                 optimizer=optimizer,
                 initial_point=initial_point,
                 callback=callback,
                 quantum_instance=backend)
vqe_ground_state_solver = GroundStateEigensolver(qubit_converter, vqe_solver)
vqe_results = vqe_ground_state_solver.solve(es_problem)
##############################
print(vqe_results)
error = (vqe_results.computed_energies[0] - exact_energy) * 1000 # mHartree
print(f'Error is: {error:.3f} mHartree')
# Check your answer and submit using the following code
from qc_grader import grade_ex2d
grade_ex2d(vqe_results)
###Output
_____no_output_____
###Markdown
Excited state calculation: calculation using QEOM. For the molecule of our interest we also need to compute the same quantity, but this time for the excited states of our molecular Hamiltonian. Since we've already defined the system, we now need to access the excitation energies using the quantum Equation of Motion (qEOM) algorithm, which does this by solving the pseudo-eigenvalue problem $$\begin{pmatrix} M & Q \\ Q^* & M^* \end{pmatrix} \begin{pmatrix} X_n \\ Y_n \end{pmatrix} = E_{0n} \begin{pmatrix} V & W \\ -W^* & -V^* \end{pmatrix} \begin{pmatrix} X_n \\ Y_n \end{pmatrix}$$ with matrix elements such as $$M_{\mu_\alpha \nu_\beta} = \langle 0 | [(\hat{E}_{\mu_\alpha})^{\dagger}, \hat{H}, \hat{E}_{\nu_\beta}] | 0 \rangle,$$ where each corresponding matrix element must be measured on our quantum computer with its corresponding ground state. To learn more, you can read up about excited state calculation in [**this tutorial**](https://qiskit.org/documentation/nature/tutorials/04_excited_states_solvers.html), and about qEOM itself in the [**corresponding paper by Ollitrault et al., 2019**](https://arxiv.org/abs/1910.12890).
###Code
from qiskit_nature.algorithms import QEOM
##############################
# Provide your code here
# One possible completion (a sketch): qEOM built on top of the VQE ground-state
# solver above, using single and double excitations ('sd').
qeom_excited_state_solver = QEOM(vqe_ground_state_solver, 'sd')
qeom_results = qeom_excited_state_solver.solve(es_problem)
##############################
print(qeom_results)
# Check your answer and submit using the following code
from qc_grader import grade_ex2e
grade_ex2e(qeom_results)
###Output
_____no_output_____
###Markdown
Finally, you just need to calculate the band gap or energy gap (which is the minimum amount of energy required by an electron to break free of its ground state into its excited state) by computing the difference of the two sets of energies that you have calculated.
###Code
bandgap = qeom_results.computed_energies[1] - qeom_results.computed_energies[0]
bandgap # in Hartree
###Output
_____no_output_____
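If you prefer more familiar units, the gap can be converted from Hartree to electron volts using the physical constant 1 Hartree ≈ 27.2114 eV:
```python
bandgap_ev = bandgap * 27.2114  # 1 Hartree ≈ 27.2114 eV
bandgap_ev
```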
###Markdown
Running VQE on the cloud using Qiskit Runtime. Qiskit Runtime is a new architecture offered by IBM Quantum that streamlines computations requiring many iterations. These experiments will execute significantly faster within this improved hybrid quantum/classical process. Qiskit Runtime allows authorized users to upload their Qiskit quantum programs for themselves or others to use. A Qiskit quantum program, also called a Qiskit Runtime program, is a piece of Python code that takes certain inputs, performs quantum and maybe classical computation, interactively provides intermediate results if desired, and returns the processing results. The same or other authorized users can then invoke these quantum programs by simply passing in the required input parameters. To run the VQE using Qiskit Runtime, we only have to make very few changes from the local VQE run; mainly we have to replace the VQE class by the VQEProgram class. Both follow the same MinimumEigensolver interface and thus share the compute_minimum_eigenvalue method to execute the algorithm and return the same type of result object. Merely the signature of the initializer differs slightly. We start by choosing the provider with access to the Qiskit Runtime service and the backend to execute the circuits on. For more information about Qiskit Runtime, please refer to the [**VQEProgram**](https://qiskit.org/documentation/partners/qiskit_runtime/tutorials/vqe.html#Runtime-VQE:-VQEProgram) and [**Leveraging Qiskit Runtime**](https://qiskit.org/documentation/nature/tutorials/07_leveraging_qiskit_runtime.html) tutorials.
###Code
from qc_grader.util import get_challenge_provider
provider = get_challenge_provider()
if provider:
backend = provider.get_backend('ibmq_qasm_simulator')
from qiskit_nature.runtime import VQEProgram
error_threshold = 10 # mHartree
# for live plotting
pp = ProgressPlot(plot_names=['Energy'],
line_names=['Runtime VQE', f'Target + {error_threshold}mH', 'Target'])
intermediate_info = {
'nfev': [],
'parameters': [],
'energy': [],
'stddev': []
}
def callback(nfev, parameters, energy, stddev):
intermediate_info['nfev'].append(nfev)
intermediate_info['parameters'].append(parameters)
intermediate_info['energy'].append(energy)
intermediate_info['stddev'].append(stddev)
pp.update([[energy,exact_energy+error_threshold/1000, exact_energy]])
##############################
# Provide your code here
optimizer = {
'name': 'QN-SPSA', # leverage the Quantum Natural SPSA
# 'name': 'SPSA', # set to ordinary SPSA
'maxiter': 100,
}
# One possible completion (a sketch): mirror the local VQE inputs in the runtime program
runtime_vqe = VQEProgram(ansatz=ansatz,
                         optimizer=optimizer,
                         initial_point=initial_point,
                         provider=provider,
                         backend=backend,
                         shots=1024,
                         callback=callback)
##############################
###Output
_____no_output_____
###Markdown
**Challenge 2f grading** The grading for this exercise is slightly different from the previous exercises. 1. You will first need to use `prepare_ex2f` to submit a runtime job to IBM Quantum (to run on a simulator), using `runtime_vqe (VQEProgram)`, `qubit_converter (QubitConverter)`, `es_problem (ElectronicStructureProblem)`. Depending on the queue, the job can take up to a few minutes to complete. Under the hood, the `prepare_ex2f` does the following:```pythonruntime_vqe_groundstate_solver = GroundStateEigensolver(qubit_converter, runtime_vqe)runtime_vqe_result = runtime_vqe_groundstate_solver.solve(es_problem)``` 2. After the job has completed, you can use `grade_ex2f` to check the answer and submit.
###Code
# Submit a runtime job using the following code
from qc_grader import prepare_ex2f
runtime_job = prepare_ex2f(runtime_vqe, qubit_converter, es_problem)
# Check your answer and submit using the following code
from qc_grader import grade_ex2f
grade_ex2f(runtime_job)
print(runtime_job.result().get("eigenvalue"))
###Output
_____no_output_____
###Markdown
Congratulations! You have submitted your first Qiskit Runtime program and passed the exercise.But the fun is not over! We have reserved a dedicated quantum system for the quantum challenge. As bonus exercise (not graded), you can try your hands on submitting a VQE runtime job to a real quantum system! **Running VQE on a real quantum system (Optional)** We have reserved a dedicated quantum system [`ibm_perth`](https://quantum-computing.ibm.com/services?services=systems&system=ibm_perth) for this challenge. Please follow the steps below to submit runtime job on the real quantum system. 1. Update backend selection to `ibm_perth` and pass it to `runtime_vqe` again ```python backend = provider.get_backend('ibm_perth') runtime_vqe = VQEProgram(... backend=backend, ...) ```2. Set `real_device` flag in `prepare_ex2f` to `True`.3. Run `prepare_ex2f` to submit a runtime job to `ibm_perth`.Note: Qiskit runtime speeds up VQE by up to 5 times. However, each runtime job can still take 30 ~ 60 minutes of quantum processor time. Therefore, **the queue time for completing a job can be hours or even days**, depending on how many participants are submitting jobs. To ensure a pleasant experience for all participants, please only submit a job to the real quantum system after trying with these settings using the simulator:1. Consider using `PartiyMapper` and set `two_qubit_reduction=True` to reduce number of qubits to 2 and make the VQE program converge to ground state energy faster (with lower number of iterations).1. Limit optimizer option `maxiter=100` or less. Use the simulator runs to find an optimal low number of iterations.1. Verify your runtime program is correct by passing `grade_ex2f` with simulator as backend.1. Limit your jobs to only 1 job per participant to allow more participants to try runtime on a real quantum system. Don't worry if your job is getting too long to execute or it can't be executed before the challenge ends. This is an optional exercise. You can still pass all challenge exercises and get a digital badge without running a job on the real quantum system.
###Code
# Please change backend to ibm_perth before running the following code
runtime_job_real_device = prepare_ex2f(runtime_vqe, qubit_converter, es_problem, real_device=True)
print(runtime_job_real_device.result().get("eigenvalue"))
###Output
_____no_output_____
###Markdown
 📌 Challenge 2: Making Data Easy Now that we have successfully accessed the data archives from our friends back on Earth, it's time to explore what pets there are. This will help us get an idea of our options and which pets can be most suitable for life on Mars. **Rover**: *I hope it's dogs!* 🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟
###Code
#❗️Run this cell
import interactive as i
from IPython.display import YouTubeVideo
#YouTubeVideo('gCcyJdQyvjo')
###Output
_____no_output_____
###Markdown
📕 Debrief: Heads and TailsYou might notice when you view the dataset, it's the *whole* dataset. Say we wanted to only see the first few or the last few rows of the data. To do this, we can use `dataset.head()` to see the first 5 rows or `dataset.tail()` to see the last 5 rows. Try it out below! :)
###Code
#❗️Run this cell
i.challenge2a()
###Output
_____no_output_____
###Markdown
📕 Debrief: Selecting Data ColumnsWith the data now easily accessible, let's try using some cool functions from the `pandas` library to get to know our data more. For example, to access the values in a column, we can use `dataset["column_name"]`. In Python `""` and `''` are the same.You can also select multiple columns like this: `dataset[["column_name1", "column_name2"]]`. Note the double `[[ ]]` will give you a dataframe, which is most of the time what we want, versus `[]` which will give you only the single column. Later on we will focus more on getting multiple columns, so right now let's just focus on seeing one.To list all the columns in a dataset, use `dataset.columns`.
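Here is a tiny, self-contained sketch of those selection patterns; the `pets` frame below is made up purely for illustration and is not the challenge dataset:
```python
import pandas as pd

# hypothetical mini-dataset, only for showing the syntax
pets = pd.DataFrame({"name": ["Rex", "Mittens", "Bubbles"],
                     "species": ["dog", "cat", "fish"],
                     "age": [3, 5, 1]})

pets.head()            # first 5 rows (here: all 3)
pets.tail()            # last 5 rows
pets["species"]        # one column -> a Series
pets[["name", "age"]]  # several columns -> a DataFrame
pets.columns           # all column names
```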
###Code
#❗️Run this cell
i.challenge2b()
#❗️Run this cell
i.challenge2c()
#❗️Run this cell
i.challenge2d()
###Output
_____no_output_____
###Markdown
 📌 Challenge 2: Making Data Easy Now that we have successfully accessed the data archives from our friends back on Earth, it's time to explore what pets there are. This will help us get an idea of our options and which pets can be most suitable for life on Mars. **Rover**: *I hope it's dogs!* 🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟🌟
###Code
#❗️Run this cell
import interactive as i
from IPython.display import YouTubeVideo
YouTubeVideo('Y1BtlzhB6j4')
###Output
_____no_output_____
###Markdown
📕 Debrief: Heads and TailsYou might notice when you view the dataset, it's the *whole* dataset. Say we wanted to only see the first few or the last few rows of the data. To do this, we can use `dataset.head()` to see the first 5 rows or `dataset.tail()` to see the last 5 rows. Try it out below! :)
###Code
#❗️Run this cell
i.challenge2a()
###Output
_____no_output_____
###Markdown
📕 Debrief: Selecting Data ColumnsWith the data now easily accessible, let's try using some cool functions from the `pandas` library to get to know our data more. For example, to access the values in a column, we can use `dataset["column_name"]`. In Python `""` and `''` are the same.You can also select multiple columns like this: `dataset[["column_name1", "column_name2"]]`. Note the double `[[ ]]` will give you a dataframe, which is most of the time what we want, versus `[]` which will give you only the single column. Later on we will focus more on getting multiple columns, so right now let's just focus on seeing one.To list all the columns in a dataset, use `dataset.columns`.
###Code
#❗️Run this cell
i.challenge2b()
#❗️Run this cell
i.challenge2c()
#❗️Run this cell
i.challenge2d()
###Output
_____no_output_____
###Markdown
Challenge 2In this challenge we will continue working with the `Pokemon` dataset. We will attempt solving a slightly more complex problem in which we will practice the iterative data analysis process you leaned in [this video](https://www.youtube.com/watch?v=xOomNicqbkk).The problem statement is as follows:**You are at a Pokemon black market planning to buy a Pokemon for battle. All Pokemon are sold at the same price and you can only afford to buy one. You cannot choose which specific Pokemon to buy. However, you can specify the type of the Pokemon - one type that exists in either `Type 1` or `Type 2`. Which type should you choose in order to maximize your chance of receiving a good Pokemon?**To remind you about the 3 steps of iterative data analysis, they are:1. Setting Expectations1. Collecting Information1. Reacting to Data / Revising ExpectationsFollowing the iterative process, we'll guide you in completing the challenge. Problem Solving Iteration 1In this iteration we'll analyze the problem and identify the breakthrough. The original question statement is kind of vague because we don't know what a *good pokemon* really means as represented in the data. We'll start by understanding the dataset and see if we can find some insights.
###Code
# Import libraries
import numpy as np
import pandas as pd
# Importing the dataset from Ironhack's database
pokemon = pd.read_csv('/Users/gracemartinez/ironhack/daft-miami-1019-labs/module-1/Dataframe-Calculations/Pokemon.csv')
pokemon
###Output
_____no_output_____
###Markdown
From the data it seems whether a pokemon is good depends on its abilities as represented in the fields of `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed`, and `Total`. We are not sure about `Generation` and `Legendary` because they are not necessarily the decisive factors of the pokemon abilities.But `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, `Speed`, and `Total` are a lot of fields! If we look at them all at once it's very complicated. This isn't Mission Impossible but it's ideal that we tackle this kind of problem after we learn Machine Learning (which you will do in Module 3). For now, is there a way to consolidate the fields we need to look into?Fortunately there seems to be a way. It appears the `Total` field is computed based on the other 6 fields. But we need to prove our theory. If we can approve there is a formula to compute `Total` based on the other 6 abilities, we only need to look into `Total`.We have the following expectation now: The `Total` field is computed based on `HP`, `Attack`, `Defense`, `Sp. Atk`, `Sp. Def`, and `Speed`.We need to collect the following information:* **What is the formula to compute `Total`?*** **Does the formula work for all pokemon?**In the cell below, make a hypothesis on how `Total` is computed and test your hypothesis.
###Code
# your code here
# Hypothesis: Total = HP + Attack + Defense + Sp. Atk + Sp. Def + Speed
computed_total = pokemon['HP'] + pokemon['Attack'] + pokemon['Defense'] + pokemon['Sp. Atk'] + pokemon['Sp. Def'] + pokemon['Speed']
print((computed_total == pokemon['Total']).all())  # should print True if the formula holds for every pokemon
pokemon.head()
###Output
_____no_output_____
###Markdown
Problem Solving Iteration 2Now that we have consolidated the abilities fields, we can update the problem statement. The new problem statement is: Which pokemon type is most likely to have the highest `Total` value?In the updated problem statement, we assume there is a certain relationship between the `Total` and the pokemon type. But we have two *type* fields (`Type 1` and `Type 2`) that have string values. In data analysis, string fields have to be transformed to numerical format in order to be analyzed. In addition, keep in mind that `Type 1` always has a value but `Type 2` is sometimes empty (having the `NaN` value). Also, the pokemon type we choose may be either in `Type 1` or `Type 2`.Now our expectation is: `Type 1` and `Type 2` string variables need to be converted to numerical variables in order to identify the relationship between `Total` and the pokemon type.The information we need to collect is: How to convert two string variables to numerical?Let's address the first question first. You can use a method called **One Hot Encoding** which is frequently used in machine learning to encode categorical string variables to numerical. The idea is to gather all the possible string values in a categorical field and create a numerical field for each unique string value. Each of those numerical fields uses `1` and `0` to indicate whether the data record has the corresponding categorical value. A detailed explanation of One Hot Encoding can be found in [this article](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f). You will formally learn it in Module 3.For instance, if a pokemon has `Type 1` as `Poison` and `Type 2` as `Fire`, then its `Poison` and `Fire` fields are `1` whereas all other fields are `0`. If a pokemon has `Type 1` as `Water` and `Type 2` as `NaN`, then its `Water` field is `1` whereas all other fields are `0`. In the next cell, use One Hot Encoding to encode `Type 1` and `Type 2`. Use the pokemon type values as the names of the numerical fields you create.The new numerical variables you create should look like below:
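Before encoding the real columns, here is a minimal sketch of what `pd.get_dummies` produces on a made-up type column (the values are hypothetical):
```python
# hypothetical values, just to show the 0/1 output of one hot encoding
pd.get_dummies(pd.Series(['Poison', 'Fire', 'Water', 'Fire']))
```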
###Code
# your code here
pd.get_dummies(pokemon)
pd.concat([pd.get_dummies(pokemon[col]) for col in pokemon], axis=1, keys=pokemon.columns)
# ANSWER
# Encode both type columns; a pokemon gets a 1 for a type that appears in either Type 1 or Type 2
dummies = (pd.get_dummies(pokemon['Type 1'])
           .add(pd.get_dummies(pokemon['Type 2']), fill_value=0)
           .clip(upper=1)
           .astype(int))
pokemon[dummies.columns] = dummies
dummies
###Output
_____no_output_____
###Markdown
Problem Solving Iteration 3: Now that we have encoded the pokemon types, we will identify the relationship between `Total` and the encoded fields. Our expectation is: There are relationships between `Total` and the encoded pokemon type variables and we need to identify the correlations. The information we need to collect is: How to identify the relationship between `Total` and the encoded pokemon type fields? There are multiple ways to answer this question. The easiest way is to use correlation. In the cell below, calculate the correlation of `Total` to each of the encoded fields. Rank the correlations and identify the 1 pokemon type that is most likely to have the highest `Total`.
###Code
# your code here
dummies['Total'] = pokemon['Total']
corrs = dummies.corr()
corrs
###Output
_____no_output_____
###Markdown
Bonus Question: Say now you can choose both `Type 1` and `Type 2` of the pokemon. In order to receive the best pokemon, which types will you choose?
###Code
# your code here
pd.DataFrame(corrs['Total']).sort_values('Total', ascending = False)
# filter for best pokemon to choose
pokemon[(pokemon['Type 1']=='Dragon') & (pokemon['Type 2']=='Ice' )]
###Output
_____no_output_____ |
47-Regular Expressions.ipynb | ###Markdown
Regular Expressions: Regular expressions are text matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!). If you're familiar with Perl, you'll notice that the syntax for regular expressions is very similar in Python. We will be using the re module with Python for this lecture. Let's get started! Searching for Patterns in Text: One of the most common uses for the re module is finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
###Code
import re
# List of patterns to search for
patterns = [ 'term1', 'term2' ]
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
print 'Searching for "%s" in: \n"%s"' % (pattern, text),
#Check for match
if re.search(pattern, text):
print '\n'
print 'Match was found. \n'
else:
print '\n'
print 'No Match was found.\n'
###Output
Searching for "term1" in:
"This is a string with term1, but it does not have the other term."
Match was found.
Searching for "term2" in:
"This is a string with term1, but it does not have the other term."
No Match was found.
###Markdown
Now we've seen that re.search() will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
###Code
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern, text)
type(match)
###Output
_____no_output_____
###Markdown
This **Match** object returned by the search() method is more than just a Boolean or None; it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
###Code
# Show start of match
match.start()
# Show end
match.end()
###Output
_____no_output_____
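###Markdown
Beyond the start and end positions shown above, the Match object also carries the matched text, the original input string, and the compiled pattern that produced it. For example:
###Code
# Inspect a few more attributes of the Match object
print(match.group())      # the matched text: 'term1'
print(match.string)       # the original input string that was searched
print(match.re.pattern)   # the pattern that was used: 'term1'
###Output
_____no_output_____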
###Markdown
Split with regular expressions: Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
###Code
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: [email protected]'
# Split the phrase
re.split(split_term,phrase)
###Output
_____no_output_____
###Markdown
Note how re.split() returns a list with the term to split on removed, and the terms in the list are a split-up version of the string. Create a couple more examples for yourself to make sure you understand! Finding all instances of a pattern: You can use re.findall() to find all the instances of a pattern in a string. For example:
###Code
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
###Output
_____no_output_____
###Markdown
Pattern re Syntax: This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred. We can use *metacharacters* along with re to find specific types of patterns. Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
###Code
def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print 'Searching the phrase using the re check: %r' %pattern
print re.findall(pattern,phrase)
print '\n'
###Output
_____no_output_____
###Markdown
Repetition Syntax: There are five ways to express repetition in a pattern: 1.) A pattern followed by the meta-character * is repeated zero or more times. 2.) Replace the * with + and the pattern must appear at least once. 3.) Using ? means the pattern appears zero or one time. 4.) For a specific number of occurrences, use {m} after the pattern, where m is replaced with the number of times the pattern should repeat. 5.) Use {m,n} where m is the minimum number of repetitions and n is the maximum. Leaving out n ({m,}) means the value appears at least m times, with no maximum. Now we will see an example of each of these using our multi_re_find function:
###Code
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
###Output
Searching the phrase using the re check: 'sd*'
['sd', 'sd', 's', 's', 'sddd', 'sddd', 'sddd', 'sd', 's', 's', 's', 's', 's', 's', 'sdddd']
Searching the phrase using the re check: 'sd+'
['sd', 'sd', 'sddd', 'sddd', 'sddd', 'sd', 'sdddd']
Searching the phrase using the re check: 'sd?'
['sd', 'sd', 's', 's', 'sd', 'sd', 'sd', 'sd', 's', 's', 's', 's', 's', 's', 'sd']
Searching the phrase using the re check: 'sd{3}'
['sddd', 'sddd', 'sddd', 'sddd']
Searching the phrase using the re check: 'sd{2,3}'
['sddd', 'sddd', 'sddd', 'sddd']
###Markdown
Character Sets: Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either a or b. Let's see some examples:
###Code
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ '[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
###Output
Searching the phrase using the re check: '[sd]'
['s', 'd', 's', 'd', 's', 's', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 'd', 's', 'd', 's', 'd', 's', 's', 's', 's', 's', 's', 'd', 'd', 'd', 'd']
Searching the phrase using the re check: 's[sd]+'
['sdsd', 'sssddd', 'sdddsddd', 'sds', 'sssss', 'sdddd']
###Markdown
It makes sense that the first [sd] returns every instance. Also the second input will just return anything starting with an s in this particular case of the test phrase input. Exclusion: We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single character not in the brackets. Let's see some examples:
###Code
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
###Output
_____no_output_____
###Markdown
Use [^!.? ] to check for matches that are not a !, ., ?, or space. Add the + to check that the match appears at least once; this basically translates into finding the words.
###Code
re.findall('[^!.? ]+',test_phrase)
###Output
_____no_output_____
###Markdown
Character Ranges: As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end]. Common use cases are to search for a specific range of letters in the alphabet; for instance, [a-f] would return matches with any instance of letters between a and f. Let's walk through some examples:
###Code
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=[ '[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
###Output
Searching the phrase using the re check: '[a-z]+'
['his', 'is', 'an', 'example', 'sentence', 'ets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters']
Searching the phrase using the re check: '[A-Z]+'
['T', 'L']
Searching the phrase using the re check: '[a-zA-Z]+'
['This', 'is', 'an', 'example', 'sentence', 'Lets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters']
Searching the phrase using the re check: '[A-Z][a-z]+'
['This', 'Lets']
###Markdown
Escape Codes: You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:

| Code | Meaning |
| --- | --- |
| \d | a digit |
| \D | a non-digit |
| \s | whitespace (tab, space, newline, etc.) |
| \S | non-whitespace |
| \w | alphanumeric |
| \W | non-alphanumeric |

Escapes are indicated by prefixing the character with a backslash (\). Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with r, for creating regular expressions eliminates this problem and maintains readability. Personally, I think this use of r to escape a backslash is probably one of the things that blocks someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
###Code
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
###Output
Searching the phrase using the re check: '\\d+'
['1233']
Searching the phrase using the re check: '\\D+'
['This is a string with some numbers ', ' and a symbol #hashtag']
Searching the phrase using the re check: '\\s+'
[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
Searching the phrase using the re check: '\\S+'
['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', '#hashtag']
Searching the phrase using the re check: '\\w+'
['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', 'hashtag']
Searching the phrase using the re check: '\\W+'
[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' #']
|
SCanalyzer/census/Census Class Example.ipynb | ###Markdown
To-Do:
- fix file paths with os.join
- write the load-census function
- save output and enable cache feature like in Services
- create demographics default dictionaries from census codes
- check other gtfs files, general?
- enable a cache feature for the demographics pull as well
Questions:
- where should I pull the stops data from? zip? unzipped? pre-processed? [goes along with reusing Chang's code below]
- Is it ok that mine is a class? Thought it might be useful for people to poke around the class on their own if they wanted to while using the package?
Long-term To-do:
- make defaults vs user specified options
- redundant center and radius code? utilize Chang's code
- build in a way to specify how fine of features people want (blgr, tract, etc)
- build in 500 warning calls to API, back off to larger group area by default?
- way to get a geodataframe back from this, then do some processing, then combine later? In case they want to compute their own stats based off some columns?
###Code
from Census import Census
import matplotlib.pyplot as plt
import geopandas as gpd
from mpl_toolkits.axes_grid1 import make_axes_locatable
c = Census(
gtfs_filename="../../data/mmt_gtfs/stops.csv")
gdf_tracts = c.getCensusTracts()
gdf_tracts.boundary.plot()
plt.show()
demographic_data = c.getDemographicsData(gdf_tracts, demographics=['Race', 'Vehicles'], sample=True)
demographic_data.head(3)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad = -.5)
demographic_data.plot(column='% Black or African American alone', ax=ax, legend=True, cax=cax, cmap='viridis_r', alpha=.8, zorder=1)
fig = ax.figure
cb_ax = fig.axes[1]
cb_ax.tick_params(labelsize=14)
gdf_tracts.boundary.plot(color='lightblue', alpha=.5, ax=ax, zorder=0)
cax.set_ylabel("Percent Black", fontsize=16)
ax.set_facecolor("#e7e3e0")
ax.set_yticks([])
ax.set_xticks([])
fig.suptitle("Percent Black", fontsize=18, x=.55, y=.83)
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad = -.5)
demographic_data.plot(column='cars per capita', ax=ax, legend=True, cax=cax, cmap='viridis', alpha=.8, zorder=1)
fig = ax.figure
cb_ax = fig.axes[1]
cb_ax.tick_params(labelsize=14)
gdf_tracts.boundary.plot(color='lightblue', alpha=.5, ax=ax, zorder=0)
cax.set_ylabel("Cars per Capita", fontsize=16)
ax.set_facecolor("#e7e3e0")
ax.set_yticks([])
ax.set_xticks([])
fig.suptitle("Cars per Capita", fontsize=18, x=.55, y=.83)
plt.show()
###Output
_____no_output_____ |
Day3/notebooks/student_BBVI.ipynb | ###Markdown
Applying BBVI for a simple Gaussian Model
###Code
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
###Output
_____no_output_____
###Markdown
Data
###Code
# Generate data from a simple model: Normal(10, 1)
data = np.random.normal(loc = 10, scale = 1, size = 100)
###Output
_____no_output_____
###Markdown
Helper function: ELBO. Calculate the exact value of the ELBO. Generally one would have to estimate this using sampling, but for this simple model we can evaluate it exactly.
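Concretely, with prior $\mu \sim \mathcal{N}(0, \tau)$, likelihood $x_i \mid \mu \sim \mathcal{N}(\mu, 1)$ and variational posterior $q(\mu) = \mathcal{N}(q_\mu, 1)$, the ELBO decomposes as
$$\mathrm{ELBO} = \mathbb{E}_q[\log p(\mu)] + \sum_i \mathbb{E}_q[\log p(x_i \mid \mu)] + \mathbb{H}[q],$$
with
$$\mathbb{E}_q[\log p(\mu)] = -\tfrac{1}{2}\log(2\pi\tau) - \frac{1 + q_\mu^2}{2\tau}, \qquad \mathbb{E}_q[\log p(x_i \mid \mu)] = -\tfrac{1}{2}\log(2\pi) - \tfrac{1}{2}\big(x_i^2 - 2 x_i q_\mu + 1 + q_\mu^2\big), \qquad \mathbb{H}[q] = \tfrac{1}{2}\log(2\pi e).$$
(The helper below drops the constant $-\tfrac{1}{2}\log\tau$ term from the prior expectation, which does not affect the comparison of ELBO values across iterations.)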
###Code
def calculate_lower_bound(tau, q_mu):
"""
    Helper routine: calculate the ELBO exactly for this toy model.
    The sampled x-values are read from the global `data` array. The prior on mu is
    Normal(0, tau) and the variational posterior is Normal(q_mu, 1), i.e. the variational
    variance is fixed at 1, so the variational mean is the only free parameter.
    Note: this function only works when the model is as in this code challenge,
    and is not a general solution.
    :param tau: prior variance for mu, the mean of the data-generating distribution
    :param q_mu: current variational mean for mu
    :return: the ELBO
"""
# We calculate ELBO as E_q log p(x,mu) - E_q log q(mu)
# log p(x,z) here is log p(mu) + \sum_i log p(x_i | mu, 1)
# E_q log p(mu)
log_p = -.5 * np.log(2 * np.pi) - .5 * (1/tau) * (1 + q_mu**2)
# E_q log p(x_i|mu, 1)
for xi in data:
log_p += -.5 * np.log(2 * np.pi) - .5 * (xi * xi - 2 * xi * q_mu + 1 + q_mu**2)
# Entropy of mu (Gaussian)
entropy = .5 * np.log(2 * np.pi * np.exp(1))
return log_p + entropy
###Output
_____no_output_____
###Markdown
Manual estimation of the gradient of the ELBO for the above model
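The score-function (vanilla BBVI) estimator implemented below is
$$\widehat{\nabla}_\lambda \mathrm{ELBO} = \frac{1}{M} \sum_{m=1}^{M} \Big[\log p(x, \mu^{(m)}) - \log q(\mu^{(m)} \mid \lambda)\Big]\, \nabla_\lambda \log q(\mu^{(m)} \mid \lambda), \qquad \mu^{(m)} \sim q(\cdot \mid \lambda),$$
and since $q(\cdot \mid \lambda) = \mathcal{N}(\lambda, 1)$ here, the score term is simply $\nabla_\lambda \log q(\mu \mid \lambda) = \mu - \lambda$.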
###Code
# Gradient estimator using sampling -- vanilla BBVI
# We here assume the model X ~ Normal(mu, 1)
# with unknown mu, that in itself is Normal, mean 0 and standard deviation 1000,
# so effectively an uninformative prior.
# The variational distribution for mu is also Normal, with parameter q_mu_lambda
# -- taking the role of lambda in the calculations -- and variance 1.
def grad_estimate(q_mu_lambda, samples = 1):
# sum_grad_estimate will hold the sum as we move along over the <samples> samples.
sum_grad_estimate = 0
for i in range(samples):
# Sample one example from current best guess for the variational distribution
mu_sample = np.random.normal(loc=q_mu_lambda, scale=1, size=1)
# Now we want to calculate the contribution from this sample, namely
# [log p(x, mu_sample) - log q(mu_sample|q_mu_lambda) ] * grad( log q(mu_sample|q_mu_lambda) )
#
# First log p(x|mu_sample) + log p(mu_sample) - log q(mu_sample|q_mu_lambda)
        # (Filled in from the model stated above: prior mu ~ Normal(0, sd=1000),
        #  variational q(mu) = Normal(q_mu_lambda, sd=1).)
        value = (np.sum(norm.logpdf(data, loc=mu_sample, scale=1))
                 + norm.logpdf(mu_sample, loc=0, scale=1000)
                 - norm.logpdf(mu_sample, loc=q_mu_lambda, scale=1))
        # Next grad (log q(mu_sample|q_mu_lambda))
        # The Normal distribution gives the score function with known variance as <value> - <mean>
        grad_q = mu_sample - q_mu_lambda
# grad ELBO for this sample is therefore in total given by
sum_grad_estimate = sum_grad_estimate + grad_q * value
# Divide by number of samples to get average value -- the estimated expectation
return sum_grad_estimate/samples
###Output
_____no_output_____
###Markdown
Check effect of sample count
###Code
import time
no_loops = 500
sample_counts = [1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 40, 50]
elbos_for_sample_counts = {}
lr = 1E-4
for sample_count in sample_counts:
##### Starting point
q_mu = -10
start = time.time()
elbos = []
#loop a couple of times
for t in range(no_loops):
elbos.append(calculate_lower_bound(1000, q_mu))
q_grad = grad_estimate(q_mu, samples=sample_count)
        # (A decaying learning rate such as <start>/((1 + <t>/100)**1.5) could be used here; this version keeps lr fixed.)
q_mu = q_mu + lr * q_grad[0]
elbos_for_sample_counts[sample_count] = elbos
print("{:4d} sample(s) -- Estimate: {:9.5f}; -- Calc.time: {:5.2f} sec.".format(
sample_count, float(q_mu), time.time() - start))
###Output
1 sample(s) -- Estimate: 10.04000; -- Calc.time: 0.25 sec.
2 sample(s) -- Estimate: 9.63580; -- Calc.time: 0.38 sec.
3 sample(s) -- Estimate: 9.80948; -- Calc.time: 0.53 sec.
4 sample(s) -- Estimate: 9.91410; -- Calc.time: 0.66 sec.
5 sample(s) -- Estimate: 9.86913; -- Calc.time: 0.84 sec.
10 sample(s) -- Estimate: 9.78933; -- Calc.time: 1.52 sec.
15 sample(s) -- Estimate: 9.87243; -- Calc.time: 2.20 sec.
20 sample(s) -- Estimate: 9.86563; -- Calc.time: 2.86 sec.
25 sample(s) -- Estimate: 9.84137; -- Calc.time: 3.54 sec.
30 sample(s) -- Estimate: 9.79658; -- Calc.time: 4.46 sec.
40 sample(s) -- Estimate: 9.87294; -- Calc.time: 5.59 sec.
50 sample(s) -- Estimate: 9.88174; -- Calc.time: 6.96 sec.
200 sample(s) -- Estimate: 9.87364; -- Calc.time: 27.96 sec.
###Markdown
Plot the evolution of the ELBO
###Code
plt.xlabel('Number of iterations')
plt.ylabel('ELBO')
no_samples = 1
plt.plot(range(no_loops), elbos_for_sample_counts[no_samples])
###Output
_____no_output_____
###Markdown
Checking the variation in gradient estimate
###Code
# To check the variation / "unreliability" of the gradient estimate we repeat
# several times for the same lambda value and notice difference
# Location to check -- close to the data mean (at +10).
# The prior will move the variational optimium **slightly** away from the data mean,
# but due to the large prior variance of mu this should be a very limited effect.
# We should therefore expect a positive derivative (since we want to move
# q_mu_lambda towards the data mean, that is, **increase** it)
q_mu_lambda = 9
plt.figure(figsize=(8,6))
sns.set()
# Do with different sample sizes
for sample_count in [1, 2, 3, 4, 5, 10, 25]:
#loop
q_grad = []
for t in range(500):
q_grad.append(grad_estimate(q_mu_lambda, samples=sample_count))
sns.distplot(q_grad, hist=False, label="$M = {:d}$".format(sample_count))
# Report back
print("M = {:2d} sample(s) in BBVI -- Mean of gradient: {:7.3f}; Std.dev. of gradient: {:7.3f}".format(
sample_count, np.mean(q_grad), np.std(q_grad)))
plt.xlim([-500, 500])
plt.legend()
plt.savefig('BBVI-gradient-variance.eps')
plt.show()
###Output
_____no_output_____ |
Plotly/Create Gantt chart.ipynb | ###Markdown
Plotly - Create Gantt chart **Tags:** plotly chart gant project dataviz Learn more on the Plotly doc : https://plotly.com/python/gantt/ Input Import libraries
###Code
import plotly.express as px
import pandas as pd
###Output
_____no_output_____
###Markdown
Model Create the plot
###Code
df = pd.DataFrame([
dict(Task="Job A", Start='2009-01-01', Finish='2009-02-28'),
dict(Task="Job B", Start='2009-03-05', Finish='2009-04-15'),
dict(Task="Job C", Start='2009-02-20', Finish='2009-05-30')
])
fig = px.timeline(df, x_start="Start", x_end="Finish", y="Task")
fig.update_yaxes(autorange="reversed") # otherwise tasks are listed from the bottom up
###Output
_____no_output_____
###Markdown
Output Display result
###Code
fig.show()
###Output
_____no_output_____
examples/text/text_summarization_with_bart.ipynb | ###Markdown
Summarizing Documents With *ktrain*: *ktrain* includes the ability to summarize text based on a pretrained [BART](https://arxiv.org/abs/1910.13461) model from the `transformers` library. To perform summarization, first create a `TransformerSummarizer` instance as follows. (Note that this feature requires PyTorch to be installed on your system.)
###Code
from ktrain import text
ts = text.TransformerSummarizer()
###Output
_____no_output_____
###Markdown
Next, let's create a long document about planetary probes. The text is taken from a post in the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
###Code
sample_doc = """Archive-name: space/new_probes
Last-modified: $Date: 93/04/01 14:39:17 $
UPCOMING PLANETARY PROBES - MISSIONS AND SCHEDULES
Information on upcoming or currently active missions not mentioned below
would be welcome. Sources: NASA fact sheets, Cassini Mission Design
team, ISAS/NASDA launch schedules, press kits.
ASUKA (ASTRO-D) - ISAS (Japan) X-ray astronomy satellite, launched into
Earth orbit on 2/20/93. Equipped with large-area wide-wavelength (1-20
Angstrom) X-ray telescope, X-ray CCD cameras, and imaging gas
scintillation proportional counters.
CASSINI - Saturn orbiter and Titan atmosphere probe. Cassini is a joint
NASA/ESA project designed to accomplish an exploration of the Saturnian
system with its Cassini Saturn Orbiter and Huygens Titan Probe. Cassini
is scheduled for launch aboard a Titan IV/Centaur in October of 1997.
After gravity assists of Venus, Earth and Jupiter in a VVEJGA
trajectory, the spacecraft will arrive at Saturn in June of 2004. Upon
arrival, the Cassini spacecraft performs several maneuvers to achieve an
orbit around Saturn. Near the end of this initial orbit, the Huygens
Probe separates from the Orbiter and descends through the atmosphere of
Titan. The Orbiter relays the Probe data to Earth for about 3 hours
while the Probe enters and traverses the cloudy atmosphere to the
surface. After the completion of the Probe mission, the Orbiter
continues touring the Saturnian system for three and a half years. Titan
synchronous orbit trajectories will allow about 35 flybys of Titan and
targeted flybys of Iapetus, Dione and Enceladus. The objectives of the
mission are threefold: conduct detailed studies of Saturn's atmosphere,
rings and magnetosphere; conduct close-up studies of Saturn's
satellites, and characterize Titan's atmosphere and surface.
One of the most intriguing aspects of Titan is the possibility that its
surface may be covered in part with lakes of liquid hydrocarbons that
result from photochemical processes in its upper atmosphere. These
hydrocarbons condense to form a global smog layer and eventually rain
down onto the surface. The Cassini orbiter will use onboard radar to
peer through Titan's clouds and determine if there is liquid on the
surface. Experiments aboard both the orbiter and the entry probe will
investigate the chemical processes that produce this unique atmosphere.
The Cassini mission is named for Jean Dominique Cassini (1625-1712), the
first director of the Paris Observatory, who discovered several of
Saturn's satellites and the major division in its rings. The Titan
atmospheric entry probe is named for the Dutch physicist Christiaan
Huygens (1629-1695), who discovered Titan and first described the true
nature of Saturn's rings.
Key Scheduled Dates for the Cassini Mission (VVEJGA Trajectory)
-------------------------------------------------------------
10/06/97 - Titan IV/Centaur Launch
04/21/98 - Venus 1 Gravity Assist
06/20/99 - Venus 2 Gravity Assist
08/16/99 - Earth Gravity Assist
12/30/00 - Jupiter Gravity Assist
06/25/04 - Saturn Arrival
01/09/05 - Titan Probe Release
01/30/05 - Titan Probe Entry
06/25/08 - End of Primary Mission
(Schedule last updated 7/22/92)
GALILEO - Jupiter orbiter and atmosphere probe, in transit. Has returned
the first resolved images of an asteroid, Gaspra, while in transit to
Jupiter. Efforts to unfurl the stuck High-Gain Antenna (HGA) have
essentially been abandoned. JPL has developed a backup plan using data
compression (JPEG-like for images, lossless compression for data from
the other instruments) which should allow the mission to achieve
approximately 70% of its original objectives.
Galileo Schedule
----------------
10/18/89 - Launch from Space Shuttle
02/09/90 - Venus Flyby
10/**/90 - Venus Data Playback
12/08/90 - 1st Earth Flyby
05/01/91 - High Gain Antenna Unfurled
07/91 - 06/92 - 1st Asteroid Belt Passage
10/29/91 - Asteroid Gaspra Flyby
12/08/92 - 2nd Earth Flyby
05/93 - 11/93 - 2nd Asteroid Belt Passage
08/28/93 - Asteroid Ida Flyby
07/02/95 - Probe Separation
07/09/95 - Orbiter Deflection Maneuver
12/95 - 10/97 - Orbital Tour of Jovian Moons
12/07/95 - Jupiter/Io Encounter
07/18/96 - Ganymede
09/28/96 - Ganymede
12/12/96 - Callisto
01/23/97 - Europa
02/28/97 - Ganymede
04/22/97 - Europa
05/31/97 - Europa
10/05/97 - Jupiter Magnetotail Exploration
HITEN - Japanese (ISAS) lunar probe launched 1/24/90. Has made
multiple lunar flybys. Released Hagoromo, a smaller satellite,
into lunar orbit. This mission made Japan the third nation to
orbit a satellite around the Moon.
MAGELLAN - Venus radar mapping mission. Has mapped almost the entire
surface at high resolution. Currently (4/93) collecting a global gravity
map.
MARS OBSERVER - Mars orbiter including 1.5 m/pixel resolution camera.
Launched 9/25/92 on a Titan III/TOS booster. MO is currently (4/93) in
transit to Mars, arriving on 8/24/93. Operations will start 11/93 for
one martian year (687 days).
TOPEX/Poseidon - Joint US/French Earth observing satellite, launched
8/10/92 on an Ariane 4 booster. The primary objective of the
TOPEX/POSEIDON project is to make precise and accurate global
observations of the sea level for several years, substantially
increasing understanding of global ocean dynamics. The satellite also
will increase understanding of how heat is transported in the ocean.
ULYSSES- European Space Agency probe to study the Sun from an orbit over
its poles. Launched in late 1990, it carries particles-and-fields
experiments (such as magnetometer, ion and electron collectors for
various energy ranges, plasma wave radio receivers, etc.) but no camera.
Since no human-built rocket is hefty enough to send Ulysses far out of
the ecliptic plane, it went to Jupiter instead, and stole energy from
that planet by sliding over Jupiter's north pole in a gravity-assist
manuver in February 1992. This bent its path into a solar orbit tilted
about 85 degrees to the ecliptic. It will pass over the Sun's south pole
in the summer of 1993. Its aphelion is 5.2 AU, and, surprisingly, its
perihelion is about 1.5 AU-- that's right, a solar-studies spacecraft
that's always further from the Sun than the Earth is!
While in Jupiter's neigborhood, Ulysses studied the magnetic and
radiation environment. For a short summary of these results, see
*Science*, V. 257, p. 1487-1489 (11 September 1992). For gory technical
detail, see the many articles in the same issue.
OTHER SPACE SCIENCE MISSIONS (note: this is based on a posting by Ron
Baalke in 11/89, with ISAS/NASDA information contributed by Yoshiro
Yamada ([email protected]). I'm attempting to track changes based
on updated shuttle manifests; corrections and updates are welcome.
1993 Missions
o ALEXIS [spring, Pegasus]
ALEXIS (Array of Low-Energy X-ray Imaging Sensors) is to perform
a wide-field sky survey in the "soft" (low-energy) X-ray
spectrum. It will scan the entire sky every six months to search
for variations in soft-X-ray emission from sources such as white
dwarfs, cataclysmic variable stars and flare stars. It will also
search nearby space for such exotic objects as isolated neutron
stars and gamma-ray bursters. ALEXIS is a project of Los Alamos
National Laboratory and is primarily a technology development
mission that uses astrophysical sources to demonstrate the
technology. Contact project investigator Jeffrey J Bloch
([email protected]) for more information.
o Wind [Aug, Delta II rocket]
Satellite to measure solar wind input to magnetosphere.
o Space Radar Lab [Sep, STS-60 SRL-01]
Gather radar images of Earth's surface.
o Total Ozone Mapping Spectrometer [Dec, Pegasus rocket]
Study of Stratospheric ozone.
o SFU (Space Flyer Unit) [ISAS]
Conducting space experiments and observations and this can be
recovered after it conducts the various scientific and
engineering experiments. SFU is to be launched by ISAS and
retrieved by the U.S. Space Shuttle on STS-68 in 1994.
1994
o Polar Auroral Plasma Physics [May, Delta II rocket]
June, measure solar wind and ions and gases surrounding the
Earth.
o IML-2 (STS) [NASDA, Jul 1994 IML-02]
International Microgravity Laboratory.
o ADEOS [NASDA]
Advanced Earth Observing Satellite.
o MUSES-B (Mu Space Engineering Satellite-B) [ISAS]
Conducting research on the precise mechanism of space structure
and in-space astronomical observations of electromagnetic waves.
1995
LUNAR-A [ISAS]
Elucidating the crust structure and thermal construction of the
moon's interior.
Proposed Missions:
o Advanced X-ray Astronomy Facility (AXAF)
Possible launch from shuttle in 1995, AXAF is a space
observatory with a high resolution telescope. It would orbit for
15 years and study the mysteries and fate of the universe.
o Earth Observing System (EOS)
Possible launch in 1997, 1 of 6 US orbiting space platforms to
provide long-term data (15 years) of Earth systems science
including planetary evolution.
o Mercury Observer
Possible 1997 launch.
o Lunar Observer
Possible 1997 launch, would be sent into a long-term lunar
orbit. The Observer, from 60 miles above the moon's poles, would
survey characteristics to provide a global context for the
results from the Apollo program.
o Space Infrared Telescope Facility
Possible launch by shuttle in 1999, this is the 4th element of
the Great Observatories program. A free-flying observatory with
a lifetime of 5 to 10 years, it would observe new comets and
other primitive bodies in the outer solar system, study cosmic
birth formation of galaxies, stars and planets and distant
infrared-emitting galaxies
o Mars Rover Sample Return (MRSR)
Robotics rover would return samples of Mars' atmosphere and
surface to Earch for analysis. Possible launch dates: 1996 for
imaging orbiter, 2001 for rover.
o Fire and Ice
Possible launch in 2001, will use a gravity assist flyby of
Earth in 2003, and use a final gravity assist from Jupiter in
2005, where the probe will split into its Fire and Ice
components: The Fire probe will journey into the Sun, taking
measurements of our star's upper atmosphere until it is
vaporized by the intense heat. The Ice probe will head out
towards Pluto, reaching the tiny world for study by 2016."""
###Output
_____no_output_____
###Markdown
Now, let's use our `TransformerSummarizer` instance to summarize the long document.
###Code
ts.summarize(sample_doc)
###Output
_____no_output_____
tensorflow_script_mode_training_and_serving/tensorflow_script_mode_training_and_serving.ipynb | ###Markdown
TensorFlow script mode training and model hosting: Script mode is a training script format for TensorFlow that lets you run TensorFlow training scripts on SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to the SageMaker training instances. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and runs your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model. Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). In addition, this notebook demonstrates how to perform real-time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container); the TensorFlow Serving container is the default inference method for script mode. For full documentation on the TensorFlow Serving container, see [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). --- Contents: 1. Set up the environment 2. Prepare the training data 3. Construct a script for distributed training 4. Create a training job with the TensorFlow Estimator 5. Deploy the trained model to an endpoint 6. Invoke the endpoint and run inference 7. Delete the endpoint --- 1. Set up the environment: Let's start by setting up the environment.
###Code
import os
import sagemaker
import matplotlib.pyplot as plt
import numpy as np
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
print('Current SageMaker Python sdk Version ={0}'.format(sagemaker.__version__))
###Output
_____no_output_____
###Markdown
Note: To train using Spot Instances, the SageMaker Python SDK version must be 1.37.2 or later. If the output above shows an earlier version, uncomment and run the cell below, restart the Jupyter kernel, and then re-run the cell above to confirm that the version has been updated. If the kernel is not restarted, the SageMaker SDK version update will not take effect.
###Code
# !pip install -U --quiet "sagemaker>=1.37.2"
###Output
_____no_output_____
###Markdown
2. Prepare the training data: The MNIST dataset has been loaded into the public S3 bucket ``sagemaker-sample-data-`` under the prefix ``tensorflow/mnist``. There are four ``.npy`` files under this prefix: * ``train_data.npy`` * ``eval_data.npy`` * ``train_labels.npy`` * ``eval_labels.npy``
###Code
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
###Output
_____no_output_____
###Markdown
3. Construct a script for distributed training: The training script in this tutorial is based on TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). It has been modified to handle the ``model_dir`` parameter passed in by SageMaker; this is an S3 path that can be used for data sharing during distributed training, for checkpointing, and for persisting the model. We have also added an argument-parsing function to handle training-related variables. At the end of the training job, we added a step that exports the trained model to the path stored in the environment variable ``SM_MODEL_DIR``, which always points to ``/opt/ml/model``. This is important because SageMaker uploads all the model artifacts in this folder to S3 when training finishes. The entire script is shown below.
###Code
!pygmentize 'mnist.py'
###Output
_____no_output_____
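###Markdown
Since the `pygmentize` output is not rendered in this copy of the notebook, the cell below sketches the two SageMaker-specific pieces described above (the argument-parsing function and the export to ``SM_MODEL_DIR``). It is an illustrative outline only, not the full `mnist.py`.
###Code
# Illustrative outline of the SageMaker-specific parts of a script-mode training script
import argparse
import os


def _parse_args():
    parser = argparse.ArgumentParser()
    # S3 path passed in by SageMaker; usable for checkpoints and data sharing in distributed training
    parser.add_argument('--model_dir', type=str)
    # Local path that SageMaker uploads to S3 at the end of training (always /opt/ml/model)
    parser.add_argument('--sm-model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    # Location of the 'training' channel data inside the container
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
    return parser.parse_known_args()


# args, _ = _parse_args()
# ... build and train the model on the data found under args.train ...
# Finally, save the trained model under args.sm_model_dir so that SageMaker
# uploads the model artifacts to S3 when the training job finishes.
###Output
_____no_output_____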
###Markdown
4. Create a training job with the TensorFlow Estimator: The `sagemaker.tensorflow.TensorFlow` estimator specifies the script-mode TensorFlow container, uploads the training/inference script to S3, and creates the SageMaker training job. Let's call out a few important parameters here. * `py_version` is set to `'py3'`, which indicates that this training script uses script mode, since legacy mode supports only Python 2. Python 2 will be deprecated soon, but you can still use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`. * `distributions` is used to configure distributed training, and is only needed when training across a cluster of instances or multiple GPUs. Here we use parameter servers as the distributed training scheme. SageMaker training jobs run on homogeneous clusters, and to improve parameter-server performance in the SageMaker setup a parameter server is run on every instance in the cluster, so you do not need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). Detailed documentation on how to configure `distributions` is available [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflowdistributed-training). To run with Spot Instances, add `train_max_run = 5000, train_use_spot_instances = 'True', train_max_wait = 10000,` on the line after `train_instance_type` in the `Estimator`.
###Code
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
train_instance_count=2,
train_instance_type='ml.p2.xlarge',
framework_version='1.14',
py_version='py3',
distributions={'parameter_server': {'enabled': True}})
###Output
_____no_output_____
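###Markdown
For reference, this is what the estimator above would look like with the managed Spot training parameters from the note in the previous section added (an illustrative variant; it is not executed here):
###Code
# Same estimator as above, configured for managed Spot training (illustrative)
mnist_spot_estimator = TensorFlow(entry_point='mnist.py',
                                  role=role,
                                  train_instance_count=2,
                                  train_instance_type='ml.p2.xlarge',
                                  train_max_run=5000,
                                  train_use_spot_instances=True,
                                  train_max_wait=10000,
                                  framework_version='1.14',
                                  py_version='py3',
                                  distributions={'parameter_server': {'enabled': True}})
###Output
_____no_output_____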
###Markdown
Run the training job with ``fit``: To start a training job, call `estimator.fit(training_data_uri)`. An S3 location is used as the input here. `fit` creates a default channel named `training`, which points to this S3 location; in the training script the training data can then be accessed from the location stored in `SM_CHANNEL_TRAINING`. `fit` also accepts several other types of input; for details, see the API documentation [here](https://sagemaker.readthedocs.io/en/stable/estimators.htmlsagemaker.estimator.EstimatorBase.fit). When training starts, the TensorFlow container runs mnist.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. In this example no hyperparameters are defined in the estimator, and `model_dir` defaults to `s3:///`, so the script execution looks like `python mnist.py --model_dir s3:///`. When training is complete, the training job uploads the saved model for TensorFlow Serving.
###Code
mnist_estimator.fit(training_data_uri)
###Output
_____no_output_____
###Markdown
5. Deploy the trained model to an endpoint: The `deploy()` method creates a SageMaker model, which is deployed to an endpoint to serve prediction requests in real time. Because we trained with script mode, the endpoint uses the TensorFlow Serving container. This serving container runs an implementation of a web server that is compatible with the SageMaker hosting protocol. The [Using your own inference code](https://docs.aws.amazon.com/ja_jp/sagemaker/latest/dg/your-algorithms-inference-main.html) documentation explains how SageMaker runs inference containers.
###Code
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
6. Invoke the endpoint and run inference: Let's download the training data and use it as input for inference. The input and output data formats correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, such as a simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data. In this example we use a `numpy` array as input, which is serialized into the simplified JSON format. In addition, TensorFlow Serving can process multiple items at once, as the following code shows. Detailed documentation on making predictions against a SageMaker endpoint using TensorFlow Serving is available [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rstmaking-predictions-against-a-sagemaker-endpoint). First, let's fetch the evaluation dataset.
###Code
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_data.npy eval_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_labels.npy eval_labels.npy
eval_data = np.load('eval_data.npy')
eval_labels = np.load('eval_labels.npy')
###Output
_____no_output_____
###Markdown
Set ``k = `` below to any number you like up to 9950 to choose which set of handwritten digits to evaluate. `predictor.predict(test_data)` runs inference on the selected digits; for each one the output shows `prediction is`: the predicted class and `label is`: the actual label, and if the two agree, `matched: True` is printed at the end of the line.
###Code
k = 1000 # choose your favorite number from 0 to 9950
test_data = eval_data[k:k+50]
test_data
for i in range(5):
for j in range(10):
plt.subplot(5, 10, 10* i + j+1)
plt.imshow(test_data[10 * i + j, :].reshape(28, 28), cmap='gray')
plt.title(10* i + j+1)
plt.tick_params(labelbottom=False, labelleft = False)
plt.subplots_adjust(wspace=0.2, hspace=1)
plt.show()
predictions = predictor.predict(test_data)
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = eval_labels[i+k]
print(' [{}]: prediction is {}, label is {}, matched: {}'.format(i+1, prediction, label, prediction == label))
###Output
_____no_output_____
###Markdown
7. Delete the endpoint: To avoid incurring unnecessary cost, delete the endpoint created above once you have finished experimenting.
###Code
sagemaker.Session().delete_endpoint(predictor.endpoint)
###Output
_____no_output_____
###Markdown
TensorFlow script mode training and serving: Script mode is a training script format for TensorFlow that lets you execute any TensorFlow training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model. Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/), and we show how easily you can train models on SageMaker using TensorFlow 1.x and TensorFlow 2.0 scripts with the SageMaker Python SDK. In addition, this notebook demonstrates how to perform real-time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). The TensorFlow Serving container is the default inference method for script mode. For full documentation on the TensorFlow Serving container, please visit [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). This lab goes through 3 parts: 1. Training the Model 2. Deploying and evaluating the Trained Model 3. Hyperparameter Optimization. Part 1: Training the Model. Set up the environment: Let's start by setting up the environment:
###Code
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
###Output
_____no_output_____
###Markdown
Training Data: The MNIST dataset has been loaded to the public S3 bucket ``sagemaker-sample-data-`` under the prefix ``tensorflow/mnist``. There are four ``.npy`` files under this prefix: * ``train_data.npy`` * ``eval_data.npy`` * ``train_labels.npy`` * ``eval_labels.npy``
###Code
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
###Output
_____no_output_____
###Markdown
Construct a script for distributed training: This tutorial's training script was adapted from TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). We have modified it to handle the ``model_dir`` parameter passed in by SageMaker. This is an S3 path which can be used for data sharing during distributed training and checkpointing and/or model persistence. We have also added an argument-parsing function to handle processing training-related variables. At the end of the training job we have added a step to export the trained model to the path stored in the environment variable ``SM_MODEL_DIR``, which always points to ``/opt/ml/model``. This is critical because SageMaker uploads all the model artifacts in this folder to S3 at the end of training. Here is the entire script, for both TF 1.x and TF 2.0:
###Code
# Tensorflow 1.x script
!pygmentize 'mnist.py'
# TensorFlow 2.0 script
!pygmentize 'mnist-2.py'
###Output
[37m# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.[39;49;00m
[37m#[39;49;00m
[37m# Licensed under the Apache License, Version 2.0 (the "License"). You[39;49;00m
[37m# may not use this file except in compliance with the License. A copy of[39;49;00m
[37m# the License is located at[39;49;00m
[37m#[39;49;00m
[37m# http://aws.amazon.com/apache2.0/[39;49;00m
[37m#[39;49;00m
[37m# or in the "license" file accompanying this file. This file is[39;49;00m
[37m# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF[39;49;00m
[37m# ANY KIND, either express or implied. See the License for the specific[39;49;00m
[37m# language governing permissions and limitations under the License.import tensorflow as tf[39;49;00m
[34mimport[39;49;00m [04m[36mtensorflow[39;49;00m [34mas[39;49;00m [04m[36mtf[39;49;00m
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mfrom[39;49;00m [04m[36mtensorflow.keras.optimizers[39;49;00m [34mimport[39;49;00m Adam
[34mdef[39;49;00m [32mmodel[39;49;00m(x_train, y_train, x_test, y_test, args):
[33m"""Generate a simple model"""[39;49;00m
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense([34m1024[39;49;00m, activation=tf.nn.relu),
tf.keras.layers.Dropout([34m0.4[39;49;00m),
tf.keras.layers.Dense([34m10[39;49;00m, activation=tf.nn.softmax)
])
model.compile(optimizer=Adam(learning_rate=args.learning_rate),
loss=[33m'[39;49;00m[33msparse_categorical_crossentropy[39;49;00m[33m'[39;49;00m,
metrics=[[33m'[39;49;00m[33maccuracy[39;49;00m[33m'[39;49;00m])
history = model.fit(x_train, y_train, verbose=[34m2[39;49;00m)
metrics = model.evaluate(x_test, y_test, verbose=[34m0[39;49;00m)
evals = {metric: value [34mfor[39;49;00m metric, value [35min[39;49;00m [36mzip[39;49;00m(model.metrics_names, metrics)}
[34mreturn[39;49;00m model, evals, history.history
[34mdef[39;49;00m [32m_load_training_data[39;49;00m(base_dir):
[33m"""Load MNIST training data"""[39;49;00m
x_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_data.npy[39;49;00m[33m'[39;49;00m))
y_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_train, y_train
[34mdef[39;49;00m [32m_load_testing_data[39;49;00m(base_dir):
[33m"""Load MNIST testing data"""[39;49;00m
x_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_data.npy[39;49;00m[33m'[39;49;00m))
y_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_test, y_test
[34mdef[39;49;00m [32m_parse_args[39;49;00m():
parser = argparse.ArgumentParser()
[37m# Data, model, and output directories[39;49;00m
[37m# model_dir is always passed in from SageMaker. By default this is a S3 path under the default bucket.[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--model_dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--sm-model-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_MODEL_DIR[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--train[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CHANNEL_TRAINING[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--hosts[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mlist[39;49;00m, default=json.loads(os.environ.get([33m'[39;49;00m[33mSM_HOSTS[39;49;00m[33m'[39;49;00m)))
parser.add_argument([33m'[39;49;00m[33m--current-host[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CURRENT_HOST[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--learning-rate[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mfloat[39;49;00m, help=[33m"[39;49;00m[33mLearning rate[39;49;00m[33m"[39;49;00m, default=[34m0.001[39;49;00m)
[34mreturn[39;49;00m parser.parse_known_args()
[34mif[39;49;00m [31m__name__[39;49;00m == [33m"[39;49;00m[33m__main__[39;49;00m[33m"[39;49;00m:
[34mprint[39;49;00m(f[33m'[39;49;00m[33mGPUs available to Tensorflow: {tf.config.experimental.list_physical_devices([39;49;00m[33m"[39;49;00m[33mGPU[39;49;00m[33m"[39;49;00m[33m)}[39;49;00m[33m'[39;49;00m)
args, unknown = _parse_args()
train_data, train_labels = _load_training_data(args.train)
eval_data, eval_labels = _load_testing_data(args.train)
mnist_classifier, eval_metrics, train_metrics = model(train_data, train_labels, eval_data, eval_labels, args)
[34mif[39;49;00m args.current_host == args.hosts[[34m0[39;49;00m]:
[37m# Print evaluation loss for HPO[39;49;00m
[34mfor[39;49;00m metric, values [35min[39;49;00m train_metrics.items():
[34mprint[39;49;00m(f[33m'[39;49;00m[33mtrain_{metric}: {values[-1]}[39;49;00m[33m'[39;49;00m)
[34mfor[39;49;00m metric, value [35min[39;49;00m eval_metrics.items():
[34mprint[39;49;00m(f[33m'[39;49;00m[33mEvaluation {metric}: {value}[39;49;00m[33m'[39;49;00m)
[37m# save model to an S3 directory with version number '00000001'[39;49;00m
mnist_classifier.save(os.path.join(args.sm_model_dir, [33m'[39;49;00m[33m000000001[39;49;00m[33m'[39;49;00m), [33m'[39;49;00m[33mmy_model.h5[39;49;00m[33m'[39;49;00m)
###Markdown
Create a training job using the `TensorFlow` estimatorThe `sagemaker.tensorflow.TensorFlow` estimator handles locating the script mode container, uploading your script to an S3 location, and creating a SageMaker training job. Let's call out a couple of important parameters here:* `py_version` is set to `'py3'` to indicate that we are using script mode since legacy mode supports only Python 2. Though Python 2 will be deprecated soon, you can use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`.* `distributions` is used to configure the distributed training setup. It's required only if you are doing distributed training either across a cluster of instances or across multiple GPUs. Here we are using parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter servers more performant in the SageMaker setup, we run a parameter server on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). You can find the full documentation on how to configure `distributions` [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#distributed-training).
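For reference, a Horovod-based setup would replace the parameter-server configuration with an MPI configuration. The snippet below is only an illustrative sketch: the instance count and `processes_per_host` value are assumptions, and the entry-point script would also have to be written with Horovod APIs.

```python
# Illustrative sketch only: Horovod (MPI) distributed training configuration.
from sagemaker.tensorflow import TensorFlow

horovod_estimator = TensorFlow(entry_point='mnist.py',  # would need a Horovod-aware script
                               role=role,
                               train_instance_count=2,
                               train_instance_type='ml.c5.2xlarge',
                               framework_version='1.14',
                               py_version='py3',
                               distributions={'mpi': {'enabled': True,
                                                      'processes_per_host': 1}})
```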
###Code
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
train_instance_count=2,
train_instance_type='ml.c5.2xlarge',
framework_version='1.14',
py_version='py3',
distributions={'parameter_server': {'enabled': True}})
###Output
_____no_output_____
###Markdown
You can also initiate an estimator to train with a TensorFlow 2.0 script. The only things that you will need to change are the script name and ``framework_version``.We'll include metric extraction from the CloudWatch logs of the training job. The TF 2.0 script was adapted to log train and eval loss and accuracies, and we'll set the expressions up to capture them all. This will be used in part 3 of the lab for hyperparameter optimization.
###Code
metric_definitions = [{'Name': 'train_loss',
'Regex': 'train_loss: ([0-9\\.]+)'},
{'Name': 'train_acc',
'Regex': 'train_accuracy: ([0-9\\.]+)'},
{'Name': 'eval_loss',
'Regex': 'Evaluation loss: ([0-9\\.]+)'},
{'Name': 'eval_acc',
'Regex': 'Evaluation accuracy: ([0-9\\.]+)'},
]
mnist_estimator2 = TensorFlow(entry_point='mnist-2.py',
role=role,
train_instance_count=1,
train_instance_type='ml.c5.2xlarge',
framework_version='2.0.0',
py_version='py3',
distributions={'parameter_server': {'enabled': True}},
metric_definitions=metric_definitions
)
###Output
_____no_output_____
###Markdown
Calling ``fit``To start a training job, we call `estimator.fit(training_data_uri)`.An S3 location is used here as the input. `fit` creates a default channel named `'training'`, which points to this S3 location. In the training script we can then access the training data from the location stored in `SM_CHANNEL_TRAINING`. `fit` accepts a couple of other types of input as well. See the API doc [here](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit) for details.When training starts, the TensorFlow container executes mnist.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to `s3:///`, so the script execution is as follows:```bashpython mnist.py --model_dir s3:///```When training is complete, the training job will upload the saved model for TensorFlow Serving.Training should take about **10 minutes**.
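As a quick illustration of those other input forms, the sketch below passes the same S3 location through an explicit channel dictionary and forwards a hyperparameter to the TF 2.0 script's `--learning-rate` argument. This cell is not part of the lab; the learning-rate value is an arbitrary assumption chosen only for illustration.

```python
# Sketch: explicit channel name plus a hyperparameter forwarded to mnist-2.py.
estimator_with_hp = TensorFlow(entry_point='mnist-2.py',
                               role=role,
                               train_instance_count=1,
                               train_instance_type='ml.c5.2xlarge',
                               framework_version='2.0.0',
                               py_version='py3',
                               hyperparameters={'learning-rate': 0.01})  # illustrative value
estimator_with_hp.fit({'training': training_data_uri})
```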
###Code
mnist_estimator.fit(training_data_uri, wait=False)
###Output
_____no_output_____
###Markdown
Calling fit to train a model with the TensorFlow 2.0 script.
###Code
mnist_estimator2.fit(training_data_uri, wait=False)
from time import sleep
while (mnist_estimator.latest_training_job.describe()['TrainingJobStatus'] == 'InProgress' or
mnist_estimator2.latest_training_job.describe()['TrainingJobStatus'] == 'InProgress'):
print('Training in progress...')
sleep(30)
print("Training finished. Status:\n"
f"\tTF 1: {mnist_estimator.latest_training_job.describe()['TrainingJobStatus']}\n"
f"\tTF 2: {mnist_estimator2.latest_training_job.describe()['TrainingJobStatus']}")
###Output
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training in progress...
Training finished. Status:
TF 1: Completed
TF 2: Completed
###Markdown
Part 2: Deploy the trained model to an endpointThe `deploy()` method creates a SageMaker model, which is then deployed to an endpoint to serve prediction requests in real time. We will use the TensorFlow Serving container for the endpoint, because we trained with script mode. This serving container runs an implementation of a web server that is compatible with the SageMaker hosting protocol. The [Using your own inference code](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-main.html) document explains how SageMaker runs inference containers.The 2 cells below deploy the TF 1.x and TF 2.0 models as service endpoints. Execute both cells; deployment should take about 10 minutes.
###Code
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.p3.2xlarge', wait=False)
###Output
_____no_output_____
###Markdown
Deploy the trained TensorFlow 2.0 model to an endpoint.
###Code
predictor2 = mnist_estimator2.deploy(initial_instance_count=1, instance_type='ml.p3.2xlarge', wait=True)
###Output
_____no_output_____
###Markdown
Invoke the endpointLet's download the test data and use that as input for inference.
###Code
import numpy as np
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_data.npy test_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_labels.npy test_labels.npy
test_data = np.load('test_data.npy')
test_labels = np.load('test_labels.npy')
###Output
_____no_output_____
###Markdown
The formats of the input and the output data correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data.In this example we are using a `numpy` array as input, which will be serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once, as you can see in the following code. You can find the complete documentation on how to make predictions against a TensorFlow Serving SageMaker endpoint [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#making-predictions-against-a-sagemaker-endpoint).
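For reference, the same request can also be made without the SDK predictor by posting the simplified JSON format directly to the endpoint with `boto3`. This is only a sketch and assumes the endpoint created above is still in service.

```python
# Sketch: invoking the TensorFlow Serving endpoint with the simplified JSON format.
import json
import boto3

runtime = boto3.client('sagemaker-runtime')
body = json.dumps({'instances': test_data[:2].tolist()})
response = runtime.invoke_endpoint(EndpointName=predictor.endpoint,
                                   ContentType='application/json',
                                   Body=body)
print(json.loads(response['Body'].read()))
```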
###Code
predictions = predictor.predict(test_data[:50])
errors = []
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = test_labels[i]
if (prediction != label):
errors.append(i)
print(f'{i}: prediction is {prediction}, label is {label}, matched: {prediction == label}')
###Output
_____no_output_____
###Markdown
So, the model made a few errors. Those were captured in the `errors` array, which we'll use to manually inspect what could be the problem. Examine the prediction result from the TensorFlow 2.0 model. The TF 2.0 model returns only the probabilities for each class, so we do some quick processing to determine the most probable class.
###Code
predictions2 = predictor2.predict(test_data[:50])
predictions2['classes'] = [np.argmax(x) for x in predictions2['predictions']]
errors2 = []
for i in range(0, 50):
prediction = predictions2['classes'][i]
label = test_labels[i]
if (prediction != label):
errors2.append(i)
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
###Output
_____no_output_____
###Markdown
Analyze Prediction errorsWe have collected the errors for both predictors, and some simple code can help us analyze them. We'll define a simple function to inspect MNIST images, in case our model makes prediction mistakes.
###Code
from PIL import Image
def plot(data):
data = data.reshape((28, 28))
gray_range = data.max() - data.min()
img_data = (((data - data.min()) / gray_range) * 255.).astype(np.uint8)
img = Image.fromarray(img_data)
return(img)
###Output
_____no_output_____
###Markdown
Then we use that function with the error labels to see what the problem could be. The code below shows the first error for each predictor.
###Code
error_imgs = [plot(test_data[i]) for i in errors]
error_imgs[0] if (len(errors) > 0) else None
error_imgs2 = [plot(test_data[i]) for i in errors2]  # use test_data; train_data is not defined in this notebook
error_imgs2[0] if (len(errors2) > 0) else None
###Output
_____no_output_____
###Markdown
Delete the endpointsLet's delete the endpoints we just created to prevent incurring any extra costs. We won't need them for the hyperparameter tuning.
###Code
sagemaker.Session().delete_endpoint(predictor.endpoint)
###Output
_____no_output_____
###Markdown
Delete the TensorFlow 2.0 endpoint as well.
###Code
sagemaker.Session().delete_endpoint(predictor2.endpoint)
###Output
_____no_output_____
###Markdown
Part 3: Hyperparameter Tuning*Note, with the default setting below, the hyperparameter tuning job can take about 40 minutes to complete.*Now we will set up the hyperparameter tuning job using the SageMaker Python SDK, following the steps below:* We'll reuse the TF 2.0 Estimator we defined before, but any estimator can be used, whether pretrained or not.* Define the ranges of the hyperparameters we plan to tune; in this example, we are tuning "learning_rate"* Define the objective metric for the tuning job to optimize* Create a hyperparameter tuner with the above settings, as well as the tuning resource configurations
###Code
import boto3
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
###Output
_____no_output_____
###Markdown
With our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning learning rate as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with values 0.01, 0.1, 0.15, or 0.2.* We'll also specify the objective metric that we'd like to tune and its definition. We will use `eval_loss` as the objective metric and set the `objective_type` to `'Minimize'`, so that hyperparameter tuning seeks to minimize the objective metric when searching for the best hyperparameter setting. By default, objective_type is set to 'maximize'.
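For illustration only, here is one example of each parameter type. The names and ranges below are assumptions and are not used by the training script; the actual tuning job below tunes only `learning_rate`.

```python
# Illustrative examples of the three hyperparameter types (not used by this lab).
illustrative_ranges = {
    'learning_rate': ContinuousParameter(0.001, 0.2),   # any real value in the range
    'epochs': IntegerParameter(1, 10),                   # any integer in the range
    'optimizer': CategoricalParameter(['sgd', 'adam']),  # one value from a discrete set
}
```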
###Code
hyperparameter_ranges = {'learning_rate': ContinuousParameter(0.001, 0.2)}
objective_metric_name = 'eval_loss'
objective_type = 'Minimize'
###Output
_____no_output_____
###Markdown
Now, we'll create a `HyperparameterTuner` object, to which we pass:- The TensorFlow estimator we created above- Our hyperparameter ranges- Objective metric name and definition- Tuning resource configurations such as Number of training jobs to run in total and how many training jobs can be run in parallel.
###Code
tuner = HyperparameterTuner(estimator=mnist_estimator2,
objective_metric_name=objective_metric_name,
hyperparameter_ranges=hyperparameter_ranges,
metric_definitions=metric_definitions,
max_jobs=8,
max_parallel_jobs=2,
objective_type=objective_type)
###Output
_____no_output_____
###Markdown
Launch hyperparameter tuning jobAnd finally, we can start our hyperparameter tuning job by calling `.fit()` and passing in the S3 path to our dataset.After the hyperparameter tuning job is created, you should be able to describe the tuning job to see its progress in the next step, and you can go to SageMaker console->Jobs to check the progress of the hyperparameter tuning job.
###Code
tuner.fit(training_data_uri)
###Output
_____no_output_____
###Markdown
Analyzing the Hyperparameter Tuning Results Let's inspect what's going on inside the tuning job.
###Code
analytics = tuner.analytics()
tuning = analytics.dataframe(force_refresh=True).set_index('TrainingJobName').sort_index()
while tuning[tuning.TrainingJobStatus == 'Completed'].shape[0] == 0:
print('Waiting for some job to finish...')
sleep(30)
tuning = analytics.dataframe(force_refresh=True).set_index('TrainingJobName').sort_index()
tuning
%matplotlib inline
points = tuning.dropna()[['learning_rate', 'FinalObjectiveValue']]
ax = points.plot.scatter('learning_rate', 'FinalObjectiveValue', figsize=(15, 8))
for k, v in points.iterrows():
ax.annotate(k[32:35], v)
###Output
_____no_output_____
###Markdown
TensorFlow Script Mode Training and Model Hosting Script mode is a training script format for TensorFlow that lets you run TensorFlow training scripts on SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring the script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes the training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model. Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). This notebook also shows how to run real-time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). The TensorFlow Serving container is the default inference method for script mode. Detailed documentation for the TensorFlow Serving container is available [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst).--- Contents 1. Set up the environment 2. Prepare the training data 3. Construct a script for distributed training 4. Create a training job using the TensorFlow Estimator 5. Deploy the trained model to an endpoint 6. Invoke the endpoint and run inference 7. Delete the endpoint --- 1. Set up the environment Let's start by setting up the environment.
###Code
import os
import sagemaker
import matplotlib.pyplot as plt
import numpy as np
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
print('Current SageMaker Python sdk Version ={0}'.format(sagemaker.__version__))
###Output
_____no_output_____
###Markdown
Note: To train using Spot instances, the SageMaker Python SDK version must be 1.37.2 or later. If the output above shows an earlier version, run the cell below, restart the Jupyter kernel, and then re-run the cell above to confirm that the version has been updated. If the kernel is not restarted, the SageMaker SDK version update will not take effect.
###Code
!pip install -U --quiet "sagemaker>=1.37.2"
###Output
_____no_output_____
###Markdown
2. Prepare the training data The MNIST dataset has been loaded to the public S3 bucket ``sagemaker-sample-data-`` under the prefix ``tensorflow/mnist``. There are four ``.npy`` files under this prefix:* ``train_data.npy``* ``eval_data.npy``* ``train_labels.npy``* ``eval_labels.npy``
###Code
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
###Output
_____no_output_____
###Markdown
3. Construct a script for distributed training The training script for this tutorial is based on TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). It has been modified to handle the ``model_dir`` parameter passed in by SageMaker. This is an S3 path that can be used for data sharing during distributed training, checkpointing, persistent model storage, and so on. An argument-parsing function has also been added to handle training-related variables. At the end of the training job, a step was added to export the trained model to the path stored in the environment variable ``SM_MODEL_DIR``, which always points to ``/opt/ml/model``. This is important because SageMaker uploads all model artifacts in this folder to S3 at the end of training. The entire script is as follows.
###Code
!pygmentize 'mnist.py'
###Output
_____no_output_____
###Markdown
4. Create a training job using the TensorFlow Estimator The `sagemaker.tensorflow.TensorFlow` estimator handles locating the script-mode TensorFlow container, uploading the training/inference scripts to S3, and creating the SageMaker training job. A few important parameters: * `py_version` is set to `'py3'`, which indicates that this training script uses script mode, since legacy mode supports only Python 2. Although Python 2 will be deprecated soon, you can still use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`. * `distributions` is used to configure the distributed training setup. It is needed only when doing distributed training across a cluster of instances or across multiple GPUs. Here we use parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter servers more performant in the SageMaker setup, a parameter server is run on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). Detailed documentation on how to configure `distributions` is available [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#distributed-training). To run on Spot instances, add the following code to the `Estimator` on the line after `train_instance_type`:```python train_max_run = 5000, train_use_spot_instances = 'True', train_max_wait = 10000,```
###Code
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
train_instance_count=2,
train_instance_type='ml.p2.xlarge',
framework_version='1.14',
py_version='py3',
distributions={'parameter_server': {'enabled': True}})
###Output
_____no_output_____
###Markdown
Run the training job with ``fit`` To start a training job, call `estimator.fit(training_data_uri)`. An S3 location is used as the input here. `fit` creates a default channel named `training`, which points to this S3 location. In the training script, the training data can be accessed from the location stored in `SM_CHANNEL_TRAINING`. `fit` also accepts several other types of input; see the API documentation [here](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit) for details. When training starts, the TensorFlow container executes mnist.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. In this example, no hyperparameters are defined in the estimator, so none are passed, and `model_dir` defaults to `s3:///`, so the script is executed as follows:```bashpython mnist.py --model_dir s3:///```When training is complete, the training job uploads the saved model for TensorFlow Serving.
###Code
mnist_estimator.fit(training_data_uri)
###Output
_____no_output_____
###Markdown
5. Deploy the trained model to an endpoint The `deploy()` method creates a SageMaker model, which is deployed to an endpoint to serve prediction requests in real time. Because we trained with script mode, the endpoint uses the TensorFlow Serving container. This serving container runs an implementation of a web server that is compatible with the SageMaker hosting protocol. The [Using your own inference code](https://docs.aws.amazon.com/ja_jp/sagemaker/latest/dg/your-algorithms-inference-main.html) document explains how SageMaker runs inference containers.
###Code
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
6. Invoke the endpoint and run inference Let's download the evaluation data and use it as input for inference. The formats of the input and output data correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data. In this example, a `numpy` array is used as input, which is serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once, as shown in the following code. Complete documentation on how to make predictions against a TensorFlow Serving SageMaker endpoint is available [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#making-predictions-against-a-sagemaker-endpoint). First, download the evaluation dataset.
###Code
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_data.npy eval_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/eval_labels.npy eval_labels.npy
eval_data = np.load('eval_data.npy')
eval_labels = np.load('eval_labels.npy')
###Output
_____no_output_____
###Markdown
Set ``k = `` below to any number of your choice up to 9950 to select the set of handwritten digits to evaluate. `predictor.predict(test_data)` runs inference; for each selected handwritten digit, `prediction is` shows the inferred result and `label is` shows the actual label value, and if the two match, `matched: True` is printed at the end.
###Code
k = 1000 # choose your favorite number from 0 to 9950
test_data = eval_data[k:k+50]
test_data
for i in range(5):
for j in range(10):
plt.subplot(5, 10, 10* i + j+1)
plt.imshow(test_data[10 * i + j, :].reshape(28, 28), cmap='gray')
plt.title(10* i + j+1)
plt.tick_params(labelbottom=False, labelleft = False)
plt.subplots_adjust(wspace=0.2, hspace=1)
plt.show()
predictions = predictor.predict(test_data)
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = eval_labels[i+k]
print(' [{}]: prediction is {}, label is {}, matched: {}'.format(i+1, prediction, label, prediction == label))
###Output
_____no_output_____
###Markdown
7. Delete the endpoint To avoid incurring extra costs, delete the endpoint created above once validation is finished.
###Code
sagemaker.Session().delete_endpoint(predictor.endpoint)
###Output
_____no_output_____ |
GradientDescent_For student admission using GPA_GRE and Rank.ipynb | ###Markdown
Implementing the Gradient Descent Algorithm with a Kaggle datasetHere we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(2.2,4.25)
plt.ylim(0.4,1.7)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
###Output
_____no_output_____
###Markdown
Reading and plotting the data
###Code
data = pd.read_csv('binary1.csv')
#data = data.drop(['rank'],axis = 1)
data['gre'] = data['gre'].div(500)
#print(data.head(5))
X = np.array(data[['gpa','gre','rank']])
y = np.array(data['admit'])
plot_points(X,y)
plt.show()
###Output
_____no_output_____
###Markdown
Implementing the basic functions- Sigmoid activation function$$\sigma(x) = \frac{1}{1+e^{-x}}$$- Output (prediction) formula$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$- Error function$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$- The function that updates the weights$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$$$ b \longrightarrow b + \alpha (y - \hat{y})$$
###Code
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Output (prediction) formula
def output_formula(x, weights, bias):
return sigmoid(np.dot(x, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x, weights, bias)
d_error = y - output
weights += learnrate * d_error * x
bias += learnrate * d_error
return weights, bias
###Output
_____no_output_____
###Markdown
Training functionThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
###Code
np.random.seed(44)
epochs = 1500
learnrate = 0.0015
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=5 / n_features**.5, size=n_features)
#print("we1 :",weights)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
#print(x,y)
weights, bias = update_weights(x, y, weights, bias, learnrate)
#print(weights)
#print ("we2 :", weights)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Converting the output (float) to boolean as it is a binary classification
# e.g. 0.95 --> True (= 1), 0.31 --> False (= 0)
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 10) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
###Output
_____no_output_____
###Markdown
Train the algorithm!When we run the function, we'll obtain the following:- 10 updates with the current training loss and accuracy- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.- A plot of the error function. Notice how it decreases as we go through more epochs.
###Code
train(X, y, epochs, learnrate, True)
###Output
========== Epoch 0 ==========
Train loss: 1.7782133179698283
Accuracy: 0.38
========== Epoch 150 ==========
Train loss: 0.5761960125181259
Accuracy: 0.7075
========== Epoch 300 ==========
Train loss: 0.5605467795448693
Accuracy: 0.73
========== Epoch 450 ==========
Train loss: 0.5539902847812229
Accuracy: 0.7225
========== Epoch 600 ==========
Train loss: 0.5505775226052368
Accuracy: 0.725
========== Epoch 750 ==========
Train loss: 0.5484544173366522
Accuracy: 0.7225
========== Epoch 900 ==========
Train loss: 0.5470052505494114
Accuracy: 0.7225
========== Epoch 1050 ==========
Train loss: 0.5459760310211995
Accuracy: 0.7225
========== Epoch 1200 ==========
Train loss: 0.5452326830386489
Accuracy: 0.7225
========== Epoch 1350 ==========
Train loss: 0.5446915073136894
Accuracy: 0.725
|
calibration.ipynb | ###Markdown
UR10 Calibration- Step-by-step introduction to `pybotics`- Simplified calibration of UR10 industrial collaborative robot Imports
###Code
# pybotics
from pybotics import Robot, KinematicChain, LinkConvention, RobotOptimizationMask, Tool, Frame
from pybotics.calibration import compute_absolute_errors
from pybotics.constants import TRANSFORM_MATRIX_SHAPE, POSITION_VECTOR_LENGTH, TRANSFORM_VECTOR_LENGTH
# essentials
import numpy as np
import pandas as pd
from copy import deepcopy
# optimization
import scipy.optimize
from sklearn.model_selection import train_test_split
# plotting
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
# printing options
np.set_printoptions(suppress=True)
pd.set_option('precision', 3)
###Output
_____no_output_____
###Markdown
Optimization Function- Computes the errors between a robot model and the measured positions (i.e., real life) - Requires a series of measured joint-position pairs - Joints = random robot configurations - Positions = measured real-world Cartesian positions with a laser tracker
###Code
def fitness_function(optimization_vector, robot, joints, positions):
"""Return absolute distance errors of a given robot model from corresponding joints and positions."""
# apply latest optimization vector
robot.apply_optimization_vector(optimization_vector)
# compute absolute errors
# errors = abs(calculated fk - actual position)
errors = compute_absolute_errors(robot, joints, positions)
return errors
###Output
_____no_output_____
###Markdown
World Frame- Describes the geometric transformation between our global reference frame and the base of the robot - Here: - Global reference frame = laser tracker's origin - Assumed to be well defined
###Code
world_frame = np.array(
[-0.910397, 0.413555, -0.012223, 2289.101393,
-0.413474, -0.910475, -0.008638, 36.666412,
-0.014701, -0.002810, 0.999888, -899.473355,
0, 0, 0, 1]
).reshape(TRANSFORM_MATRIX_SHAPE)
display(world_frame)
###Output
_____no_output_____
###Markdown
Tool Frame- Describes the geometric transformation between our robot flange and the tool centre point (TCP) - TCP = centre of spherically mounted retroreflector (SMR)- Derived from mechanical CAD measurements
###Code
tool_frame = np.array(
[1, 0, 0, -31.418407,
0, 1, 0, -119.737717,
0, 0, 1, 27.368691,
0, 0, 0, 1]
).reshape(TRANSFORM_MATRIX_SHAPE)
display(tool_frame)
###Output
_____no_output_____
###Markdown
Measured Tool Frame Poses- Calibrate the tool frame- Simple poses that just rotate the last three joints (i.e., robot wrist) - Minimize sources of error
###Code
tool_df = pd.read_csv('tool.csv')
display(tool_df)
# split df into inputs/outputs
tool_joints = tool_df[['joint_{}'.format(i) for i in range(6)]]
tool_positions = tool_df[['laser_{}'.format(i) for i in 'xyz']]
# pybotics uses radians
tool_joints = np.deg2rad(tool_joints)
###Output
_____no_output_____
###Markdown
Measured Robot Poses- Calibrate the robot kinematic chain (i.e., MDH parameters)- Many random robot configurations (i.e., joints) are measured
###Code
robot_df = pd.read_csv('ur10.csv')
display(robot_df.head())
display(robot_df.tail())
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# plot
ax.scatter(robot_df['laser_x'],robot_df['laser_y'],robot_df['laser_z'],label='Measured Positions');
ax.scatter(world_frame[0, -1],world_frame[1, -1],world_frame[2, -1],label='Robot Base');
ax.scatter(0,0,0,label='Laser Tracker');
ax.legend();
ax.set_aspect('equal');
###Output
_____no_output_____
###Markdown
- Bias towards region between laser tracker and robot base- Good enough for a demo! 😛
###Code
# split df into inputs/outputs
robot_joints = robot_df[['joint_{}'.format(i) for i in range(6)]]
robot_positions = robot_df[['laser_{}'.format(i) for i in 'xyz']]
# pybotics uses radians
robot_joints = np.deg2rad(robot_joints)
###Output
_____no_output_____
###Markdown
Split Testing and Training Data- Put aside a set of measures for validation after calibration- Prevent overfitting
###Code
train_robot_joints, \
test_robot_joints, \
train_robot_positions, \
test_robot_positions = train_test_split(robot_joints,
robot_positions,
test_size=0.3)
###Output
_____no_output_____
###Markdown
Nominal Model- Represents the manufacturer's intended mathematical model for the robot
###Code
# load UR10 kinematic chain (i.e., MDH parameters)
mdh = np.loadtxt('ur10-mdh.csv', delimiter=',')
kc = KinematicChain.from_array(mdh)
# construct nominal robot
nominal_robot = Robot(kc)
nominal_robot.world_frame = Frame(world_frame)
nominal_robot.tool = Tool(tool_frame)
display(pd.DataFrame(
nominal_robot.kinematic_chain.vector.reshape(nominal_robot.num_dof,-1),
columns=('alpha', 'a', 'theta', 'd')))
###Output
_____no_output_____
###Markdown
Initial Errors- What would the offline programming errors look like if we did not calibrate the robot?
###Code
nominal_errors = compute_absolute_errors(nominal_robot, test_robot_joints, test_robot_positions)
display(pd.Series(nominal_errors).describe())
plt.figure()
plt.hist(x=nominal_errors,
label='Nominal',
bins='auto');
plt.xlabel('Absolute Error [mm]');
plt.ylabel('Frequency');
plt.legend();
###Output
_____no_output_____
###Markdown
- Ouch! Pretty large errors!- Don't forget, these are real measurements of a real industrial (expensive) robot! Tool Calibration- Poorly designed and measured tooling often has a large error contribution- Simply screwing a tool to a robot isn't that accurate...
###Code
# create a robot copy for tool calibration
tool_calibration_robot = deepcopy(nominal_robot)
# we are only going to calibrate
# the tool centre point position (TCP) not orientation
tool_mask = [False] * TRANSFORM_VECTOR_LENGTH
for i in range(POSITION_VECTOR_LENGTH):
tool_mask[i] = True
tool_calibration_robot.optimization_mask \
= RobotOptimizationMask(world_frame=False,
kinematic_chain=False,
tool=tool_mask)
# scipy to the rescue!
# minimize the error of the tool_calibration_robot model
result = scipy.optimize.leastsq(func=fitness_function,
x0=tool_calibration_robot.optimization_vector,
args=(tool_calibration_robot,
tool_joints,
tool_positions))
# apply the solution vector to the model
tool_calibration_robot.apply_optimization_vector(result[0])
# optimized tool frame
display(tool_calibration_robot.tool.matrix)
# diff vs old frame
display(tool_calibration_robot.tool.matrix - tool_frame)
###Output
_____no_output_____
###Markdown
- Even with our well-designed tool and CAD measurements, the actual tool was off by a few mm Robot Calibration
###Code
# make a new robot copy
calibration_robot = deepcopy(tool_calibration_robot)
# init the optimization mask
mask = np.array(calibration_robot.kinematic_chain.optimization_mask) \
.reshape((calibration_robot.num_dof, -1))
mask[:] = True
# filter out a couple of redundant parameters
# outside the scope of the presentation
mask[1:3, -1] = False
display(mask)
# apply mask to robot
# filter out world/tool calibrations
calibration_robot.optimization_mask \
= RobotOptimizationMask(world_frame=False,
kinematic_chain=mask.ravel(),
tool=False)
# maxfev is set way lower than it should for a quick demo
result = scipy.optimize.leastsq(func=fitness_function,
x0=calibration_robot.optimization_vector,
maxfev=50,
args=(calibration_robot,
train_robot_joints,
train_robot_positions))
# apply the solution vector to the model
calibration_robot.apply_optimization_vector(result[0])
new_model = calibration_robot \
.kinematic_chain \
.vector \
.reshape(calibration_robot.num_dof,-1)
display(pd.DataFrame(new_model,
columns=('alpha', 'a', 'theta', 'd')))
old_model = nominal_robot \
.kinematic_chain \
.vector \
.reshape(nominal_robot.num_dof,-1)
display(pd.DataFrame(new_model - old_model,
columns=('delta alpha', 'delta a',
'delta theta', 'delta d')))
###Output
_____no_output_____
###Markdown
- Small errors here and there throughout the model Error Improvement
###Code
calibrated_errors = compute_absolute_errors(calibration_robot,
test_robot_joints,
test_robot_positions)
display(pd.Series(calibrated_errors).describe())
plt.figure()
plt.hist(x=[nominal_errors,calibrated_errors],
label=['Nominal', 'Calibrated'],
bins=10);
plt.xlabel('Absolute Error [mm]');
plt.ylabel('Frequency');
plt.legend();
###Output
_____no_output_____
###Markdown

###Code
# more zoom!
plt.figure()
plt.hist(x=calibrated_errors,
label='Calibrated',
bins='auto');
plt.xlabel('Absolute Error [mm]');
plt.ylabel('Frequency');
plt.legend();
###Output
_____no_output_____
###Markdown
Robot Calibration Nominal Robot- A nominal robot model: - Represents what the robot manufacturer intended as a kinematic model - Is mathematically ideal
###Code
from pybotics.robot import Robot
from pybotics.predefined_models import ur10
nominal_robot = Robot.from_parameters(ur10())
import pandas as pd
def display_robot_kinematics(robot: Robot):
df = pd.DataFrame(robot.kinematic_chain.matrix)
df.columns = ["alpha", "a", "theta", "d"]
display(df)
display_robot_kinematics(nominal_robot)
###Output
_____no_output_____
###Markdown
*Real* Robots- *Real* robots do not conform perfectly to the nominal parameters- Small errors in the robot model can generate large errors in Cartesian position- Sources of errors include, but are not limited to: - Kinematic errors - Mechanical tolerances - Angle offsets - Non-kinematic errors - Joint stiffness - Gravity - Temperature - Friction
###Code
import numpy as np
from copy import deepcopy
real_robot = deepcopy(nominal_robot)
# let's pretend our real robot has small joint offsets
# in real life, this would be a joint mastering issue (level-1 calibration)
# https://en.wikipedia.org/wiki/Robot_calibration
for link in real_robot.kinematic_chain.links:
link.theta += np.random.uniform(
low=np.deg2rad(-0.1),
high=np.deg2rad(0.1)
)
display_robot_kinematics(real_robot)
###Output
_____no_output_____
###Markdown
Get *Real* (aka Measured) Poses- In real life, these poses would be measured using metrology equipment (e.g., laser tracker, CMM)
###Code
joints = []
positions = []
for i in range(1000):
q = real_robot.random_joints()
pose = real_robot.fk(q)
joints.append(q)
positions.append(pose[:-1,-1])
pd.DataFrame(joints).describe()
pd.DataFrame(positions, columns=['x','y','z']).describe()
###Output
_____no_output_____
###Markdown
Split Calibration and Validation Measures- A portion of the measured configurations and positions should be set aside for validation after calibration (i.e., optimization) - This is to prevent/check the optimized model for overfitting
###Code
from sklearn.model_selection import train_test_split
split = train_test_split(joints, positions, test_size=0.3)
train_joints = split[0]
test_joints = split[1]
train_positions = split[2]
test_positions = split[3]
###Output
_____no_output_____
###Markdown
Get Nominal Position Errors- The nominal model is our starting point for calibration- The errors are in millimetres
###Code
from pybotics.optimization import compute_absolute_errors
nominal_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=nominal_robot
)
display(pd.Series(nominal_errors).describe())
###Output
_____no_output_____
###Markdown
Calibration
###Code
from pybotics.optimization import OptimizationHandler
# init calibration handler
handler = OptimizationHandler(nominal_robot)
# set handler to solve for theta parameters
kc_mask_matrix = np.zeros_like(nominal_robot.kinematic_chain.matrix, dtype=bool)
kc_mask_matrix[:,2] = True
display(kc_mask_matrix)
handler.kinematic_chain_mask = kc_mask_matrix.ravel()
from scipy.optimize import least_squares
from pybotics.optimization import optimize_accuracy
# run optimization
result = least_squares(
fun=optimize_accuracy,
x0=handler.generate_optimization_vector(),
args=(handler, train_joints, train_positions),
verbose=2
) # type: scipy.optimize.OptimizeResult
###Output
Iteration Total nfev Cost Cost reduction Step norm Optimality
0 1 4.1063e+02 4.23e+05
1 8 7.0118e+01 3.41e+02 1.08e-03 1.54e+05
2 9 2.2908e+01 4.72e+01 2.17e-03 7.89e+04
3 11 2.5290e-01 2.27e+01 1.08e-03 2.75e+03
4 14 8.2830e-02 1.70e-01 1.36e-04 2.62e+03
5 16 3.5286e-02 4.75e-02 6.78e-05 2.42e+03
6 17 2.1245e-03 3.32e-02 6.78e-05 6.68e+02
7 19 1.5761e-03 5.48e-04 3.39e-05 6.40e+02
8 21 3.1752e-04 1.26e-03 8.47e-06 2.27e+02
9 23 1.2419e-04 1.93e-04 4.24e-06 1.75e+02
10 24 8.5791e-07 1.23e-04 4.24e-06 1.46e+01
11 27 1.3862e-07 7.19e-07 5.30e-07 6.12e+00
12 29 5.9500e-08 7.91e-08 2.65e-07 4.38e+00
13 31 2.0220e-08 3.93e-08 6.62e-08 4.14e+00
14 32 1.0055e-09 1.92e-08 6.62e-08 8.57e-01
15 34 1.0055e-09 0.00e+00 0.00e+00 8.57e-01
`xtol` termination condition is satisfied.
Function evaluations 34, initial cost 4.1063e+02, final cost 1.0055e-09, first-order optimality 8.57e-01.
###Markdown
Results- A calibrated robot model is never perfect in real life - The goal is often to reduce the max error under a desired threshold
###Code
calibrated_robot = handler.robot
calibrated_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=calibrated_robot
)
display(pd.Series(calibrated_errors).describe())
import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale("log")
plt.hist(nominal_errors, color="C0", label="Nominal")
plt.hist(calibrated_errors, color="C1", label="Calibrated")
plt.legend()
plt.xlabel("Absolute Error [mm]")
plt.ylabel("Frequency")
###Output
_____no_output_____
###Markdown
Camera Calibration Coordinate System This camera calibration follows the coordinate system below.
x: World coordinate, longitudinal. Forward direction is positive.
y: World coordinate, lateral. Leftward direction is positive.
z: World coordinate, vertical. Upward direction is positive.
u: Image coordinate, lateral. Rightward direction is positive.
v: Image coordinate, vertical. Downward direction is positive.
Distortion Model This camera calibration takes only barrel distortion parameters $\left( \kappa_1, \kappa_2, \kappa_3, \kappa_4, \kappa_5, \kappa_6 \right)$ into account.
Let $\left( u, v \right)$ be a point in undistorted image. Then the undistorted point is normalized as follows.
$$ \bar{u} = \frac{u - cu}{f_u} $$
$$ \bar{v} = \frac{v - cv}{f_v} $$
$$ r^2 = \bar{u}^2 + \bar{v}^2 $$
The ratio of distortion with respect to radial radius from the image center $\left( cu, cv \right)$ is computed.
$$ ratio = \frac{1.0 + \kappa_1 r^2 + \kappa_2 r^4 + \kappa_3 r^6} {1.0 + \kappa_4 r^2 + \kappa_5 r^4 + \kappa_6 r^6} $$
The corresponding point in distorted image $\left( u_d, v_d \right)$ is computed as follows.
$$ u_d = f_u \cdot ratio \cdot \bar{u} + cu $$
$$ v_d = f_v \cdot ratio \cdot \bar{v} + cv $$ (A small numerical sketch of this distortion mapping is given after the initial-value assumptions below.) Homography Matrix The homography matrix is defined as follows.
$$ H = K \cdot P \cdot \begin{bmatrix} R & R\cdot t \end{bmatrix}$$
where
$$ K = \begin{bmatrix} f_u & 0 & cu \\ 0 & f_v & cv \\ 0 & 0 & 1 \end{bmatrix} $$
$$ P = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 0 & 0 \end{bmatrix} $$
$$ R = R_z R_y R_x $$
$$ t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} $$ Initial value The homography matrix is a 3x4 matrix in which intrinsic camera parameters and extrinsic camera parameters cannot be linearly separated. To solve for these parameters, initial values are computed under the following assumptions and then refined by non-linear optimization.
The assumptions are as follows.
- The camera image has no distortion, i.e. $\left( \kappa_n = 0 \right)$.
- $\left(cu, cv \right)$ is at the exact center of the image.
- The camera angle is small enough to apply the small-angle approximation.
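As referenced in the distortion-model section above, here is a minimal numerical sketch (a hypothetical helper, not part of this notebook's calibration code) of mapping an undistorted pixel to its distorted position:

```python
import numpy as np

def distort_point(u, v, fu, fv, cu, cv, k):
    """Hypothetical helper: apply the rational barrel-distortion model, k = (k1..k6)."""
    ubar = (u - cu) / fu
    vbar = (v - cv) / fv
    r2 = ubar ** 2 + vbar ** 2
    ratio = (1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3) / \
            (1.0 + k[3] * r2 + k[4] * r2 ** 2 + k[5] * r2 ** 3)
    return fu * ratio * ubar + cu, fv * ratio * vbar + cv
```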
###Code
"""
Direct Linear Transform for computing initial values.
"""
import sympy as sp
from IPython.display import display
roll, pitch, yaw = sp.symbols('roll pitch yaw')
tx, ty, tz = sp.symbols('tx ty tz')
fu, fv, cu, cv = sp.symbols('fu fv cu cv')
cosr = 1.0 #sp.cos(roll)
sinr = roll #sp.sin(roll)
Rx = sp.Matrix([[1, 0, 0, ],
[0, cosr, -sinr],
[0, sinr, cosr]])
# pitch
cosp = 1.0 #sp.cos(pitch)
sinp = pitch #sp.sin(pitch)
Ry = sp.Matrix([[cosp, 0, sinp],
[0, 1, 0],
[-sinp, 0, cosp]])
# yaw
cosy = 1.0 #sp.cos(yaw)
siny = yaw #sp.sin(yaw)
Rz = sp.Matrix([[cosy, -siny, 0],
[siny, cosy, 0],
[0, 0, 1]])
# translation
T = sp.Matrix([[1, 0, 0, tx],
[0, 1, 0, ty],
[0, 0, 1, tz]])
# intrinsic camera parameter matrix
K = sp.Matrix([[fu, 0, cu],
[0, fv, cv],
[0, 0, 1,]])
# permutation matrix
P = sp.Matrix([[0, -1, 0],
[0, 0, -1],
[1, 0, 0]])
# R = Rz*Ry*Rx
# print('rotation matrix with small angle approximation')
# display(R)
R = sp.Matrix([[1.0, -yaw, pitch],
[yaw, 1.0, -roll],
[-pitch, roll, 1.0]])
# print('rotation matrix with small angle approximation 2')
# display(R)
RT = R*T
H = K * P * RT
display(H)
#sp.latex(H)
H3 = H[:,3]
h3_dx = H3.diff(tx)
display(h3_dx)
h3_dy = H3.diff(ty)
display(h3_dy)
h3_dz = H3.diff(tz)
display(h3_dz)
###Output
_____no_output_____
###Markdown
From the approximated homography equation, initial guess of parameters can be obtained as follows.
$$ H = \begin{bmatrix} h_{00} & h_{01} & h_{02} & h_{03} \\ h_{10} & h_{11} & h_{12} & h_{13} \\ 1 & h_{21} & h_{22} & h_{23} \end{bmatrix} $$
$$ cu = 0.5 \cdot width $$
$$ cv = 0.5 \cdot height $$
$$ yaw = -h_{21} $$
$$ pitch = h_{22} $$
$$ fu = -\frac{h_{00} - cu}{yaw} $$
$$ fv = \frac{h_{10} - cv}{pitch} $$
$$ roll = \frac{h_{02} - cu \cdot pitch}{fu} $$
Note that the pitch and yaw angles must not be zero.
Translation vector can be obtained by the simultaneous equations
$$ \begin{bmatrix} cu - f_u \cdot yaw & -cu \cdot yaw - fu & cu \cdot pitch + f_u \cdot roll \\ cv + f_v \cdot pitch & -cv \cdot yaw - f_v \cdot roll & cv \cdot pitch - f_v \\ 1.0 & -yaw & pitch \end{bmatrix} \begin{bmatrix}t_x \\ t_y \\ t_z\end{bmatrix} = \begin{bmatrix}h_{03} \\ h_{13} \\ h_{23} \end{bmatrix} $$ Direct Linear Transform
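Before the exact symbolic formulation below, here is a minimal numpy sketch (a hypothetical helper, not the notebook's own code) of recovering these initial guesses from a 3x4 homography that has been scaled so that $h_{20} = 1$:

```python
import numpy as np

def initial_guess(H, width, height):
    """Hypothetical helper: extract initial camera parameters from a normalized 3x4 H."""
    cu, cv = 0.5 * width, 0.5 * height
    yaw = -H[2, 1]
    pitch = H[2, 2]                      # pitch and yaw must be non-zero
    fu = -(H[0, 0] - cu) / yaw
    fv = (H[1, 0] - cv) / pitch
    roll = (H[0, 2] - cu * pitch) / fu
    # solve the 3x3 linear system above for the translation vector t
    A = np.array([[cu - fu * yaw,   -cu * yaw - fu,        cu * pitch + fu * roll],
                  [cv + fv * pitch, -cv * yaw - fv * roll, cv * pitch - fv],
                  [1.0,             -yaw,                  pitch]])
    t = np.linalg.solve(A, H[:, 3])
    return {'cu': cu, 'cv': cv, 'fu': fu, 'fv': fv,
            'roll': roll, 'pitch': pitch, 'yaw': yaw, 't': t}
```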
###Code
import sympy as sp
from IPython.display import display
roll, pitch, yaw = sp.symbols('roll pitch yaw')
tx, ty, tz = sp.symbols('tx ty tz')
fu, fv, cu, cv = sp.symbols('fu fv cu cv')
cosr = sp.cos(roll)
sinr = sp.sin(roll)
Rx = sp.Matrix([[1, 0, 0, ],
[0, cosr, -sinr],
[0, sinr, cosr]])
# pitch
cosp = sp.cos(pitch)
sinp = sp.sin(pitch)
Ry = sp.Matrix([[cosp, 0, sinp],
[0, 1, 0],
[-sinp, 0, cosp]])
# yaw
cosy = sp.cos(yaw)
siny = sp.sin(yaw)
Rz = sp.Matrix([[cosy, -siny, 0],
[siny, cosy, 0],
[0, 0, 1]])
# translation
T = sp.Matrix([[1, 0, 0, tx],
[0, 1, 0, ty],
[0, 0, 1, tz]])
# intrinsic camera parameter matrix
K = sp.Matrix([[fu, 0, cu],
[0, fv, cv],
[0, 0, 1,]])
# permutation matrix
P = sp.Matrix([[0, -1, 0],
[0, 0, -1],
[1, 0, 0]])
R = Rz*Ry*Rx
display(R)
RT = R*T
display(RT)
H = K * P * RT
display(H)
###Output
_____no_output_____
###Markdown
ObjectiveThis module aims to study the calibration plots of various classification models and compare the differences. Calibration PlotsThere are two standard ways to assess the accuracy of a predictive model for a binary response: *discrimination* and *calibration*. To assess discrimination, one uses the *ROC curve*.In contrast, calibration curves compare the predicted probability of the response to the empirical probability.**Calibration can help diagnose lack of fit.**In calibration we try to adjust the model so that the distribution and behavior of its predicted probabilities match the distribution and behavior of the probabilities observed in the training data.There are two methods of calibration:* Sigmoid/Platt calibration - fits a logistic regression to the original model's scores against the true (0 or 1) labels* Isotonic calibration - (also called isotonic regression) fits a piecewise-constant, monotonic function to the outputs of your original model instead.**Brier score** is a proper scoring function that measures the accuracy of probabilistic predictions. It is composed of refinement loss and calibration loss and is appropriate for binary and categorical outcomes.
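As a self-contained illustration (independent of the helper modules used in this notebook), the sketch below compares the Brier score of a raw classifier with an isotonic-calibrated version using scikit-learn. The synthetic data and the choice of Gaussian Naive Bayes are assumptions made only for the example.

```python
# Sketch: Brier score before/after isotonic calibration on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

raw = GaussianNB().fit(X_tr, y_tr)
iso = CalibratedClassifierCV(GaussianNB(), method='isotonic', cv=3).fit(X_tr, y_tr)

print('Brier score (raw):     ', brier_score_loss(y_te, raw.predict_proba(X_te)[:, 1]))
print('Brier score (isotonic):', brier_score_loss(y_te, iso.predict_proba(X_te)[:, 1]))
```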
###Code
from classifiers import Classifier
from dataloader import train_val_test_split_data
from calibration import calibration
# dividing the dataset into train, validation and test
x_train, x_val, _, y_train, y_val, _ = train_val_test_split_data(0.2)
###Output
_____no_output_____
###Markdown
Interpretation of the GraphThe fewer the bins, the smoother the plot. A well-calibrated model has a calibration curve close to y=x: ideally, the fraction of positives in each bin should rise in step with the mean predicted probability. * K Neighbors Classifier
###Code
model = Classifier()
clf = model.KNeighbors(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
_____no_output_____
###Markdown
* Guassian Naive Bayes Classifier
###Code
clf = model.Gaussian(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
_____no_output_____
###Markdown
* Random Forest Classifier
###Code
clf = model.Random_Forest(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
_____no_output_____
###Markdown
* SVM Classifier
###Code
clf = model.svm_classifier(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
_____no_output_____
###Markdown
* Decision Tree Classifier
###Code
clf = model.Decision_Tree(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
_____no_output_____
###Markdown
* Logistic Regression Classifier
###Code
clf = model.Logistic_Reg(x_train, y_train)
calibration(clf, x_train, y_train, x_val, y_val)
###Output
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/aditi/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:938: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
###Markdown
How to check your model
###Code
from sklearn import datasets
from sklearn.model_selection import train_test_split
X, y = datasets.make_classification(n_samples=100000, n_features=20, n_informative=7, n_redundant=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.99, random_state=42)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
lgr = LogisticRegression(C=1, solver="lbfgs")
svc = SVC(max_iter=10000, probability=True)
probs_lgr = lgr.fit(X_train, y_train).predict_proba(X_test)[:, 1]
preds_svc = svc.fit(X_train, y_train).predict(X_test)
probs_svc = svc.decision_function(X_test)
probs_svc = (probs_svc - probs_svc.min()) / (probs_svc.max() - probs_svc.min())
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(10,6))
sns.kdeplot(probs_lgr, label="Logistic Regression")
sns.kdeplot(probs_svc, label="SVM")
plt.title("Probability Density Plot for 2 classifiers")
plt.show()
###Output
_____no_output_____
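###Markdown
 As a quick numeric complement to the density plot above (a sketch reusing `y_test`, `probs_lgr`, and `probs_svc` from the previous cell), log loss and Brier score summarize how well each model's predicted probabilities match the labels.
###Code
from sklearn.metrics import log_loss, brier_score_loss
# lower is better for both metrics
for name, probs in [("Logistic Regression", probs_lgr), ("SVM", probs_svc)]:
    print(name, "log loss:", round(log_loss(y_test, probs), 4),
          "Brier:", round(brier_score_loss(y_test, probs), 4))
###Output
_____no_output_____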
###Markdown
The AUC-ROC curve for the two models.
###Code
from sklearn import metrics
plt.figure(figsize=(8, 6))
plt.plot([0, 1], [0, 1], '--r')
pred = probs_lgr
label = y_test
fpr, tpr, thresh = metrics.roc_curve(label, pred)
auc = metrics.roc_auc_score(label, pred)
plt.plot(fpr, tpr, label=f'Logistic regression, auc = {str(round(auc, 3))}')
pred = probs_svc  # use the scaled decision scores (not hard class predictions) so the ROC curve is smooth
fpr, tpr, thresh = metrics.roc_curve(label, pred)
auc = metrics.roc_auc_score(label, pred)
plt.plot(fpr, tpr, label=f'Support vector machine, auc = {str(round(auc, 3))}')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('AUC-ROC for two labels')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Calibration curve function to plot each of the models.
###Code
from sklearn.calibration import calibration_curve
def plot_calibration_curve(name, fig_index, probs):
fig = plt.figure(fig_index, figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
frac_of_pos, mean_pred_value = calibration_curve(y_test, probs, n_bins=10)
ax1.plot(mean_pred_value, frac_of_pos, "s-", label=f'{name}')
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title(f'Calibration plot ({name})')
ax2.hist(probs, range=(0, 1), bins=10, label=name, histtype="step", lw=2)
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
###Output
_____no_output_____
###Markdown
Logistic regression calibration curve.
###Code
plot_calibration_curve("Logistic regression", 1, probs_lgr)
###Output
_____no_output_____
###Markdown
SVM calibration curve.
###Code
plot_calibration_curve("SVM", 1, probs_svc)
###Output
_____no_output_____
###Markdown
CALIBRATING THE MODEL
###Code
from sklearn.calibration import CalibratedClassifierCV
lgr = LogisticRegression(C=1, solver="lbfgs")
svc = SVC(max_iter=10000, probability=True)
platts_scaling = CalibratedClassifierCV(svc, cv=2, method='sigmoid')
platts_scaling.fit(X_train, y_train)
calibrated_probs = platts_scaling.predict_proba(X_test)[:, 1]
plot_calibration_curve("SVM", 3, calibrated_probs)
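# Optional numeric check (a sketch): the Brier score should drop after Platt
# scaling if the calibration actually improved.
from sklearn.metrics import brier_score_loss
print("Brier before calibration:", brier_score_loss(y_test, probs_svc))
print("Brier after calibration: ", brier_score_loss(y_test, calibrated_probs))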
###Output
_____no_output_____
###Markdown
detect (earlier snippet kept for reference): `from IPython.core.debugger import set_trace`; `%debug`; `detections, corrected = ptv.py_detection_proc_c(cal_images, cpar, tpar, cals)`
###Code
print(tpar.get_grey_thresholds())
from optv.correspondences import correspondences, MatchedCoords
from optv.segmentation import target_recognition
detections, corrected = [],[]
for i_cam, img in enumerate(cal_images):
targs = target_recognition(img, tpar, i_cam, cpar)
targs.sort_y()
detections.append(targs)
mc = MatchedCoords(targs, cpar, cals[i_cam])
corrected.append(mc)
x = [[i.pos()[0] for i in row] for row in detections]
y = [[i.pos()[1] for i in row] for row in detections]
f = show_cal_images() # show images, overlay with dots
for i in range(n_cams):
f[i][1].scatter(x[i],y[i],marker='+',color='b')
def read_cal_block():
# type: () -> numpy.ndarray
return np.atleast_1d(np.loadtxt(calOriParams.fixp_name, dtype=[('id', 'i4'), ('pos', '3f8')], skiprows=0))
cal_points = read_cal_block()
print(cal_points[0], '...', cal_points[-1])
# project initial guess
f = show_cal_images()
for i_cam in range(n_cams):
x, y = [], []
for row in cal_points:
projected = image_coordinates(np.atleast_2d(row['pos']), \
cals[i_cam], cpar.get_multimedia_params())
pos = convert_arr_metric_to_pixel(projected, cpar)
x.append(pos[0][0])
y.append(pos[0][1])
f[i_cam][1].scatter(x,y,marker='+',color='yellow')
pos0 = cals[0].get_pos()
def initial_guess(i_cam=0, pos = pos0):
fig, ax = plt.subplots(figsize=(10,10))
# change the position
# pos = cals[i_cam].get_pos()
cals[i_cam].set_pos(pos)
# project and draw
x, y = [], []
for row in cal_points:
projected = image_coordinates(np.atleast_2d(row['pos']), \
cals[i_cam], cpar.get_multimedia_params())
pos = convert_arr_metric_to_pixel(projected, cpar)
x.append(pos[0][0])
y.append(pos[0][1])
# for i in range(cpar.get_num_cams()):
ax.imshow(cal_images[i_cam],cmap=plt.cm.gray)
ax.scatter(x,y,marker='+',color='yellow')
plt.show()
def edit_cal_parameters():
cp = pargui.Calib_Params(par_path=par_path)
cp.edit_traits(kind='modal')
# at the end of a modification, copy the parameters
par.copy_params_dir(par_path, active_path)
edit_cal_parameters()
# must read again the parameters
cpar, spar, vpar, track_par, tpar, cals, epar, Existing_Target,\
calOriParams = read_parameters()
calOriParams.fixp_name # test
###Output
inside CalHandler /Users/alex/Documents/OpenPTV/test_cavity/parameters
copy from /Users/alex/Documents/OpenPTV/test_cavity/parameters to /Users/alex/Documents/OpenPTV/test_cavity/parametersRun1
###Markdown
def _button_showimg_fired(self): print("Loading images/parameters \n") Initialize what is needed, copy necessary things copy parameters from active to default folder parameters/ par.copy_params_dir(self.active_path, self.par_path) read from parameters self.cpar, self.spar, self.vpar, self.track_par, self.tpar, \ self.cals, self.epar = ptv.py_start_proc_c(self.n_cams) self.tpar.read(b'parameters/detect_plate.par') print(self.tpar.get_grey_thresholds()) self.calParams = par.CalOriParams(self.n_cams, self.par_path) self.calParams.read() if self.epar.Combine_Flag is True : print("Combine Flag") self.MultiParams = par.MultiPlaneParams() self.MultiParams.read() for i in range(self.MultiParams.n_planes): print(self.MultiParams.plane_name[i]) self.pass_raw_orient = True self.status_text = "Multiplane calibration." self.reset_show_images() self.pass_init = True self.status_text = "Initialization finished."
###Code
# to click 4 points on every image to get the manual orientation
# the points are written in the man_ori array, see above
# the points in pixels are stored in man_ori.dat file that
# should appear in the working_folder and later copied to the
# parameters folder active_path for the future
man_ori_par = np.loadtxt(os.path.join(par_path, 'man_ori.par')).reshape((n_cams,-1)).astype(np.int)
class Click():
def __init__(self, ax, func, man_points, button = 1):
self.ax = ax
self.func = func
self.button = button
self.man_points = man_points
self.press = False
self.move = False
self.data = []
self.c1 = self.ax.figure.canvas.mpl_connect('button_press_event', self.onpress)
self.c2 = self.ax.figure.canvas.mpl_connect('button_release_event', self.onrelease)
self.c3 = self.ax.figure.canvas.mpl_connect('motion_notify_event', self.onmove)
def onclick(self,event):
if event.inaxes == self.ax:
if event.button == self.button:
man_point = self.man_points.pop(0)
self.func(event, self.ax, man_point)
self.data.append([man_point,event.xdata,event.ydata])
def onpress(self,event):
self.press = True
def onmove(self,event):
if self.press:
self.move = True
def onrelease(self,event):
if self.press and not self.move:
self.onclick(event)
self.press = False
self.move = False
def pick(event, ax, man_point):
# print(event.xdata, event.ydata, man_point)
ax.scatter(event.xdata, event.ydata)
ax.annotate(str(man_point),(event.xdata,event.ydata),color='r')
ax.figure.canvas.draw()
###Output
_____no_output_____
###Markdown
couldn't find a way to turn this into a loop; a commented sketch of a possible loop form is included at the end of the next cell
###Code
man_points = []
f = show_cal_images(0)
click = Click(f[0][1], pick, man_ori_par[0].tolist(), button = 1)
man_points.append(click.data)
f = show_cal_images(1)
click = Click(f[0][1], pick, man_ori_par[1].tolist(), button = 1)
man_points.append(click.data)
f = show_cal_images(2)
click = Click(f[0][1], pick, man_ori_par[2].tolist(), button = 1)
man_points.append(click.data)
f = show_cal_images(3)
click = Click(f[0][1], pick, man_ori_par[3].tolist(), button = 1)
man_points.append(click.data)
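# A possible loop form of the four blocks above (an untested sketch): in a plain
# Jupyter notebook the figures are only rendered after the cell finishes, so the
# clicks for each camera still have to happen one cell at a time; with an
# interactive backend (e.g. %matplotlib qt) something like this may work:
#
# man_points = []
# for i_cam in range(n_cams):
#     f = show_cal_images(i_cam)
#     click = Click(f[0][1], pick, man_ori_par[i_cam].tolist(), button=1)
#     man_points.append(click.data)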
man_points = np.array(man_points).reshape((-1,3))
print(man_points)
np.savetxt('new_man_ori.dat',man_points,fmt='%d %.2f %.2f')
tmp = np.loadtxt('new_man_ori.dat')
tmp
# orientation with file
# load the dots that are stored in the man_ori.dat (pixels)
# and their indices from the man_ori.par (index according to the calibration block text file)
# and plots over the image
man_ori_par = np.loadtxt(os.path.join(par_path, 'man_ori.par')).reshape((n_cams,-1)).astype(np.int)
man_ori_dat = np.loadtxt(os.path.join(working_folder, 'man_ori.dat')).reshape((n_cams,-1))
# or simply read the new_man_ori.dat that has both man_ori.par and man_ori.dat
#
# tmp = np.loadtxt('new_man_ori.dat')
# print(tmp)
f = show_cal_images()
for i in range(n_cams):
for j in range(4):
f[i][1].scatter(man_ori_dat[i][j*2],man_ori_dat[i][j*2+1],color='y',marker='x')
f[i][1].annotate(str(man_ori_par[i][j]),(man_ori_dat[i][j*2],man_ori_dat[i][j*2+1]),color='r')
# backup to the parameters folder
shutil.copyfile(os.path.join(working_folder, 'man_ori.dat'), os.path.join(active_path, 'man_ori.dat'))
f = show_cal_images()
sorted_targs = []
for i_cam in range(n_cams):
# if len(self.cal_points) > len(self.detections[i_cam]):
# raise ValueError("Insufficient detected points, need at least as"
# "many as fixed points")
targs = match_detection_to_ref(cals[i_cam], cal_points['pos'],
detections[i_cam], cpar)
x, y, pnr = [], [], []
for t in targs:
if t.pnr() != -999:
# pnr.append(cal_points['id'][t.pnr()])
# x.append(t.pos()[0])
# y.append(t.pos()[1])
f[i_cam][1].scatter(t.pos()[0], t.pos()[1],1,color='yellow')
f[i_cam][1].annotate(str(cal_points['id'][t.pnr()]),(t.pos()[0],t.pos()[1]),color='r')
sorted_targs.append(targs)
# ax[i_cam].plot_num_overlay(x, y, pnr) # <-- implement this one text over image
# ax[i_cam].scatter(x,y,1,color='yellow')
def backup_ori_files():
# backup ORI files
for f in calOriParams.img_ori[:n_cams]:
shutil.copyfile(f, f + '.bck')
g = f.replace('ori', 'addpar')
shutil.copyfile(g, g + '.bck')
def _write_ori(i_cam):
""" Writes ORI and ADDPAR files for a single calibration result
"""
ori = calOriParams.img_ori[i_cam]
addpar = ori.replace('ori', 'addpar')
print("Saving:", ori, addpar)
cals[i_cam].write(ori.encode(), addpar.encode())
if epar.Examine_Flag and not epar.Combine_Flag:
save_point_sets(i_cam)
def save_point_sets(i_cam):
"""
Saves detected and known calibration points in crd and fix format, respectively.
These files are needed for multiplane calibration.
"""
ori = calOriParams.img_ori[i_cam]
txt_detected = ori.replace('ori', 'crd')
txt_matched = ori.replace('ori', 'fix')
detected, known = [],[]
targs = sorted_targs[i_cam]
for i,t in enumerate(targs):
if t.pnr() != -999:
detected.append(t.pos())
known.append(cal_points['pos'][i])
nums = np.arange(len(detected))
# for pnr in nums:
# print(targs[pnr].pnr())
# print(targs[pnr].pos())
# detected[pnr] = targs[pnr].pos()
detected = np.hstack((nums[:,None], np.array(detected)))
known = np.hstack((nums[:,None], np.array(known)))
np.savetxt(txt_detected, detected, fmt="%9.5f")
np.savetxt(txt_matched, known, fmt="%10.5f")
def reproject_cal_points(i_cam=0):
# read cals again, after raw_orient
cal = Calibration()
tmp = cpar.get_cal_img_base_name(i_cam)
cal.from_file(tmp + b'.ori', tmp + b'.addpar')
cals[i_cam] = cal
# project initial guess
f = show_cal_images(i_cam)
x, y = [], []
for row in cal_points:
projected = image_coordinates(np.atleast_2d(row['pos']), \
cals[i_cam], cpar.get_multimedia_params())
pos = convert_arr_metric_to_pixel(projected, cpar)
x.append(pos[0][0])
y.append(pos[0][1])
f[0][1].scatter(x,y,marker='+',color='yellow')
# def raw_orient():
"""
update the external calibration with results of raw orientation, i.e.
the iterative process that adjust the initial guess' external
parameters (position and angle of cameras) without internal or
distortions.
See: https://github.com/openptv/openptv/liboptv/src/orientation.c#L591
"""
# backup the ORI/ADDPAR files first
backup_ori_files()
man_ori_dat = np.loadtxt(os.path.join(working_folder, 'man_ori.dat')).reshape((n_cams,-1))
# get manual points from cal_points and use ids from man_ori.par
for i_cam in range(n_cams):
selected_points = np.zeros((4, 3))
for i, cp_id in enumerate(cal_points['id']):
for j in range(4):
if cp_id == man_ori_par[i_cam][j]:
selected_points[j, :] = cal_points['pos'][i, :]
continue
# in pixels:
# manual_detection_points = np.array((camera[i_cam]._x, camera[i_cam]._y)).T
manual_detection_points = man_ori_dat[i_cam].reshape((-1,2)).T
print(selected_points)
print(manual_detection_points)
success = external_calibration(cals[i_cam], selected_points, \
manual_detection_points, cpar)
print(success)
if success is False:
print("Initial guess was not successful \n")
else:
reproject_cal_points(i_cam)
_write_ori(i_cam)
# fine orientation
scale = 5000
# backup the ORI/ADDPAR files first
backup_ori_files()
op = par.OrientParams()
op.read()
# recognized names for the flags:
names = ['cc', 'xh', 'yh', 'k1', 'k2', 'k3', 'p1', 'p2', 'scale', 'shear']
op_names = [op.cc, op.xh, op.yh, op.k1, op.k2, op.k3, op.p1, op.p2, op.scale, op.shear]
flags = []
for name, op_name in zip(names, op_names):
if (op_name == 1):
flags.append(name)
for i_cam in range(n_cams): # iterate over all cameras
if epar.Combine_Flag:
""" Performs multiplane calibration, in which for all cameras the
pre-processed planes in multi_plane.par combined.
Overwrites the ori and addpar files of the cameras specified
in cal_ori.par of the multiplane parameter folder
"""
all_known = []
all_detected = []
for i in range(MultiParams.n_planes): # combine all single planes
# c = self.calParams.img_ori[i_cam][-9] # Get camera id
c = re.findall('\\d+',calOriParams.img_ori[i_cam])[0] # not all ends with a number
file_known = MultiParams.plane_name[i]+c+'.tif.fix'
file_detected = MultiParams.plane_name[i]+c+'.tif.crd'
# Load calibration point information from plane i
try:
known = np.loadtxt(file_known)
detected = np.loadtxt(file_detected)
except:
raise IOError("reading {} or {} failed".format(file_known,file_detected))
if np.any(detected == -999):
raise ValueError(("Using undetected points in {} will cause " +
"silliness. Quitting.").format(file_detected))
num_known = len(known)
num_detect = len(detected)
if num_known != num_detect:
raise ValueError("Number of detected points (%d) does not match" +\
" number of known points (%d) for %s, %s" % \
(num_known, num_detect, file_known, file_detected))
if len(all_known) > 0:
detected[:,0] = all_detected[-1][-1,0] + 1 + np.arange(len(detected))
# Append to list of total known and detected points
all_known.append(known)
all_detected.append(detected)
# Make into the format needed for full_calibration.
all_known = np.vstack(all_known)[:,1:]
all_detected = np.vstack(all_detected)
# this is the main difference in the multiplane mode
# that we fill the targs and cal_points by the
# combined information
targs = TargetArray(len(all_detected))
for tix in range(len(all_detected)):
targ = targs[tix]
det = all_detected[tix]
targ.set_pnr(tix)
targ.set_pos(det[1:])
cal_points = np.empty((all_known.shape[0],)).astype(dtype=[('id', 'i4'), ('pos', '3f8')])
cal_points['pos'] = all_known
else:
targs = sorted_targs[i_cam]
try:
residuals, targ_ix, err_est = full_calibration(cals[i_cam], cal_points['pos'], \
targs, cpar, flags)
except:
raise ValueError("full calibration failed\n")
# save the results
_write_ori(i_cam)
# Plot the output
# self.reset_plots()
x, y = [], []
for r, t in zip(residuals, targ_ix):
if t != -999:
pos = targs[t].pos()
x.append(pos[0])
y.append(pos[1])
x, y = np.array(x), np.array(y)
f = show_cal_images(i_cam)
f[0][1].scatter(x, y, 1, color='orange')
f[0][1].quiver(x, y, scale*residuals[:len(x),0], scale*residuals[:len(x),1],color='red',width=0.0025,headaxislength=0)
###Output
Saving: cal/cam1.tif.ori cal/cam1.tif.addpar
Saving: cal/cam2.tif.ori cal/cam2.tif.addpar
Saving: cal/cam3.tif.ori cal/cam3.tif.addpar
###Markdown
obsolete version def orient(): backup ori files backup_ori_files() calibration ptv.py_calibration(10) get the output from the orientation x1, y1, x2, y2 = [], [], [], [] ptv.py_get_from_orient(x1, y1, x2, y2) for i in range(n_cams): ax.imshow(ori_img[i],cmap=plt.cm.gray) ax.drawquiver(x1[i], y1[i], x2[i], y2[i], "red") ax.scatter(x1, y1, color="orange", size=4)
###Code
def _button_edit_ori_files_fired(self):
editor = codeEditor(path=self.par_path)
editor.edit_traits(kind='livemodal')
###Output
_____no_output_____
###Markdown
protect ORI files (reference snippet): for f in calOriParams.img_ori[:n_cams]: with open(f, 'r') as d: d.read().split(); if not np.all(np.isfinite(np.asarray(d).astype('f'))): # if there is a NaN, for instance: print("protected ORI file %s " % f); shutil.copyfile(f + '.bck', f)
###Code
# restore ori files
def restore_ori_files(n_cams, calOriParams):
for f in calOriParams.img_ori[:n_cams]:
print('restored %s ' % f)
shutil.copyfile(f + '.bck', f)
g = f.replace('ori', 'addpar')
shutil.copyfile(g + '.bck', g)
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
from ipywidgets import FloatSlider
x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)
def update_x_range(*args):
x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')
def printer(x, y):
print(x, y)
interact(printer,x=x_widget, y=y_widget);
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
Geometric function $\theta = \dfrac{180}{\pi} \tan^{-1}\left(\dfrac{x + a}{s}\right) + \beta$$x$ and $a$ in the units of channels. $s$ is also in the units of channels and it rescales the detector range.$\beta[^\circ]$ is an angle.$\dfrac{\partial\theta}{\partial s} = -\dfrac{180}{\pi} \dfrac{a+x}{a^2+2ax+s^2+x^2}= -\dfrac{180}{\pi} \dfrac{a+x}{(a+x)^2+s^2}$$\dfrac{\partial\theta}{\partial a} = \dfrac{180}{\pi} \dfrac{s}{a^2+2ax+s^2+x^2} = \dfrac{180}{\pi} \dfrac{s}{(a+x)^2+s^2}$$\dfrac{\partial\theta}{\partial \beta} = 1$$\theta = \dfrac{180}{\pi}\left[ \tan^{-1}\left(\dfrac{x + a}{z\sin\left(\beta\right)}\right) + \beta\right]$here $\beta$ is in radians$\dfrac{\partial\theta}{\partial z} = -\dfrac{180}{\pi}\dfrac{(a + x) \csc(\beta)}{z^2 + (a + x)^2 \csc^2(\beta)}$$\dfrac{\partial\theta}{\partial a} = \dfrac{180}{\pi}\dfrac{z \csc(\beta)}{z^2 + (a + x)^2 \csc^2(\beta)}$$\dfrac{\partial\theta}{\partial \beta} = \dfrac{180}{\pi} \left[ 1 - \dfrac{z (a + x) \cot(\beta) \csc(\beta)}{z^2 + (a + x)^2 \csc^2(\beta)} \right]$
###Code
def fce_trig(x,a,b,s):
return (arctan((x+a)/s)) * 180 / pi + b
def fce_trigz(x,a,b,z):
t = z * sin(b)
return (arctan((x+a)/t) + b) * 180 / pi
def theta0(a,s):
return arctan(a/s)*180/pi
def thetam(a,s):
return arctan((a+1279)/s)*180/pi
def alpha(a,s,b):
return theta0(a,s)+b,thetam(a,s)+b
def dthetads(x,a,s):
return -180 / pi * (a+x)/((a+x)**2+s**2)
def dthetada(x,a,s):
return 180 / pi * (s)/((a+x)**2+s**2)
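# Sanity check (a sketch with placeholder values, not the fitted parameters):
# compare the analytic derivative dtheta/ds with a central finite difference.
eps = 1e-3
x_chk, a_chk, s_chk = 640.0, 100.0, 2000.0
numeric = (fce_trig(x_chk, a_chk, 0.0, s_chk + eps)
           - fce_trig(x_chk, a_chk, 0.0, s_chk - eps)) / (2 * eps)
print(numeric, dthetads(x_chk, a_chk, s_chk))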
z = loadtxt('calibration.ini',unpack=True)
x,y = z
opt_lin,var=curve_fit(fce_linear,x,y)
opt_quad,var=curve_fit(fce_quad,x,y)
opt_trip,var=curve_fit(fce_trip,x,y)
opt_trig,var=curve_fit(fce_trig,x,y)
opt_trigz,var=curve_fit(fce_trigz,x,y,p0=opt_trig)
a,b,s = opt_trig
_a,_b,z = opt_trigz
print(a,_a)
opt_trig,theta0(a,s),thetam(a,s),alpha(a,s,b)
opt_trigz
_x = arange(0,1280)
figure(figsize=(12,7))
plot(_x,fce_trip(_x,*opt_trip),':',label='trip')
plot(_x,fce_trig(_x,*opt_trig),'-',label='trig')
plot(_x,fce_trigz(_x,*opt_trigz),'-',label='trigz')
plot(x,y,'+',ms=32,label='calibration points')
xlim(0,1280)
legend(frameon=False)
x = linspace(-10000,10000,1000)
y = fce_trig(x,*opt_trig)
plot(x,y)
x = linspace(_x[0],_x[-1],1000)
y = fce_trig(x,*opt_trig)
plot(x,y,'k-',lw=2)
a,b,s = opt_trig
ylim(-90+b,90+b)
xlim(-10000,10000)
xlabel('channel')
ylabel(r'angle $\theta$')
a,b,s = opt_trig
for d in [-300,0,300]:
x = linspace(-10000,10000,1000)
y = fce_trig(x,a,b,s+d)
plot(x,y)
x = linspace(_x[0],_x[-1],1000)
y = fce_trig(x,a,b,s+d)
plot(x,y,'k-',lw=1)
a,b,s = opt_trig
ylim(10,70)
xlim(-200,1500)
xlabel('channel')
ylabel(r'angle $\theta$')
print(y[0],y[-1])
a,b,z = opt_trigz
for d in [-30,0,30]:
x = linspace(-10000,10000,1000)
y = fce_trigz(x,a,b,z+d)
plot(x,y)
x = linspace(_x[0],_x[-1],1000)
y = fce_trigz(x,a,b,z+d)
plot(x,y,'k-',lw=1)
ylim(14,54)
xlim(-200,1500)
xlabel('channel')
ylabel(r'angle $\theta$')
print(y[0],y[-1])
x = linspace(-10000,10000,20000)
y = dthetads(x, opt_trig[0], opt_trig[2])  # use the fitted (a, s); opt_trig is ordered (a, b, s)
plot(x,y)
x = linspace(_x[0],_x[-1],20000)
y = dthetads(x, opt_trig[0], opt_trig[2])
plot(x,y,'k-',lw=2)
xlim(-1000,2000)
a, s = opt_trig[0], opt_trig[2]  # opt_trig is ordered (a, b, s)
for d in [0,100,200]:
x = linspace(-10000,10000,20000)
y = dthetada(x,a,s+d)
plot(x,y)
x = linspace(_x[0],_x[-1],20000)
y = dthetada(x,a,s+d)
plot(x,y,'k-',lw=2)
xlim(-1000,2000)
###Output
_____no_output_____
###Markdown
Calibration. This notebook can be used to set the baseline parameters of the model. It generates the base parameter.json file that is used for all simulations. 1 Set the general simulation parameters
###Code
import json
import pandas as pd
from datetime import datetime, timedelta

i = 0
TIME = 350
AGENTS = 100000
CITY = ['cape_town'][i]
REGION = ['Western Cape'][i]
POPULATIONS2011 = [3740000]
POPULATIONS2019 = [4524000]
INITIAL_DAYS = 7 + 11 # i1 / i2 + c
###Output
_____no_output_____
###Markdown
Google mobility data, obtained from https://www.google.com/covid19/mobility/ on the 8th of February
###Code
mobility_data = pd.read_csv('general_data/2020_ZA_Region_Mobility_Report.csv')
mobility_data = mobility_data[mobility_data['sub_region_1'] == REGION]
mobility_data.index = [datetime.strptime(x, '%Y-%m-%d') for x in mobility_data['date']]
mobility_data = mobility_data[mobility_data.columns[9:]].astype(float)
###Output
_____no_output_____
###Markdown
Excess fatalities, obtained from https://www.samrc.ac.za/reports/report-weekly-deaths-south-africa on January 2nd 2021
###Code
ef_ct = pd.read_excel('general_data/Estimated deaths 1+ yrs for SA 30 Jan2021 with adj2.xlsx', sheet_name='Weekly excesses', header=2)
ef_ct['dates'] = ef_ct['Unnamed: 0']
ef_ct.index = ef_ct['dates']
cpt_weekly_excess_fatalities = ef_ct['CPT'].iloc[1:-2]
cpt_weekly_excess_fatalities = cpt_weekly_excess_fatalities[:39]
cpt_weekly_excess_fatalities.plot()
###Output
_____no_output_____
###Markdown
Transform to daily data:
###Code
daily_fatalities = []
days = []
for i, x in enumerate(cpt_weekly_excess_fatalities):
for y in range(7):
#print(y)
daily_fatalities.append(x / 7)
days.append(cpt_weekly_excess_fatalities.index[i] + timedelta(days= y))
excess_fatalities = pd.Series(daily_fatalities)
excess_fatalities.index = days
excess_fatalities.plot()
###Output
_____no_output_____
###Markdown
Combine fatalities and mobility
###Code
first_wave_previous_mobility = (100 + (mobility_data.mean(axis=1))).loc[:excess_fatalities.index[0]].iloc[-INITIAL_DAYS:]
first_wave_previous_mobility
mobility = (100 + (mobility_data.mean(axis=1))).loc[excess_fatalities.index[0]:excess_fatalities.index[-1]]
mobility_and_fatalities = pd.concat([mobility, excess_fatalities], axis=1)
mobility_and_fatalities.columns = ['mobility', 'fatalities']
mobility_and_fatalities['mobility'].plot()
mobility_and_fatalities['fatalities'].plot()
###Output
_____no_output_____
###Markdown
Determine the end of the first wave, and start of the second wave. The first wave ends when excess fatalities < 0
###Code
mobility_and_fatalities.index[0]
first_wave_end_date = mobility_and_fatalities.index[0]
for x in range(len(mobility_and_fatalities)):
if mobility_and_fatalities['fatalities'].loc[first_wave_end_date] > 0:
first_wave_end_date = first_wave_end_date + timedelta(days=1)
else:
break
first_wave_end_date
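# Equivalent pandas shortcut (a sketch; assumes a contiguous daily index and
# that fatalities stay positive until the wave ends, so the first non-positive
# day marks the end of wave 1):
# first_wave_end_date = (mobility_and_fatalities['fatalities'] <= 0).idxmax()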
###Output
_____no_output_____
###Markdown
The second wave begins when excess fatalities > 0
###Code
second_wave_start_date = mobility_and_fatalities.index[-1]
for x in range(len(mobility_and_fatalities), 0, -1):
if mobility_and_fatalities['fatalities'].loc[second_wave_start_date] > 0:
second_wave_start_date = second_wave_start_date - timedelta(days=1)
else:
break
second_wave_start_date
###Output
_____no_output_____
###Markdown
Split datasets
###Code
first_wave_data = mobility_and_fatalities.loc[:first_wave_end_date - timedelta(days=1)]
first_wave_data.plot()
second_wave_data = mobility_and_fatalities.loc[second_wave_start_date + timedelta(days=1):]
second_wave_previous_mobility = mobility_and_fatalities.loc[:second_wave_start_date].iloc[-INITIAL_DAYS:]['mobility']
second_wave_previous_mobility
second_wave_data.plot()
###Output
_____no_output_____
###Markdown
Age groups & health system capacity. The age groups are per decile.
###Code
age_groups = ['age_0_10', 'age_10_20', 'age_20_30', 'age_30_40', 'age_40_50',
'age_50_60', 'age_60_70', 'age_70_80', 'age_80_plus']
###Output
_____no_output_____
###Markdown
Health system capacity city
###Code
beds_cape_town = 0.0009179
###Output
_____no_output_____
###Markdown
Set base parameters
###Code
parameters = {
# Parameters related to model implementation
"time": len(mobility_and_fatalities),
"number_of_agents": AGENTS,
# COVID-19 parameters (9)
"exposed_days": 4, # (not changed) average number of days before being able to infect others (sources: NICD + CDC)
"asymptom_days": 7, # (used to be 10) average number of days agents are infected but do not have symptoms
"symptom_days": 7,# (used to be 10) average number of days agents with mild symptoms are infectious (NICD = 7, Balabdaoui and Mohr = 8, Huang et al=7)
"critical_days": 11, # (used to be 8) average number of days agents are in critical condition (Balabdaoui and Mohr = 8, NICD=8-19 (13.5), CDC=10-14 (12))
"probability_symptomatic": (1 - 0.6165), # (not changed) determines whether an agent will become asymptomatic or asymptomatic spreader
"no_hospital_multiplier": 1.79, # the increase in probability if a critical agent cannot go to the hospital SOURCE: Zhou et al. 2020
"probability_critical": {key:value for key, value in zip(age_groups, [0.001, 0.003, 0.012, 0.032, 0.049, 0.102, 0.166, 0.244, 0.273])}, # probability that an agent enters a critical stage of the disease SOURCE: Verity et al.
"probability_to_die": {key:value for key, value in zip(age_groups, [0.02090209, 0.032569361, 0.034233668, 0.052638239, 0.097470817, 0.155112718, 0.248512233, 0.306164902, 0.371187541])}, #used to be [0.005, 0.021, 0.053, 0.126, 0.221, 0.303, 0.565, 0.653, 0.765])}, probability to die per age group in critical stage SOURCE: Verity et al.
# Cape Town specific parameters
"health_system_capacity": beds_cape_town,
"stringency_index": [100 - x for x in mobility_and_fatalities['mobility']],
# uncertain parameters placeholders
"total_initial_infections": 100, # total agents infected in CT
"probability_transmission": 0.01610378740708691, # the probability that the virus is transmitted when two agents interact
"probability_multiplier_asymptomatic": 0.25,
# parameters used for comparing to data
'empirical_population': POPULATIONS2011[0], # specifies the population for the city that is modelled.
'empirical_fatalities': list(mobility_and_fatalities['fatalities']), #
# SABCoM parameters not used for the estimation
"visiting_recurring_contacts_multiplier": 0.0, # this disables the compliance feature
"probability_susceptible": (4000/1100000) / 246, # probability that the agent will again be susceptible after having recovered
'private_shock_stdev': 0.05, # the standard deviation for a truncated normal distribution shock that is part of the private signal for the deGroot learning used by the agents.
'weight_private_signal': 1.0,# 0.15, # the weight of the private signal vis à vis the social signal, used in the deGroot learning process.
'time_4_new_infections': -1, # -1 is never
'new_infections_scenario': 'None', # determines where the initial infections will be if either initial (infections will pop up in the same place as initially), or random (infections pop up in random districts). Alternatively, this parameter is None and then no second re-seeding will occur.
"informality_dummy": 1.0, # setting this parameter at 0 will mean the lockdown is equally effective anywhere, alternative = 1
'init_infected_agent': 0, # to calculate R0
"data_output": 'csv-light', # 'csv', 'csv-light' or 'network', or 'False'
"learning_scenario": None
}
print(parameters["probability_susceptible"])
###Output
1.4781966001478197e-05
###Markdown
Next, we update these parameters and store them in a .json file for both waves. Wave 1:
###Code
# update time
parameters['time'] = len(first_wave_data) + INITIAL_DAYS
# update mobility
parameters["stringency_index"] = [100 - x for x in list(first_wave_previous_mobility) + list(first_wave_data['mobility'])]
# update fatalities
parameters['empirical_fatalities'] = [0.0 for x in range(len(first_wave_previous_mobility))] + list(first_wave_data['fatalities'])
with open('{}/first_waveparameters.json'.format(CITY), 'w') as outfile:
json.dump(parameters, outfile)
###Output
_____no_output_____
###Markdown
Wave 2:
###Code
# update time
parameters['time'] = len(second_wave_data) + INITIAL_DAYS
# update mobility
parameters["stringency_index"] = [100 - x for x in list(second_wave_previous_mobility) + list(second_wave_data['mobility'])]
# update fatalities
parameters['empirical_fatalities'] = [0.0 for x in range(len(second_wave_previous_mobility))] + list(second_wave_data['fatalities'])
with open('{}/second_waveparameters.json'.format(CITY), 'w') as outfile:
json.dump(parameters, outfile)
###Output
_____no_output_____
###Markdown
Smooth
###Code
data = pd.read_csv('output_data/second_strain/seed0quantities_state_time.csv')
infections = data['i1'] + data['i2']
infections.plot()
# sim_dead_curve is assumed to be a cumulative fatalities series produced
# elsewhere from the simulation output; its daily differences are smoothed with an EWMA
sim_dead_curve.diff().ewm(span=10).mean()
###Output
_____no_output_____
###Markdown
Calibration. This notebook can be used to set the baseline parameters of the model. It generates the base parameter.json file that is used for all simulations. 1 Set the general simulation parameters
###Code
import json
import pandas as pd
from datetime import datetime, timedelta

i = 0
TIME = 350
AGENTS = 100000
CITY = ['cape_town', 'johannesburg'][i]
REGION = ['Western Cape', 'Gauteng'][i]
POPULATIONS2011 = [3740000, 4435000]
POPULATIONS2019 = [4524000, 5635127]
###Output
_____no_output_____
###Markdown
2 Set the start and end dates for the validation period
###Code
city_dates = ['2020-04-17', '2020-04-29']
START_DATE = datetime.strptime(city_dates[i], '%Y-%m-%d')
END_DATE = datetime.strptime('2020-08-10', '%Y-%m-%d')
###Output
_____no_output_____
###Markdown
3 Import data 3.1 Oxford Stringency Index
###Code
oxcgrt = pd.read_csv('general_data/OxCGRT_latest.csv')  # read the file once instead of twice
stringency_index = oxcgrt[oxcgrt['CountryCode'] == 'ZAF']
stringency_index.index = [datetime.strptime(str(x), '%Y%m%d') for x in stringency_index['Date']]
stringency_index = stringency_index['StringencyIndex']
lockdown_severeness = stringency_index.loc[START_DATE:END_DATE]
###Output
_____no_output_____
###Markdown
3.2 Google mobility data
###Code
mobility_data = pd.read_csv('general_data/Global_Mobility_Report_ZA.csv')
mobility_data = mobility_data[mobility_data['sub_region_1'] == REGION]
mobility_data.index = [datetime.strptime(x, '%Y-%m-%d') for x in mobility_data['date']]
mobility_data = mobility_data[mobility_data.columns[9:]].astype(float)
###Output
_____no_output_____
###Markdown
Excess fatalities
###Code
ef = pd.read_csv('general_data/excess_death_curves.csv')
ef_jhn = []
ef_ct = []
for name, l in zip(['excess_d_ct', 'excess_d_jhn'], [ef_ct, ef_jhn]):
# remove nan value
for x in ef[name].iloc[:117]:
if str(x) != 'nan':
l.append(float(x))
else:
l.append(0.0)
excess_fatalities = [ef_ct, ef_jhn]
###Output
_____no_output_____
###Markdown
3 Set policy parameters for the duration of the simulation. Policy parameters are input as a list that is as long as the simulation. This way they can change over the course of the simulation, in line with observed policy. The travel multiplier is set using the Google mobility data.
###Code
DATE = '2020-03-27'
travel_multiplier = list(1 + mobility_data.mean(axis=1).loc[DATE:DATE] / 100)[0]
travel_multiplier
###Output
_____no_output_____
###Markdown
4 Set initial infections and age groups. Next, we assume that 3% of infections were detected at the start of the simulation and translate this to the initial number of cases at the start of the simulation.
###Code
perc_infections_detects = 3
initial_agents = max(round((310 / (POPULATIONS2019[i] / AGENTS) * 100 / perc_infections_detects)), 20) # 310 reported cases / (population / agents), scaled up by the assumed detection rate (perc_infections_detects %)
initial_agents
###Output
_____no_output_____
###Markdown
The age groups are per decile.
###Code
age_groups = ['age_0_10', 'age_10_20', 'age_20_30', 'age_30_40', 'age_40_50',
'age_50_60', 'age_60_70', 'age_70_80', 'age_80_plus']
###Output
_____no_output_____
###Markdown
Health system capacity city
###Code
beds_joburg = 8750 / POPULATIONS2019[1]
beds_cape_town = 0.0009179
health_system_capacities = [beds_cape_town, beds_joburg]
health_system_capacities
###Output
_____no_output_____
###Markdown
5 Create the parameters
###Code
parameters = {
# Parameters related to model implementation
"time": TIME,
"number_of_agents": AGENTS,
# COVID-19 parameters (9)
"exposed_days": 4, # (not changed) average number of days before being able to infect others (sources: NICD + CDC)
"asymptom_days": 7, # (used to be 10) average number of days agents are infected but do not have symptoms
"symptom_days": 7,# (used to be 10) average number of days agents with mild symptoms are infectious (NICD = 7, Balabdaoui and Mohr = 8, Huang et al=7)
"critical_days": 11, # (used to be 8) average number of days agents are in critical condition (Balabdaoui and Mohr = 8, NICD=8-19 (13.5), CDC=10-14 (12))
"probability_symptomatic": (1 - 0.6165), # (not changed) determines whether an agent will become asymptomatic or asymptomatic spreader
"no_hospital_multiplier": 1.79, # the increase in probability if a critical agent cannot go to the hospital SOURCE: Zhou et al. 2020
"probability_transmission": 0.01610378740708691, # the probability that the virus is transmitted when two agents interact
"probability_critical": {key:value for key, value in zip(age_groups, [0.001, 0.003, 0.012, 0.032, 0.049, 0.102, 0.166, 0.244, 0.273])}, # probability that an agent enters a critical stage of the disease SOURCE: Verity et al.
"probability_to_die": {key:value for key, value in zip(age_groups, [0.02090209, 0.032569361, 0.034233668, 0.052638239, 0.097470817, 0.155112718, 0.248512233, 0.306164902, 0.371187541])}, #used to be [0.005, 0.021, 0.053, 0.126, 0.221, 0.303, 0.565, 0.653, 0.765])}, probability to die per age group in critical stage SOURCE: Verity et al.
# learning parameters
'private_shock_stdev': 0.05, # the standard deviation for a truncated normal distribution shock that is part of the private signal for the deGroot learning used by the agents.
'weight_private_signal': 0.15, # the weight of the private signal vis à vis the social signal, used in the deGroot learning process.
# Cape Town specific parameters (2)
"health_system_capacity": health_system_capacities[i],
"stringency_index": list(lockdown_severeness),
# Reducing travel e.g. by reducing it for work, school or all
"visiting_recurring_contacts_multiplier": travel_multiplier,#[travel_multiplier for x in range(0, TIME)], # based on travel data
# initial infections
"total_initial_infections": initial_agents, # total agents infected in CT
# optional parameters for second wave
'time_4_new_infections': -1, # -1 is never
'new_infections_scenario': 'None', # determines where the initial infections will be if either initial (infections will pop up in the same place as initially), or random (infections pop up in random districts). Alternatively, this parameter is None and then no second re-seeding will occur.
# additional parameter used to switch of informal districts
"informality_dummy": 1.0, # setting this parameter at 0 will mean the lockdown is equally effective anywhere, alternative = 1
# Technical parameters
'init_infected_agent': 0, # to calculate R0
"data_output": 'csv-light', # 'csv', 'csv-light' or 'network', or 'False'
# parameters used for comparing to data
'empirical_population': POPULATIONS2019[i], # specifies the population for the city that is modelled.
'empirical_fatalities': excess_fatalities[i], #
# Depreciated paramters (can be used later)
"probability_susceptible": 0.000, # probability that the agent will again be susceptible after having recovered
}
###Output
_____no_output_____
###Markdown
Next, we store these parameters in a .json file.
###Code
with open('{}/parameters.json'.format(CITY), 'w') as outfile:
json.dump(parameters, outfile)
# with open('config_{}.json'.format(CITY), 'w') as outfile:
# json.dump(parameters, outfile)
###Output
_____no_output_____
###Markdown
1. Detect the center of the ball on one image. Read and smooth an image
###Code
import os
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt

im = cv2.imread("images/color_0.png")
im = cv2.GaussianBlur(im, (7,7),0)
plt.imshow(im[:,:,::-1])
###Output
_____no_output_____
###Markdown
Load Region of Interest
###Code
with open("configs/roi.json") as f:
roi = json.load(f)
print(roi)
mask = np.zeros_like(im)
mask = cv2.rectangle(mask, (roi['x'], roi["y"]), (roi['x']+roi["width"], roi["y"]+roi["height"]), (255,255,255), -1)
im = cv2.bitwise_and(mask, im)
plt.imshow(im[:,:,::-1])
###Output
_____no_output_____
###Markdown
View hsv
###Code
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
plt.figure(figsize=(18,9))
for i in range(3):
plt.subplot(1,3,i+1)
cmap = plt.cm.gray if i != 0 else plt.cm.hsv
plt.imshow(hsv[:,:,i], cmap=cmap)
###Output
_____no_output_____
###Markdown
Filter red ball
###Code
saturation = hsv[...,1]
saturation[(hsv[...,0] > 15) & (hsv[...,0] < 165)]=0
plt.imshow(saturation, cmap=plt.cm.gray)
_, im1 = cv2.threshold(saturation, 92, 255, cv2.THRESH_BINARY)
plt.imshow(im1, cmap=plt.cm.gray)
###Output
_____no_output_____
###Markdown
Find the largest object
###Code
#in different version of opencv return of findContours is different.
#Please refer to the documentation for the right return values order
contours = cv2.findContours(im1, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[0]
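# Version-robust variant (a sketch): OpenCV 3.x returns (image, contours,
# hierarchy) while 2.4 and 4.x return (contours, hierarchy); indexing with [-2]
# picks the contours in all of these versions:
# contours = cv2.findContours(im1, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]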
contour = max(contours, key=cv2.contourArea)
b_circle = cv2.minEnclosingCircle(contour)
b_circle
b_circle = ((int(b_circle[0][0]),int(b_circle[0][1])), int(b_circle[1]))
cv2.circle(im1, b_circle[0], b_circle[1], (128,0,0), 5)
plt.imshow(im1)
###Output
_____no_output_____
###Markdown
Wrap the code above into a function
###Code
def get_center(image, mask):
"""Return center of the largest red ball
Keyword arguments:
image -- numpy array image in BGR color space
mask -- numpy array with region of interest. Should be the same shape as image. Use (255,255,255) for points in RoI and (0,0,0) for points outside
"""
im = cv2.GaussianBlur(image, (7,7), 0)
im = cv2.bitwise_and(mask, im)
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
saturation = hsv[...,1]
saturation[(hsv[...,0] > 15) & (hsv[...,0] < 165)] = 0
_, im1 = cv2.threshold(saturation, 92, 255, cv2.THRESH_BINARY)
contours = cv2.findContours(im1, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[0]
contour = max(contours, key=cv2.contourArea)
b_circle = cv2.minEnclosingCircle(contour)
return b_circle[0]
center = get_center(im, mask)
center
###Output
_____no_output_____
###Markdown
Transform from uv-depth coordinates to xyz coordinates. For additional details refer to https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
###Code
with open("configs/camera_matrix.json") as f:
camera_matrix = json.load(f)
camera_matrix = np.array(camera_matrix)
def get_world_coords(x,y, depth, camera_matrix=camera_matrix):
"""return physical coordinates in mm
Keyword arguments:
x, y -- coordinates of a point in pixels
depth -- depth coordiante of the same point
camera_matrix -- 3x3 matrix with focal lengthes and principial point"""
f = np.linalg.inv(camera_matrix)
v = np.array([x,y,1]) * depth
return np.dot(f,v)
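# Example call (the depth value of 850 mm is just a placeholder to illustrate
# the units; the real depth comes from the depth image used below).
get_world_coords(center[0], center[1], 850)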
###Output
_____no_output_____
###Markdown
2. Get coordinates for each calibration image
###Code
depth_image = cv2.imread("images/depth_0.png", cv2.IMREAD_UNCHANGED)
plt.figure(dpi=150)
plt.imshow(depth_image, cmap=plt.cm.gray)
def get_world_coords_n(n, mask=mask, base_path="images/"):
""" return xyz coordinates of the largest red ball for an image and depth number n
Keyword arguments:
n -- number of image in the folder
base_path -- path to the folder, conatining calibration images (default: images)
"""
i0 = cv2.imread(os.path.join(base_path, "color_{}.png".format(n)))
i1 = cv2.imread(os.path.join(base_path, "depth_{}.png".format(n)), cv2.IMREAD_UNCHANGED)
center = get_center(i0, mask)
depth = i1[int(center[1]), int(center[0])]
return get_world_coords(center[0], center[1], depth)
###Output
_____no_output_____
###Markdown
all xyz coordinates from camera
###Code
number_images = len(os.listdir("images")) //2
camera_coords = [get_world_coords_n(i) for i in range(number_images)]
camera_coords
###Output
_____no_output_____
###Markdown
all xyz coordinates from a robot
###Code
with open("positions/positions.txt") as f:
    positions = json.load(f)["positions"]
positions
robot_coords = np.zeros((len(positions), 3))
for i in range(len(positions)):
    robot_coords[i,0] = positions[i]["x"]
    robot_coords[i,1] = positions[i]["y"]
    robot_coords[i,2] = positions[i]["z"]
robot_coords
###Output
_____no_output_____
###Markdown
3. Compute the calibration. We need to make the geometry more precise, taking the ball radius into account
###Code
ball_radius = 24 #mm
###Output
_____no_output_____
###Markdown
the actual center of the ball is one ball_radius farther from the camera than the point we see on the image
###Code
for i in range(number_images):
d = np.linalg.norm(camera_coords[i])
camera_coords[i] = camera_coords[i]*(d+ball_radius)/d
camera_coords
###Output
_____no_output_____
###Markdown
the robot touches the ball one ball_radius above the real center
###Code
for i in range(number_images):
robot_coords[i,2] = robot_coords[i,2] - ball_radius
robot_coords
camera_coords = np.array(camera_coords).astype(np.float32)
robot_coords = np.array(robot_coords).astype(np.float32)
camera_coords, robot_coords
trans = cv2.estimateAffine3D(camera_coords, robot_coords)[1]
trans
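# Quick residual check (a sketch): apply the estimated affine transform to the
# camera points and compare with the measured robot points; the per-point
# errors (in mm) should be small if the calibration is consistent.
hom = np.hstack([camera_coords, np.ones((len(camera_coords), 1), dtype=np.float32)])
print(np.linalg.norm(hom @ trans.T - robot_coords, axis=1))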
###Output
_____no_output_____
###Markdown
if everything is correct, the determinant of the rotation part of the transform should be close to 1
###Code
np.linalg.det(trans[:,:3])
print(trans)
###Output
[[ 1.00241493e+00 -6.71763418e-02 2.11503712e-02 1.94405790e+02]
[-7.28918381e-02 -7.89421279e-01 6.44763676e-01 -3.07488567e+02]
[-1.12829863e-03 -6.29137365e-01 -7.53842715e-01 4.02667459e+02]]
|
ptsne-test.ipynb | ###Markdown
Notes from testing:- `hidden_layer_dims = [300,100]` works well- learned `alpha` for perplexity of 100 is about 1.8- learned `alpha` for perplexity of 50 is > 2
###Code
foo = ParametricTSNE(28*28, 2, 50, use_cuda=False, hidden_layer_dims=[300,100], alpha=2, seed=42) # use_cuda=True, alpha=1
foo.fit(testdata[:5000], batch_size=1000, epochs=30, learning_rate=0.01, pretrain=True, verbose=True, loss_func='kl')
p_precalc = foo.p_ij
# bar = foo(testdata[:20000].cuda()).cpu().detach().numpy()
bar = foo(testdata[:20000]).cpu().detach().numpy()
import matplotlib.patches as mpatches
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111)
colors = [plt.cm.tab10.colors[i] for i in mnist.targets[:20000]]
ax.scatter(bar[:,0],bar[:,1],c=colors, s=2)
ax.set_aspect(1)
recs = []
for i in range(0,10):
recs.append(mpatches.Rectangle((0,0),1,1,fc=plt.cm.tab10.colors[i]))
ax.legend(recs,list(range(10)),loc=2)
###Output
_____no_output_____
###Markdown
Notes from testing:- `hidden_layer_dims = [300,100]` works well- learned `alpha` for perplexity of 100 is about 1.8- learned `alpha` for perplexity of 50 is > 2
###Code
foo = ParametricTSNE(28*28, 2, 300, use_cuda=True, hidden_layer_dims=[300,100], alpha=1)
foo.fit(testdata[:20000], p_ij=p_precalc, batch_size=100, epochs=30, learning_rate=0.01, pretrain=True, verbose=True, loss_func='kl')
p_precalc = foo.p_ij
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
bar = foo(testdata[:20000].cuda()).cpu().detach().numpy()
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111)
colors = [plt.cm.tab10.colors[i] for i in mnist.targets[:20000]]
ax.scatter(bar[:,0],bar[:,1],c=colors, s=2)
ax.set_aspect(1)
recs = []
for i in range(0,10):
recs.append(mpatches.Rectangle((0,0),1,1,fc=plt.cm.tab10.colors[i]))
ax.legend(recs,list(range(10)),loc=2)
###Output
_____no_output_____ |
Homework_4_part2.ipynb | ###Markdown
Homework 4 - Find the duplicates! Davide Toma, Giacomo Lo Cascio, Musie Meressa. The 3 steps that we have to perform: 1. Convert the string containing the password to a (potentially large) number (**we created Convert_to_LargeNumber(), which converts the ASCII codes of a string to a large number**). 2. Use a hash function to map the number to a large range (**we use the Knuth multiplication hash from the book to hash the large number**). 3. Detect the duplicates (**here we use PySpark to identify the duplicates**). Our approach is to store the hash values into a file, one per password, and count each hash value as a word.
###Code
import math
from collections import Counter
def Convert_to_LargeNumber(ASCII):
R = 2675
prod = 1
for e in ASCII:
prod*=(R+2*e)
return(prod//2)
def HashFun(k):
w = 4294967295 # 2^32
A = 2654435769
r0 = k * A
p = 32
return ( r0 & w ) >> ( 32 - p ) # As stated in the book we use floor(m(kAmod1))
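# Small worked example (illustrative only): hash one sample password string
# using the two helpers defined above.
sample = [ord(char) for char in "hello123"]
print(HashFun(Convert_to_LargeNumber(sample)))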
passwords = open('passwords2.txt','r')
f = open('HashValue2.txt','w+')
for p in passwords:
list = [ord(char) for char in p.strip()]
f.write(str(HashFun(Convert_to_LargeNumber(list)))+"\n")
passwords.close()
f.close()
###Output
_____no_output_____
###Markdown
Read the HashValue using pyspark
###Code
HashValue = sc.textFile('HashValue2.txt')
###Output
_____no_output_____
###Markdown
We use the hash value as a word, and we count the number of occurrences of each hash value
###Code
OccurenceCount = HashValue.flatMap(lambda line: line.split(",")).map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b)
###Output
_____no_output_____
###Markdown
To identify the duplicates we filter the hash values that are counted to be more than 1.
###Code
Duplicates = OccurenceCount.filter(lambda x: x[1]>1).count()
Duplicates
###Output
_____no_output_____
###Markdown
We identify 10.9M duplicates in passwords2.txt. Since we know from the assignment text that 10M are expected, our hash function yields about 916,162 false positives. AABA != AAAB: in this case we convert the ASCII values of the characters sequentially, so two strings are identical only if they have the same characters in the same order. For hashing we use the same hash function as before, and finally we store the hashed numbers into a file. **The steps are the same as in the previous part.** We get 6.1M duplicates for the `AABA != AAAB` case.
###Code
from collections import Counter
def Convert_to_LargeNumber(ASCII):
R = 265
a = []
for e in ASCII:
a.append(e)
return (''.join(map(str,a))) # The two strings are indentical if their ascii are the same sequencially if not they are different
def HashFun(k):
w = 4294967295 # 4294967296 # 2^32
A = 2654435769
r0 = k * A
p = 32
return ( r0 & w ) >> ( 32 - p ) # As stated in the book we use floor(m(kAmod1))
passwords = open('passwords2.txt','r')
f = open('HashValue5.txt','w+')
for p in passwords:
list = [ord(char) for char in p.strip()]
f.write(str(HashFun(int(Convert_to_LargeNumber(list))))+"\n")
passwords.close()
f.close()
HashValue1 = sc.textFile('HashValue5.txt')
OccurenceCount = HashValue1.flatMap(lambda line: line.split(",")).map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b)
Duplicates1 = OccurenceCount.filter(lambda x: x[1]>1).count()
Duplicates1
###Output
_____no_output_____ |
code/chap07-mine.ipynb | ###Markdown
Modeling and Simulation in Python. Chapter 7: Thermal systems. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# tempo switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problem. I'll use a `State` object to store the initial temperature.
###Code
init = State(temp=90)
init
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t0=0,
t_end=30,
dt=1)
coffee
###Output
_____no_output_____
###Markdown
The `update` function implements Newton's law of cooling.
###Code
def update(state, system):
"""Update the thermal transfer model.
state: State (temp)
system: System object
returns: State (temp)
"""
unpack(system)
T = state.temp
T += -r * (T - T_env) * dt
return State(temp=T)
###Output
_____no_output_____
###Markdown
Here's how it works.
###Code
update(init, coffee)
###Output
_____no_output_____
###Markdown
Now we can run simulations using the same function from the previous chapter.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.loc[t0] = init
ts = linrange(t0, t_end-dt, dt)
for t in ts:
frame.loc[t+dt] = update_func(frame.loc[t], system)
system.results = frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
run_simulation(coffee, update)
coffee.results
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(coffee.results.temp, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
After running the simulation, we can extract the final temperature from the results.
###Code
def final_temp(system):
"""Final temperature.
If system has no results, return initial temp.
system: System object.
returns: temperature (degC)
"""
if hasattr(system, 'results'):
return system.results.temp[system.t_end]
else:
return system.init.temp
###Output
_____no_output_____
###Markdown
It will be convenient to wrap these steps in a function. `kwargs` is a collection of whatever keyword arguments are provided; they are passed along as arguments to `System`.
###Code
def make_system(T_init=90, r=0.01, volume=300, t_end=30):
"""Runs a simulation with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(temp=T_init)
system = System(init=init,
volume=volume,
r=r,
T_env=22,
t0=0,
t_end=t_end,
dt=1)
return system
###Output
_____no_output_____
###Markdown
Here's how we use it:
###Code
coffee = make_system()
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.
###Code
milk = make_system(5,.01,300,15)
run_simulation(milk, update)
plot(milk.results.temp, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Using `fsolve`As a simple example, let's find the roots of this function; that is, the values of `x` that make the result 0.
###Code
def func(x):
return (x-1) * (x-2) * (x-3)
###Output
_____no_output_____
###Markdown
`modsim.py` provides `fsolve`, which does some error-checking and then runs `scipy.optimize.fsolve`. The first argument is the function whose roots we want. The second argument is an initial guess.
###Code
fsolve(func, x0=0)
###Output
_____no_output_____
###Markdown
Usually the root we get is the one that's closest to the initial guess.
###Code
fsolve(func, 1.9)
fsolve(func, 2.9)
###Output
_____no_output_____
###Markdown
But not always.
###Code
fsolve(func, 1.5)
###Output
_____no_output_____
###Markdown
We want to find the value of `r` that makes the final temperature 70, so we define an "error function" that takes `r` as a parameter and returns the difference between the final temperature and the goal.
###Code
def error_func1(r):
"""Runs a simulation and returns the `error`.
r: heat transfer rate, in 1/min
returns: difference between final temp and 70 C
"""
system = make_system(r=r)
run_simulation(system, update)
print(r)
return final_temp(system) - 70
###Output
_____no_output_____
###Markdown
With `r=0.01`, we end up a little too warm.
###Code
error_func1(r=0.01)
###Output
0.01
###Markdown
The return value from `fsolve` is an array with a single element, the estimated value of `r`.
###Code
solution = fsolve(error_func1, 0.01, xtol=1e-8)
r_coffee = solution[0]
r_coffee
###Output
0.01
[ 0.01]
[ 0.01]
[ 0.01]
[ 0.01]
[ 0.01150871]
[ 0.01154231]
[ 0.01154308]
[ 0.01154308]
[ 0.01154308]
###Markdown
If we run the simulation with the estimated value of `r`, the final temperature is 70 C, as expected.
###Code
coffee = make_system(r=r_coffee)
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
**Exercise:** When you call `fsolve`, it calls `error_func1` several times. To see how this works, add a print statement to `error_func1` and run `fsolve` again. **Exercise:** Repeat this process to estimate `r_milk`, given that it starts at 5 C and reaches 20 C after 15 minutes. Before you use `fsolve`, you might want to try a few values for `r_milk` and see how close you can get by trial and error. Here's an initial guess to get you started:
###Code
r_milk = 0.1
milk = make_system(T_init=5, t_end=15, r=r_milk)
run_simulation(milk, update)
final_temp(milk)
solution = fsolve(error_func1, 0.01, xtol=1e-8)
r_milk = solution[0]
r_milk
r_milk = 0.011543084583978349
milk = make_system(T_init=5, t_end=15, r=r_milk)
run_simulation(milk, update)
final_temp(milk)
# Solution goes here
# Solution goes here
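# A sketch of an error function specific to the milk problem (target: 20 C
# after 15 minutes). Note that error_func1 above targets the coffee scenario
# (70 C after 30 minutes), so reusing it recovers r_coffee rather than r_milk.
def error_func_milk(r):
    system = make_system(T_init=5, t_end=15, r=r)
    run_simulation(system, update)
    return final_temp(system) - 20

r_milk_est = fsolve(error_func_milk, 0.1)[0]
r_milk_est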
###Output
_____no_output_____
###Markdown
Mixing liquids The following function takes `System` objects that represent two liquids, computes the temperature of the mixture, and returns a new `System` object that represents the mixture.
###Code
def mix(s1, s2):
"""Simulates the mixture of two liquids.
s1: System representing coffee
s2: System representing milk
returns: System representing the mixture
"""
assert s1.t_end == s2.t_end
volume = s1.volume + s2.volume
temp = (s1.volume * final_temp(s1) +
s2.volume * final_temp(s2)) / volume
mixture = make_system(T_init=temp,
volume=volume,
r=s1.r)
return mixture
###Output
_____no_output_____
###Markdown
First we'll see what happens if we add the milk at the end. We'll simulate the coffee and the milk separately.
###Code
coffee = make_system(T_init=90, t_end=30, r=r_coffee, volume=300)
run_simulation(coffee, update)
final_temp(coffee)
milk = make_system(T_init=5, t_end=30, r=r_milk, volume=50)
run_simulation(milk, update)
final_temp(milk)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(coffee.results.temp, label='coffee')
plot(milk.results.temp, '--', label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)',
loc='center left')
savefig('chap07-fig01.pdf')
###Output
Saving figure to file chap07-fig01.pdf
###Markdown
Here's what happens when we mix them.
###Code
mix_last = mix(coffee, milk)
final_temp(mix_last)
###Output
_____no_output_____
###Markdown
And here's what we get if we add the milk immediately.
###Code
coffee = make_system(T_init=90, r=r_coffee, volume=300)
milk = make_system(T_init=5, r=r_milk, volume=50)
mix_first = mix(coffee, milk)
mix_first.t_end = 30
run_simulation(mix_first, update)
final_temp(mix_first)
###Output
_____no_output_____
###Markdown
The following function takes `t_add`, which is the time when the milk is added, and returns the final temperature.
###Code
def run_and_mix(t_add, t_total=30):
"""Simulates two liquids and them mixes them at t_add.
t_add: time in minutes
t_total: total time to simulate, min
returns: final temperature
"""
coffee = make_system(T_init=90, t_end=t_add,
r=r_coffee, volume=300)
run_simulation(coffee, update)
milk = make_system(T_init=5, t_end=t_add,
r=r_milk, volume=50)
run_simulation(milk, update)
mixture = mix(coffee, milk)
mixture.t_end = t_total - t_add
run_simulation(mixture, update)
return final_temp(mixture)
###Output
_____no_output_____
###Markdown
We can try it out with a few values.
###Code
run_and_mix(0)
run_and_mix(15)
run_and_mix(30)
###Output
_____no_output_____
###Markdown
And then sweep a range of values for `t_add`
###Code
sweep = SweepSeries()
for t_add in linrange(0, 30, 2):
temp = run_and_mix(t_add)
sweep[t_add] = temp
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot(sweep, color='purple')
decorate(xlabel='Time added (min)',
ylabel='Final temperature (C)',
legend=False)
savefig('chap07-fig02.pdf')
###Output
Saving figure to file chap07-fig02.pdf
###Markdown
**Exercise:** Suppose the coffee shop won't let me take milk in a separate container, but I keep a bottle of milk in the refrigerator at my office. In that case is it better to add the milk at the coffee shop, or wait until I get to the office? The coffee shop, because milk added there warms up along with the coffee, while milk kept in the office refrigerator stays cold at a constant temperature; adding it early therefore produces a warmer final beverage than adding it at the office. Hint: Think about the simplest way to represent the behavior of a refrigerator in this model. The change you make to test this variation of the problem should be very small! Analysis Now we can use the analytic result to compute temperature as a function of time. The following function is similar to `run_simulation`.
###Code
def run_analysis(system):
"""Computes temperature using the analytic solution.
Adds TimeFrame to `system` as `results`
system: System object
"""
unpack(system)
T_init = init.temp
ts = linrange(t0, t_end, dt)
temp_array = T_env + (T_init - T_env) * exp(-r * ts)
temp_series = TimeSeries(temp_array, index=ts)
system.results = TimeFrame(temp_series, columns=['temp'])
###Output
_____no_output_____
###Markdown
Here's how we run it. From the analysis, we have the computed value of `r_coffee2`
###Code
r_coffee2 = 0.011610223142273859
init = State(temp=90)
coffee2 = System(init=init, T_env=22, r=r_coffee2,
t0=0, t_end=30)
run_analysis(coffee2)
final_temp(coffee2)
###Output
_____no_output_____
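###Markdown
Where does `r_coffee2` come from? A minimal sketch of the derivation, assuming the analytic solution $T(t) = T_{env} + (T_{init} - T_{env}) \exp(-r t)$ and the requirement that the coffee reach 70 C at $t = 30$ minutes:
###Code
from numpy import log
# Solve 70 = 22 + (90 - 22) * exp(-r * 30) for r
r_check = log((90 - 22) / (70 - 22)) / 30
r_check  # approximately 0.01161, matching r_coffee2
###Output
_____no_output_____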
###Markdown
And we can compare to the results from simulation.
###Code
init = State(temp=90)
coffee = System(init=init, T_env=22, r=r_coffee,
t0=0, t_end=30, dt=1)
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
They are identical except for small roundoff errors.
###Code
coffee.results - coffee2.results
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 7 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0017)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better? Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion.In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `system`.**Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
None
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
13.88888888888889
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $\Delta p = r p (1 - p / K)$, where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
###Code
def update_func_1(pop, t, system):
r = system.alpha
K = -1 * system.alpha/system.beta
net_growths = r * pop * (1-(pop/K))
return pop + net_growths
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0017)
results = run_simulation(system, update_func_1)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig06.pdf')
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
###Output
_____no_output_____
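###Markdown
A quick numerical check (a sketch under the assumption `alpha=0.025, beta=-0.0018`, not part of the original notebook): the two parameterizations should agree on every one-step update, since $r=\alpha$ and $K=-\alpha/\beta$.
###Code
check_sys = System(t_0=t_0, t_end=t_end, p_0=p_0,
                   alpha=0.025, beta=-0.0018)
pop_test = 3.0
update_func_1(pop_test, t_0, check_sys), update_func_quad(pop_test, t_0, check_sys)
###Output
_____no_output_____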
###Markdown
Modeling and Simulation in Python Chapter 7 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better? Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion.In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `system`.**Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
None
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
13.88888888888889
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $\Delta p = r p (1 - p / K)$, where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
###Code
def update_func_new(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
    net_growth = system.alpha * pop * (1 - pop / (-system.alpha / system.beta))
return pop + net_growth
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
results = run_simulation(system, update_func_new)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
Modeling and Simulation in Python Chapter 7: Thermal systems Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# tempo switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
The coffee cooling problem. I'll use a `State` object to store the initial temperature.
###Code
init = State(temp=90)
init
###Output
_____no_output_____
###Markdown
And a `System` object to contain the system parameters.
###Code
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t0=0,
t_end=30,
dt=1)
coffee
###Output
_____no_output_____
###Markdown
The `update` function implements Newton's law of cooling.
###Code
def update(state, system):
"""Update the thermal transfer model.
state: State (temp)
system: System object
returns: State (temp)
"""
unpack(system)
T = state.temp
T += -r * (T - T_env) * dt
return State(temp=T)
###Output
_____no_output_____
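###Markdown
For reference, the model behind `update` is Newton's law of cooling, $\frac{dT}{dt} = -r \, (T - T_{env})$, discretized here with the Euler step $T_{n+1} = T_n - r \, (T_n - T_{env}) \, dt$.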
###Markdown
Here's how it works.
###Code
update(init, coffee)
###Output
_____no_output_____
###Markdown
Now we can run simulations using the same function from the previous chapter.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.loc[t0] = init
ts = linrange(t0, t_end-dt, dt)
for t in ts:
frame.loc[t+dt] = update_func(frame.loc[t], system)
system.results = frame
###Output
_____no_output_____
###Markdown
And here's how it works.
###Code
run_simulation(coffee, update)
coffee.results
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(coffee.results.temp, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
After running the simulation, we can extract the final temperature from the results.
###Code
def final_temp(system):
"""Final temperature.
If system has no results, return initial temp.
system: System object.
returns: temperature (degC)
"""
if hasattr(system, 'results'):
return system.results.temp[system.t_end]
else:
return system.init.temp
###Output
_____no_output_____
###Markdown
It will be convenient to wrap these steps in a function. `kwargs` is a collection of whatever keyword arguments are provided; they are passed along as arguments to `System`.
###Code
def make_system(T_init=90, r=0.01, volume=300, t_end=30):
"""Runs a simulation with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(temp=T_init)
system = System(init=init,
volume=volume,
r=r,
T_env=22,
t0=0,
t_end=t_end,
dt=1)
return system
###Output
_____no_output_____
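###Markdown
The note above mentions `kwargs`, but this version of `make_system` spells out its parameters explicitly. A minimal sketch of the keyword-argument variant (my own illustration of the idea, not the book's exact code); any extra keyword arguments are passed straight through to `System`:
###Code
def make_system_kw(T_init=90, r=0.01, volume=300, t_end=30, **kwargs):
    """Like make_system, but forwards extra keyword arguments to System."""
    init = State(temp=T_init)
    return System(init=init, volume=volume, r=r,
                  T_env=22, t0=0, t_end=t_end, dt=1, **kwargs)
###Output
_____no_output_____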
###Markdown
Here's how we use it:
###Code
coffee = make_system()
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
**Exercise:** Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.
###Code
# Solution goes here
init1 = State(temp=5)
milk = System(init=init1,
volume=50,
r=0.01,
T_env=22,
t0=0,
t_end=15,
dt=1)
milk
def update(state, system):
"""Update the thermal transfer model.
state: State (temp)
system: System object
returns: State (temp)
"""
unpack(system)
T = state.temp
T += -r * (T - T_env) * dt
return State(temp=T)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.loc[t0] = init
ts = linrange(t0, t_end-dt, dt)
for t in ts:
frame.loc[t+dt] = update_func(frame.loc[t], system)
system.results = frame
run_simulation(milk, update)
milk.results
plot(milk.results.temp, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
###Output
_____no_output_____
###Markdown
Using `fsolve` As a simple example, let's find the roots of this function; that is, the values of `x` that make the result 0.
###Code
def func(x):
return (x-1) * (x-2) * (x-3)
###Output
_____no_output_____
###Markdown
`modsim.py` provides `fsolve`, which does some error-checking and then runs `scipy.optimize.fsolve`. The first argument is the function whose roots we want. The second argument is an initial guess.
###Code
fsolve(func, x0=0)
###Output
_____no_output_____
###Markdown
Usually the root we get is the one that's closest to the initial guess.
###Code
fsolve(func, 1.9)
fsolve(func, 2.9)
###Output
_____no_output_____
###Markdown
But not always.
###Code
fsolve(func, 1.5)
###Output
_____no_output_____
###Markdown
We want to find the value of `r` that makes the final temperature 70, so we define an "error function" that takes `r` as a parameter and returns the difference between the final temperature and the goal.
###Code
def error_func1(r):
"""Runs a simulation and returns the `error`.
r: heat transfer rate, in 1/min
returns: difference between final temp and 70 C
"""
system = make_system(r=r)
run_simulation(system, update)
return final_temp(system) - 70
###Output
_____no_output_____
###Markdown
With `r=0.01`, we end up a little too warm.
###Code
error_func1(r=0.01)
###Output
_____no_output_____
###Markdown
The return value from `fsolve` is an array with a single element, the estimated value of `r`.
###Code
solution = fsolve(error_func1, 0.01, xtol=1e-8)
r_coffee = solution[0]
r_coffee
###Output
_____no_output_____
###Markdown
If we run the simulation with the estimated value of `r`, the final temperature is 70 C, as expected.
###Code
coffee = make_system(r=r_coffee)
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
**Exercise:** When you call `fsolve`, it calls `error_func1` several times. To see how this works, add a print statement to `error_func1` and run `fsolve` again. **Exercise:** Repeat this process to estimate `r_milk`, given that it starts at 5 C and reaches 20 C after 15 minutes. Before you use `fsolve`, you might want to try a few values for `r_milk` and see how close you can get by trial and error. Here's an initial guess to get you started:
###Code
r_milk = 0.1
milk = make_system(T_init=5, t_end=15, r=r_milk)
run_simulation(milk, update)
final_temp(milk)
# Solution goes here
def error_func2(r):
    """Runs the milk simulation and returns the `error`.
    r: heat transfer rate, in 1/min
    returns: difference between final temp and 20 C
    """
    system = make_system(T_init=5, t_end=15, r=r)
    run_simulation(system, update)
    return final_temp(system) - 20
# Solution goes here
error_func2(r=0.1)
# Solution goes here
solution = fsolve(error_func2, 0.1, xtol=1e-8)
r_milk = solution[0]
r_milk
# Solution goes here
milk = make_system(r=r_milk, T_init=5, t_end=15)
run_simulation(milk, update)
final_temp(milk)
###Output
_____no_output_____
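###Markdown
As a sanity check on `r_milk` (a sketch, not part of the original notebook): the continuous-time solution $T(t) = T_{env} + (T_{init} - T_{env}) \exp(-r t)$ gives a closed-form estimate that should land near the value found by `fsolve`, though not exactly on it, because the simulation uses discrete one-minute steps.
###Code
from numpy import log
# Solve 20 = 22 + (5 - 22) * exp(-r * 15) for r
r_milk_analytic = log((5 - 22) / (20 - 22)) / 15
r_milk_analytic  # roughly 0.14; the discrete-step estimate from fsolve comes out somewhat lower
###Output
_____no_output_____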
###Markdown
Mixing liquids The following function takes `System` objects that represent two liquids, computes the temperature of the mixture, and returns a new `System` object that represents the mixture.
###Code
def mix(s1, s2):
"""Simulates the mixture of two liquids.
s1: System representing coffee
s2: System representing milk
returns: System representing the mixture
"""
assert s1.t_end == s2.t_end
volume = s1.volume + s2.volume
temp = (s1.volume * final_temp(s1) +
s2.volume * final_temp(s2)) / volume
mixture = make_system(T_init=temp,
volume=volume,
r=s1.r)
return mixture
###Output
_____no_output_____
###Markdown
First we'll see what happens if we add the milk at the end. We'll simulate the coffee and the milk separately.
###Code
coffee = make_system(T_init=90, t_end=30, r=r_coffee, volume=300)
run_simulation(coffee, update)
final_temp(coffee)
milk = make_system(T_init=5, t_end=30, r=r_milk, volume=50)
run_simulation(milk, update)
final_temp(milk)
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
plot(coffee.results.temp, label='coffee')
plot(milk.results.temp, '--', label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)',
loc='center left')
savefig('chap07-fig01.pdf')
###Output
_____no_output_____
###Markdown
Here's what happens when we mix them.
###Code
mix_last = mix(coffee, milk)
final_temp(mix_last)
###Output
_____no_output_____
###Markdown
And here's what we get if we add the milk immediately.
###Code
coffee = make_system(T_init=90, r=r_coffee, volume=300)
milk = make_system(T_init=5, r=r_milk, volume=50)
mix_first = mix(coffee, milk)
mix_first.t_end = 30
run_simulation(mix_first, update)
final_temp(mix_first)
###Output
_____no_output_____
###Markdown
The following function takes `t_add`, which is the time when the milk is added, and returns the final temperature.
###Code
def run_and_mix(t_add, t_total=30):
"""Simulates two liquids and them mixes them at t_add.
t_add: time in minutes
t_total: total time to simulate, min
returns: final temperature
"""
coffee = make_system(T_init=90, t_end=t_add,
r=r_coffee, volume=300)
run_simulation(coffee, update)
milk = make_system(T_init=5, t_end=t_add,
r=r_milk, volume=50)
run_simulation(milk, update)
mixture = mix(coffee, milk)
mixture.t_end = t_total - t_add
run_simulation(mixture, update)
return final_temp(mixture)
###Output
_____no_output_____
###Markdown
We can try it out with a few values.
###Code
run_and_mix(0)
run_and_mix(15)
run_and_mix(30)
###Output
_____no_output_____
###Markdown
And then sweep a range of values for `t_add`
###Code
sweep = SweepSeries()
for t_add in linrange(0, 30, 2):
temp = run_and_mix(t_add)
sweep[t_add] = temp
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot(sweep, color='purple')
decorate(xlabel='Time added (min)',
ylabel='Final temperature (C)',
legend=False)
savefig('chap07-fig02.pdf')
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose the coffee shop won't let me take milk in a separate container, but I keep a bottle of milk in the refrigerator at my office. In that case is it better to add the milk at the coffee shop, or wait until I get to the office? Hint: Think about the simplest way to represent the behavior of a refrigerator in this model. The change you make to test this variation of the problem should be very small! Analysis Now we can use the analytic result to compute temperature as a function of time. The following function is similar to `run_simulation`.
###Code
def run_analysis(system):
"""Computes temperature using the analytic solution.
Adds TimeFrame to `system` as `results`
system: System object
"""
unpack(system)
T_init = init.temp
ts = linrange(t0, t_end, dt)
temp_array = T_env + (T_init - T_env) * exp(-r * ts)
temp_series = TimeSeries(temp_array, index=ts)
system.results = TimeFrame(temp_series, columns=['temp'])
###Output
_____no_output_____
###Markdown
Here's how we run it. From the analysis, we have the computed value of `r_coffee2`
###Code
r_coffee2 = 0.011610223142273859
init = State(temp=90)
coffee2 = System(init=init, T_env=22, r=r_coffee2,
t0=0, t_end=30)
run_analysis(coffee2)
final_temp(coffee2)
###Output
_____no_output_____
###Markdown
And we can compare to the results from simulation.
###Code
init = State(temp=90)
coffee = System(init=init, T_env=22, r=r_coffee,
t0=0, t_end=30, dt=1)
run_simulation(coffee, update)
final_temp(coffee)
###Output
_____no_output_____
###Markdown
They are identical except for small roundoff errors.
###Code
coffee.results - coffee2.results
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 7 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0235,
beta=-0.00147)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better? Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion.In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `system`.**Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
None
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
13.88888888888889
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $\Delta p = r p (1 - p / K)$, where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
###Code
def update_func_exercise(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha*pop * (1 - pop/(-system.alpha/system.beta))
return pop + net_growth
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
sys_exercise = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0235,
beta=-0.00147)
results = run_simulation(sys_exercise, update_func_exercise)
plot_results(census, un, results, 'Quadratic model')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 7 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better? Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion.In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `system`.**Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
_____no_output_____
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
_____no_output_____
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
_____no_output_____
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $\Delta p = r p (1 - p / K)$, where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
###Code
def update_func_quad2(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.r * pop * (1 - pop / system.k)
return pop + net_growth
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system2 = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
r=0.025,
k=-0.025/-0.0018
)
###Output
_____no_output_____
###Code
results = run_simulation(system2, update_func_quad2)
plot_results(census, un, results, 'Quadratic R-K model')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 7 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0255,
beta=-0.0019)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better?
###Code
## Yes, I was able to find parameters that worked pretty well:
"""
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0255,
beta=-0.0019)
"""
## These are not much better than before, however.
###Output
_____no_output_____
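###Markdown
One more systematic way to search (a sketch of the idea, not the notebook author's method): sweep a small grid of `alpha` and `beta` values and score each pair by the mean absolute difference between the simulated series and the census estimates.
###Code
best_params, best_error = None, None
for alpha_try in [0.024, 0.025, 0.0255, 0.026]:
    for beta_try in [-0.0017, -0.0018, -0.0019, -0.0020]:
        sys_try = System(t_0=t_0, t_end=t_end, p_0=p_0,
                         alpha=alpha_try, beta=beta_try)
        model = run_simulation(sys_try, update_func_quad)
        # mean absolute difference over the years the census covers
        error = abs(model - census).dropna().mean()
        if best_error is None or error < best_error:
            best_params, best_error = (alpha_try, beta_try), error
best_params, best_error
###Output
_____no_output_____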
###Markdown
Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion.In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `system`.**Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
None
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
13.88888888888889
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $ \Delta p = r p (1 - p / K) $ where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
###Code
def update_func_quad_reparameterize(pop, t, system):
net_growth = system.r * pop * (1 - pop / system.K)
return pop + net_growth
system.r = system.alpha
system.K = -system.alpha/system.beta
results = run_simulation(system, update_func_quad_reparameterize)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 7. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Code from the previous chapter
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Quadratic growth Here's the implementation of the quadratic growth model.
###Code
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Here's a `System` object with the parameters `alpha` and `beta`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0251,
beta=-0.00185)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
###Markdown
**Exercise:** Can you find values for the parameters that make the model fit better? Equilibrium To understand the quadratic model better, let's plot net growth as a function of population.
###Code
pop_array = linspace(0, 15, 100)
net_growth_array = system.alpha * pop_array + system.beta * pop_array**2
None
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
sns.set_style('whitegrid')
plot(pop_array, net_growth_array)
decorate(xlabel='Population (billions)',
ylabel='Net growth (billions)')
savefig('figs/chap03-fig05.pdf')
sns.set_style('white')
###Output
Saving figure to file figs/chap03-fig05.pdf
###Markdown
Here's what it looks like. Remember that the x axis is population now, not time. It looks like the growth rate passes through 0 when the population is a little less than 14 billion. In the book we found that the net growth is 0 when the population is $-\alpha/\beta$:
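(To see where that comes from, added here for reference: setting the net growth $\alpha p + \beta p^2 = p(\alpha + \beta p)$ equal to zero gives $p = 0$ or $p = -\alpha/\beta$; the second root is the nonzero equilibrium.)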
###Code
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
This is the equilibrium the population tends toward. `sns` is a library called Seaborn which provides functions that control the appearance of plots. In this case I want a grid to make it easier to estimate the population where the growth rate crosses through 0. Dysfunctions When people first learn about functions, there are a few things they often find confusing. In this section I present and explain some common problems with functions.As an example, suppose you want a function that takes a `System` object, with variables `alpha` and `beta`, as a parameter and computes the carrying capacity, `-alpha/beta`. Here's a good solution:
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
Now let's see all the ways that can go wrong.**Dysfunction 1:** Not using parameters. In the following version, the function doesn't take any parameters; when `sys1` appears inside the function, it refers to the object we created outside the function.
###Code
def carrying_capacity():
K = -sys1.alpha / sys1.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity()
print(pop)
###Output
13.88888888888889
###Markdown
This version actually works, but it is not as versatile as it could be. If there are several `System` objects, this function can only work with one of them, and only if it is named `sys1`. **Dysfunction 2:** Clobbering the parameters. When people first learn about parameters, they often write functions like this:
###Code
def carrying_capacity(system):
system = System(alpha=0.025, beta=-0.0018)
K = -system.alpha / system.beta
return K
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
13.88888888888889
###Markdown
In this example, we have a `System` object named `sys1` that gets passed as an argument to `carrying_capacity`. But when the function runs, it ignores the argument and immediately replaces it with a new `System` object. As a result, this function always returns the same value, no matter what argument is passed.When you write a function, you generally don't know what the values of the parameters will be. Your job is to write a function that works for any valid values. If you assign your own values to the parameters, you defeat the whole purpose of functions.**Dysfunction 3:** No return value. Here's a version that computes the value of `K` but doesn't return it.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
sys1 = System(alpha=0.025, beta=-0.0018)
pop = carrying_capacity(sys1)
print(pop)
###Output
None
###Markdown
A function that doesn't have a return statement always returns a special value called `None`, so in this example the value of `pop` is `None`. If you are debugging a program and find that the value of a variable is `None` when it shouldn't be, a function without a return statement is a likely cause.**Dysfunction 4:** Ignoring the return value. Finally, here's a version where the function is correct, but the way it's used is not.
###Code
def carrying_capacity(system):
K = -system.alpha / system.beta
return K
sys2 = System(alpha=0.025, beta=-0.0018)
carrying_capacity(sys2)
# print(K) This line won't work because K only exists inside the function.
###Output
_____no_output_____
###Markdown
In this example, `carrying_capacity` runs and returns `K`, but the return value is dropped.When you call a function that returns a value, you should do something with the result. Often you assign it to a variable, as in the previous examples, but you can also use it as part of an expression.For example, you could eliminate the temporary variable `pop` like this:
###Code
print(carrying_capacity(sys1))
###Output
13.88888888888889
###Markdown
Or if you had more than one system, you could compute the total carrying capacity like this:
###Code
total = carrying_capacity(sys1) + carrying_capacity(sys2)
total
###Output
_____no_output_____
###Markdown
Exercises **Exercise:** In the book, I present a different way to parameterize the quadratic model: $ \Delta p = r p (1 - p / K) $ where $r=\alpha$ and $K=-\alpha/\beta$. Write a version of `update_func` that implements this version of the model. Test it by computing the values of `r` and `K` that correspond to `alpha=0.025, beta=-0.0018`, and confirm that you get the same results.
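A quick bit of algebra (added here as a check, not part of the original exercise text) shows why the two forms agree: with $r=\alpha$ and $K=-\alpha/\beta$, we have $r p (1 - p/K) = \alpha p \left(1 + \frac{\beta}{\alpha} p\right) = \alpha p + \beta p^2$, which is exactly the net growth of the original quadratic model.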
###Code
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.0251,
beta=-0.00185, r = 0.0251,
K = -0.0251/-0.00185)
def update_func_quad(pop, t, system):
"""Compute the population next year with a quadratic model.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = pop*system.r*(1-pop/system.K)
return pop + net_growth
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'Quadratic model')
savefig('figs/chap03-fig04.pdf')
###Output
Saving figure to file figs/chap03-fig04.pdf
|
wk2_dataframes/solutions/py_wk2_Dataframes_SOLUTIONS.ipynb | ###Markdown
Welcome to Week 2: Dataframes! In weeks 2 and 3, we're going to focus on two things, which are essentially the basics of all downstream bioinformatics that you'll do. First, learning to work with dataframes: we're going to use the package pandas, which is one of the most commonly used packages for data science. https://pandas.pydata.org/docs/getting_started/10min.html has a brief introduction, if you are curious. The core idea of dataframes - the primary datatype associated with pandas - is that you have a two-dimensional matrix of data (i.e., rows and columns, like an Excel spreadsheet), and can associate a *label* with each row and column. For example, with scRNA data, you have a 2D matrix of gene expression counts, where each row is a gene and each column is a cell. If you wanted to look up the expression for a particular gene in a particular cell, rather than have to know the particular XY "coordinates" of that datapoint (i.e., gene row 1827 and cell column 2937), you can just pass in the names of the gene and cell. If you wanted to sort the dataframe by the expression of a particular gene, you'd want to make sure that the pairings of gene names, cell names, and datapoints stay correct through this sorting process, and pandas dataframes help take care of this to keep everything organized and correct. Don't worry if this doesn't make too much sense now - it'll make more sense when we start playing with actual examples. In addition to pandas, we're going to use the package numpy, which is the core "math" package ("scientific computing", as they describe it - https://docs.scipy.org/doc/numpy/user/quickstart.html and https://docs.scipy.org/doc/numpy/user/basics.html). Oftentimes when working with large datasets, you want to perform a simple operation (for example, log transform or depth normalize) on many pieces of data. Numpy implements a lot of tricks under the hood to perform vectorized math operations very efficiently - doing the same operation to many pieces of data. Numpy is built around *arrays*, which are a 1D datatype: essentially a list, but with a lot of added tricks. Say you have a bunch of datapoints - gene counts, for example - and want to multiply each one by 2. Using a list, you would need to do this one-by-one for each element: iterate through the entire list with a for loop (or list comprehension) and multiply each value by two. However, using a numpy array, you can simply multiply the entire array by 2, and numpy will return the element-wise product of the array by 2 (multiplying each element by 2). Again, this will make a little more sense once you've played around with it a little. **I would recommend skimming through the introductions for pandas and numpy, since you'll want to become familiar with them both for this lesson and going forward. It's not as crucial that you memorize each function and every feature, but good to just have a sense of what is possible, so that you can remember that there should be a way to do something easily, then google for it later on and re-figure out how to do it.** * https://pandas.pydata.org/docs/getting_started/10min.html * https://docs.scipy.org/doc/numpy/user/quickstart.html The second thing that we're going to focus on is plotting. **Matplotlib** is the core plotting package in Python. It is built around two concepts: the figure, which is the "overall" image - think about it like a piece of paper or figure panel - and axes, which are the specific XY axes where you plot things.
The simplest example is a figure with one axis - say a simple scatter plot. This is what you'll do 90% of the time. Sometimes, though, you might want to group together multiple plots at the same time - say you have four scatter plots you want to make together. In this case, the figure might have four axes (a 2-by-2 grid of scatter plots). The important thing to remember is that when you're plotting, you 1) create a figure, 2) create an axis, 3) plot things on that axis, [4) create & plot on any additional axes if applicable], and 5) save the figure (which contains the axis/axes you've plotted things on). Two useful matplotlib links with some tutorials and example plots: * https://matplotlib.org/tutorials/index.html * https://matplotlib.org/gallery/index.html Three other packages that we aren't going to use here, but you will also encounter down the road: scipy, which has a lot of more specialized functions for things like statistics (and many others - https://docs.scipy.org/doc/scipy/reference/, https://docs.scipy.org/doc/scipy/reference/tutorial/index.html), and **scikit-learn**, which is the core machine learning package (https://scikit-learn.org/stable/getting_started.html), and **seaborn**, which is another data visualization package (https://seaborn.pydata.org/introduction.html) built on matplotlib. Import Statements First, let's import the packages that we are going to use this week and next: pandas, numpy, and matplotlib. We're going to abbreviate their names as follows: import pandas as pd import numpy as np import matplotlib as mpl Then, when we want to do things with numpy, for example, such as the log10() function, rather than say: numpy.log10(my_data), we can say np.log10(my_data). Note that if we wanted to just import numpy (and not rename it - so saying numpy.log10(my_data)), we would just say: import numpy We can also import a particular function from numpy, rather than everything: from numpy import log10 If we ran that, we would be importing just the log10() function from numpy, rather than the package as a whole. We would then access this function by saying log10(my_data), rather than np.log10(my_data). You can also put these things together and say: from matplotlib import pyplot as plt Here, we're importing pyplot from the matplotlib package, and renaming it plt to save us some typing.
###Code
import pandas as pd
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
###Output
_____no_output_____
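###Markdown
Before moving on, here is a tiny added illustration (not part of the original lesson) of the figure/axis workflow described above: create a figure and an axis, plot on the axis, then save the figure. The file name is just a placeholder.
###Code
# Added sketch of the matplotlib workflow (illustrative only)
fig, ax = plt.subplots()              # steps 1 and 2: make a figure and an axis
ax.scatter([1, 2, 3], [2, 4, 6])      # step 3: plot on that axis
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.savefig('example_scatter.png')    # step 5: save the figure
###Output
_____no_output_____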
###Markdown
1. Lists, loops, and arrays. First, we're going to do a quick overview of lists vs. arrays, and also list comprehensions. 1.1 Lists, loops, and list comprehensions. Here, I've created a list, where each element is a string. Let's say I want to convert each element to be an integer. There are two ways to do this. In the first way, we're creating a new empty list, iterating through each element of string_list, converting it to an integer, and adding it to our new empty list. In the second way, we're using a list comprehension to do this all in one step.
###Code
string_list = ['1','2','3','4','5','6','7','8','9','10']
# first way
int_list = []
for i in string_list:
int_list += [int(i)]
print(int_list)
# second way
int_list2 = [int(i) for i in string_list]
print(int_list2)
# checking that they are equal
print(int_list == int_list2)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
True
###Markdown
List comprehensions are your friend - they can make it easier to do simple operations to an entire list. The basic syntax is: [function(variable) for variable in thing_to_iterate_over] https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/ has a good short tutorial that is worth reading through. You can also make a list comprehension include a conditional: [function(variable) for variable in thing_to_iterate_over if condition] I'm going to provide a few examples below of the same thing done either with a loop or list comprehension, and then ask you to convert a few loops to comprehensions and vice versa. 1.1 Examples
###Code
# make a list containing the integers from 0 to 10
# here, we are using the range() function, which will automatically start at 0
# and then iterate up to the number you provide
# with a loop
list_1 = []
for i in range(10):
list_1 += [i]
# list comprehension
list_2 = [i for i in range(10)]
print(list_1)
print(list_2)
# make a list containing the integers from 10 to 20, but as strings.
# if you provide two inputs to the range() function, it will start at the first one, and stop just before the second one
# with a loop
list_1 = []
for i in range(10, 20):
list_1 += [str(i)]
# list comprehension
list_2 = [str(i) for i in range(10, 20)]
print(list_1)
print(list_2)
# make a list of the first ten integers squared
# note that you can say either i*i or i**2 to square a number
# to cube it, you could say i*i*i or i**3, and so on
# with a loop
list_1 = []
for i in range(10):
list_1 += [i * i]
# list comprehension
list_2 = [i*i for i in range(10)]
print(list_1)
print(list_2)
# iterate through input_list
# if the integer is less than or equal to 10, then square it
# otherwise, don't include it
input_list = [10, 4, 28, 3, 1, 930, 3928, 6, 2, 8, 2038]
# with a loop
list_1 = []
for i in input_list:
if i <= 10:
list_1 += [i**2]
# list comprehension
list_2 = [i*i for i in input_list if i <= 10]
print(list_1)
print(list_2)
###Output
[100, 16, 9, 1, 36, 4, 64]
[100, 16, 9, 1, 36, 4, 64]
###Markdown
The following two examples are cases where you can write things with a list comprehension - but it starts to get a little hard to follow, and you might just be better off writing them with a normal loop, because the list comprehension starts to become a little unreadable.
###Code
# iterate through input list
# if it is less than or equal to 10, return the integer squared
# otherwise, return the integer raised to the fourth power
# note that when you have an if...else that the location gets moved around
input_list = [10, 4, 28, 3, 1, 930, 3928, 6, 2, 8, 2038]
# with a loop
list_1 = []
for i in input_list:
if i <= 10:
list_1 += [i**2]
else:
list_1 += [i**4]
# list comprehension
list_2 = [i**2 if i <= 10 else i ** 4 for i in input_list]
print(list_1)
print(list_2)
# iterate through integers from 0 to 10
# if it is less than or equal to 5, return 'black'
# otherwise, if it is less than 8, return 'red'
# otherwise, return 'blue'
# note that when you have an if...else that the location gets moved around
# with a loop
list_1 = []
for i in range(10):
if i <= 5:
list_1 += ['black']
elif i < 8:
list_1 += ['red']
else:
list_1 += ['blue']
# list comprehension
list_2 = ['black' if i <= 5 else 'red' if i < 8 else 'blue' for i in range(10)]
print(list_1)
print(list_2)
###Output
['black', 'black', 'black', 'black', 'black', 'black', 'red', 'red', 'blue', 'blue']
['black', 'black', 'black', 'black', 'black', 'black', 'red', 'red', 'blue', 'blue']
###Markdown
1.2 Problems Convert the loops to list comprehensions, and the list comprehensions to loops. Check that the results are equal.
###Code
list_1 = []
for i in range(20):
list_1 += [4 * i - 2]
print(list_1)
# write answer below
list_2 = [4 * i - 2 for i in range(20)]
print(list_2)
print(list_1 == list_2)
input_list = ['black','black','orange','black','red','black','red','black','red','red','black','green','blue','purple']
list_1 = []
for i in input_list:
if i == 'black':
list_1 += [1]
else:
list_1 += [5]
print(list_1)
# write answer below
list_2 = [1 if i == 'black' else 5 for i in input_list]
print(list_2)
print(list_1 == list_2)
list_1 = [str(i / 2) for i in range(15)]
print(list_1)
# write answer below
list_2 = []
for i in range(15):
list_2 += [str(i / 2)]
print(list_2)
print(list_1 == list_2)
input_list = [1,4,8,2,40,2038,233,23,1,5,3,882]
list_1 = [i for i in input_list if i % 2 == 0]
print(list_1)
# write your answer below
list_2 = []
for i in input_list:
if i % 2 == 0:
list_2 += [i]
print(list_2)
print(list_1 == list_2)
###Output
[4, 8, 2, 40, 2038, 882]
[4, 8, 2, 40, 2038, 882]
True
###Markdown
1.2 Numpy arrays To create an array from a list, you say: new_array = np.array(old_list) We're going to try doing the same things to lists and arrays to see what happens in each case. **Before running the cells below, try to guess what the output will be in each case (for the list versus array), and pay attention to the differences between how lists and arrays behave.**
###Code
test_list = [i for i in range(10)]
test_array = np.array(test_list)
print(test_list)
print(test_array)
print()
# what happens if we multiply by two?
print(test_list * 2)
print(test_array * 2)
# what happens if we try to add one to each one?
# note that this will only work for the arrays: it will throw an error for the list
print(test_list + 1)
print(test_array + 1)
# what happens if we try to add two lists or two arrays together?
print(test_list + test_list)
print(test_array + test_array)
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[ 0 2 4 6 8 10 12 14 16 18]
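###Markdown
One more added illustration of the same idea, this time with a math function: operating on the whole array at once is what makes things like log-transforming counts convenient later on.
###Code
# Added example: element-wise log10 on an array (no loop needed)
counts = np.array([10, 100, 1000, 10000])
print(np.log10(counts))
###Output
[1. 2. 3. 4.]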
###Markdown
**This is all that we are going to go over for now - the main takeaway here is that when you're dealing with arrays, you're performing the same operation on all elements of the array.** 2. Importing a genome annotation file in pandas Here, we're going to look at a file that I've downloaded from the ENSEMBL website that contains annotation information for various genes in the genome. This file was originally downloaded with transcript-based annotations, which I converted to be gene-based. When you're doing RNA-seq analysis, you can either perform analyses at the transcript level (meaning considering different isoforms of the same gene differently) or at the gene level (aggregating different isoforms of the same gene); we're going to focus on gene level analysis for now. First, we need to import the annotation file. I typically like to define paths and file names at the start, just to keep things organized. 1. Create a variable called 'path' which contains the directory listing to wherever you downloaded the files. 2. Create a variable called 'fn' which is the name of the file. As a reminder, both of these should be strings, and the variable 'path' should end with a '/'.
###Code
# you will need to change this based on where you saved the files on your computer, as you did last week
# path = '/path/to/the/directory/containing/the/file/'
# fn = 'name_of_the_file.extension'
###Output
_____no_output_____
###Markdown
**Using pd.read_csv(), import the txt file (comma delimited) containing the annotations into a dataframe called 'anno', and set the index to be the 'gene' column. Use .head() to show the first 5 rows of the resulting dataframe.** See https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html and https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html and https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html for reference. These are part of the pandas documentation. I've provided these here just so you can get started, but in the future, I'll provide some hints/direction as to how to go about something, but it will be up to you to look up how to actually use the functions in the pandas (or other) documentation. In real life, you'll have to look things up yourself, and I'm constantly looking up things that I've forgotten, don't know how to do, or don't want to figure out and would rather copy something somebody else already figured out and helpfully posted online.
###Code
path = '/Users/kevin/changlab/github/Bioinformatics-Tutorials/wk2_dataframes/data/'
fn = '/Homo_sapiens.GRCh38.gene_annotations.txt.gz'
anno = pd.read_csv(path + fn, sep=',')
anno = anno.set_index('gene')
anno.head()
###Output
_____no_output_____
###Markdown
**Print the information for the gene** *'ENSG00000181449.3'* **. You should familiarize yourself with the .loc and .iloc commands.**
###Code
anno.loc['ENSG00000181449.3']
###Output
_____no_output_____
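###Markdown
As an added aside: `.loc` selects by label while `.iloc` selects by integer position, and `.loc` can also take a column label to pull out a single value. The lines below are illustrative only and just assign the results to variables.
###Code
# Added illustration of .loc vs .iloc
first_row = anno.iloc[0]                          # first row, by integer position
gene_chr = anno.loc['ENSG00000181449.3', 'chr']   # one value, by row and column label
###Output
_____no_output_____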
###Markdown
**Save the information in the** *'start'* **column of the anno dataframe in a new variable, called** *start_column* **. Print start_column.**
###Code
start_column = anno['start']
print(start_column)
###Output
gene
ENSG00000000003.14 100630765
ENSG00000000005.5 100589213
ENSG00000000419.12 50935098
ENSG00000000457.13 169853881
ENSG00000000460.16 169780373
...
ENSG00000284596.1 102471469
ENSG00000284597.1 7931256
ENSG00000284598.1 7420360
ENSG00000284599.1 16979511
ENSG00000284600.1 1795567
Name: start, Length: 62803, dtype: int64
###Markdown
One of the most important things about working with genomics data is double checking that the files you are working with have the data you expect them to have. For instance: what values are present in the 'chr' column of our annotation dataframe? How many chromosome values are in this column? What chromosomes would you expect to be there? Are there any other chromosomes present, and if so, what are they? As a hint, you're looking for unique values in that column of the dataframe (and then also the length of the result).
###Code
print(len(anno['chr'].unique()))
print(anno['chr'].unique())
###Output
380
['X' '20' '1' '6' '3' '7' '12' '11' '4' '17' '2' '16' '8' '19' '9' '13'
'14' '5' '22' '10' 'Y' '18' '15' 'CHR_HSCHR6_MHC_MCF_CTG1'
'CHR_HSCHR6_MHC_QBL_CTG1' 'CHR_HSCHR6_MHC_DBB_CTG1'
'CHR_HSCHR6_MHC_SSTO_CTG1' 'CHR_HSCHR6_MHC_COX_CTG1' '21'
'CHR_HSCHR6_MHC_MANN_CTG1' 'CHR_HSCHR4_6_CTG12' 'MT' 'CHR_HSCHR1_5_CTG3'
'CHR_HSCHR6_MHC_APD_CTG1' 'CHR_HG1362_PATCH' 'CHR_HSCHR15_3_CTG8'
'CHR_HSCHR19_4_CTG3_1' 'CHR_HSCHR16_1_CTG1' 'CHR_HSCHR1_2_CTG3'
'CHR_HG2128_PATCH' 'CHR_HSCHR13_1_CTG3' 'CHR_HSCHR16_2_CTG3_1'
'CHR_HSCHR3_1_CTG2_1' 'CHR_HSCHR21_2_CTG1_1' 'CHR_HSCHR17_4_CTG4'
'CHR_HSCHR12_2_CTG2' 'CHR_HSCHR12_3_CTG2_1' 'CHR_HSCHR1_2_CTG31'
'CHR_HG142_HG150_NOVEL_TEST' 'CHR_HG151_NOVEL_TEST'
'CHR_HSCHR16_1_CTG3_1' 'CHR_HSCHR17_1_CTG4' 'CHR_HSCHR1_1_CTG31'
'CHR_HSCHR7_1_CTG6' 'CHR_HSCHR12_1_CTG1' 'CHR_HSCHR22_1_CTG1'
'CHR_HSCHR12_2_CTG2_1' 'CHR_HSCHR12_1_CTG2_1' 'CHR_HSCHR18_2_CTG2'
'CHR_HSCHR18_1_CTG2_1' 'CHR_HSCHR19_1_CTG3_1' 'CHR_HSCHR18_1_CTG1_1'
'CHR_HSCHR21_4_CTG1_1' 'CHR_HSCHR22_1_CTG2' 'CHR_HSCHR1_3_CTG31'
'CHR_HSCHR17_6_CTG4' 'CHR_HSCHR17_5_CTG4' 'CHR_HSCHR18_2_CTG2_1'
'CHR_HSCHR17_1_CTG1' 'CHR_HSCHR4_1_CTG12' 'CHR_HSCHR5_1_CTG5'
'CHR_HSCHR20_1_CTG1' 'CHR_HSCHR21_3_CTG1_1' 'CHR_HSCHR9_1_CTG1'
'CHR_HSCHR4_1_CTG6' 'CHR_HSCHR3_1_CTG1' 'CHR_HSCHR3_9_CTG3'
'CHR_HSCHR22_1_CTG7' 'KI270713.1' 'KI270711.1' 'CHR_HSCHR22_2_CTG1'
'CHR_HSCHR15_4_CTG8' 'CHR_HSCHR1_1_CTG3' 'CHR_HSCHR10_1_CTG2'
'GL000205.2' 'CHR_HSCHR19KIR_LUCE_BDEL_HAP_CTG3_1' 'CHR_HSCHR10_1_CTG1'
'CHR_HSCHR19KIR_FH13_A_HAP_CTG3_1' 'CHR_HSCHR12_3_CTG2'
'CHR_HSCHR19LRC_LRC_I_CTG3_1' 'CHR_HSCHR1_ALT2_1_CTG32_1'
'CHR_HSCHR19KIR_FH08_A_HAP_CTG3_1' 'CHR_HSCHR19LRC_COX1_CTG3_1'
'CHR_HSCHR19KIR_RP5_B_HAP_CTG3_1' 'CHR_HSCHR19KIR_RSH_A_HAP_CTG3_1'
'CHR_HSCHR19LRC_LRC_T_CTG3_1' 'CHR_HSCHR19KIR_ABC08_AB_HAP_T_P_CTG3_1'
'CHR_HSCHR11_1_CTG7' 'CHR_HSCHR19KIR_G085_A_HAP_CTG3_1' 'KI270721.1'
'CHR_HSCHR14_7_CTG1' 'CHR_HSCHR19LRC_PGF1_CTG3_1' 'CHR_HSCHR14_3_CTG1'
'KI270728.1' 'CHR_HSCHR11_1_CTG8' 'CHR_HSCHR19KIR_FH15_A_HAP_CTG3_1'
'CHR_HSCHR19KIR_T7526_BDEL_HAP_CTG3_1' 'CHR_HSCHR8_3_CTG7'
'CHR_HSCHR6_1_CTG2' 'CHR_HSCHR17_7_CTG4' 'CHR_HSCHR15_1_CTG1'
'CHR_HSCHR15_1_CTG8' 'CHR_HSCHR8_3_CTG1' 'CHR_HSCHR19LRC_LRC_J_CTG3_1'
'CHR_HSCHR8_8_CTG1' 'CHR_HSCHR5_2_CTG1_1'
'CHR_HSCHR19KIR_FH15_B_HAP_CTG3_1' 'CHR_HSCHR15_5_CTG8'
'CHR_HSCHR19KIR_ABC08_AB_HAP_C_P_CTG3_1' 'CHR_HSCHR19LRC_PGF2_CTG3_1'
'CHR_HSCHR17_2_CTG2' 'GL000220.1' 'CHR_HSCHR22_1_CTG3' 'KI270733.1'
'GL000219.1' 'CHR_HSCHR19KIR_FH06_BA1_HAP_CTG3_1'
'CHR_HSCHR19KIR_FH13_BA2_HAP_CTG3_1' 'CHR_HSCHR1_4_CTG31'
'CHR_HSCHR18_ALT21_CTG2_1' 'CHR_HSCHR5_1_CTG1_1' 'CHR_HSCHR22_3_CTG1'
'CHR_HSCHR11_1_CTG6' 'CHR_HSCHR3_3_CTG3' 'CHR_HSCHR1_2_CTG32_1'
'CHR_HSCHRX_2_CTG3' 'CHR_HSCHR19KIR_FH08_BAX_HAP_CTG3_1'
'CHR_HSCHR19KIR_G248_A_HAP_CTG3_1' 'CHR_HSCHR19LRC_COX2_CTG3_1'
'CHR_HSCHR19KIR_FH05_A_HAP_CTG3_1' 'CHR_HSCHR7_2_CTG6'
'CHR_HSCHR11_1_CTG5' 'CHR_HSCHR5_1_CTG1' 'CHR_HSCHR17_1_CTG2'
'CHR_HSCHR3_7_CTG3' 'CHR_HSCHR6_1_CTG8' 'CHR_HSCHR16_CTG2'
'CHR_HSCHR9_1_CTG2' 'CHR_HSCHR15_1_CTG3' 'CHR_HSCHR4_1_CTG9'
'CHR_HSCHR19KIR_T7526_A_HAP_CTG3_1' 'CHR_HSCHR19_1_CTG2'
'CHR_HSCHR19KIR_FH06_A_HAP_CTG3_1' 'CHR_HSCHR14_2_CTG1'
'CHR_HSCHR19KIR_RSH_BA2_HAP_CTG3_1' 'CHR_HSCHR19_3_CTG3_1'
'CHR_HSCHR19KIR_G085_BA1_HAP_CTG3_1' 'CHR_HSCHR8_5_CTG1'
'CHR_HSCHR17_10_CTG4' 'CHR_HSCHR17_2_CTG5' 'GL000216.2'
'CHR_HSCHR4_7_CTG12' 'CHR_HSCHR19KIR_GRC212_BA1_HAP_CTG3_1'
'CHR_HSCHR19KIR_GRC212_AB_HAP_CTG3_1' 'CHR_HSCHR17_1_CTG5'
'CHR_HSCHR16_3_CTG1' 'CHR_HSCHR14_1_CTG1' 'CHR_HSCHR2_3_CTG7_2'
'CHR_HSCHR5_3_CTG5' 'CHR_HSCHR15_3_CTG3' 'CHR_HSCHR10_1_CTG4'
'CHR_HSCHR19KIR_G248_BA2_HAP_CTG3_1' 'CHR_HSCHR19KIR_FH05_B_HAP_CTG3_1'
'CHR_HSCHR8_4_CTG7' 'CHR_HSCHR19LRC_LRC_S_CTG3_1' 'CHR_HSCHR19_5_CTG2'
'CHR_HSCHR19KIR_LUCE_A_HAP_CTG3_1' 'CHR_HSCHR22_1_CTG6'
'CHR_HSCHR22_1_CTG5' 'CHR_HSCHR21_5_CTG2' 'CHR_HSCHR5_2_CTG5'
'CHR_HSCHR19_2_CTG2' 'KI270727.1' 'CHR_HSCHR9_1_CTG3' 'CHR_HSCHR5_3_CTG1'
'GL000194.1' 'CHR_HSCHR19_3_CTG2' 'CHR_HSCHR5_2_CTG1' 'CHR_HSCHR2_1_CTG1'
'CHR_HSCHR7_3_CTG6' 'CHR_HSCHR11_2_CTG1' 'CHR_HSCHR20_1_CTG3'
'CHR_HSCHR17_1_CTG9' 'CHR_HSCHR12_6_CTG2_1' 'CHR_HSCHR12_5_CTG2_1'
'KI270726.1' 'CHR_HSCHR19KIR_ABC08_A1_HAP_CTG3_1' 'CHR_HSCHR3_6_CTG3'
'CHR_HSCHR8_5_CTG7' 'CHR_HSCHR17_8_CTG4' 'CHR_HSCHR1_3_CTG32_1'
'CHR_HSCHR2_1_CTG7_2' 'CHR_HSCHR6_1_CTG5' 'CHR_HSCHR6_1_CTG4'
'CHR_HSCHR12_4_CTG2' 'CHR_HSCHR3_8_CTG3' 'CHR_HSCHR12_5_CTG2'
'CHR_HSCHR3_2_CTG3' 'CHR_HSCHR8_9_CTG1' 'CHR_HSCHR7_1_CTG4_4'
'CHR_HSCHR2_2_CTG7' 'CHR_HSCHR6_8_CTG1' 'CHR_HSCHR13_1_CTG1'
'CHR_HSCHR17_2_CTG1' 'CHR_HSCHR17_3_CTG2' 'KI270734.1'
'CHR_HSCHR4_3_CTG12' 'CHR_HSCHR2_3_CTG15' 'CHR_HSCHR5_4_CTG1_1'
'CHR_HSCHR7_2_CTG4_4' 'CHR_HSCHR11_3_CTG1' 'CHR_HSCHR12_4_CTG2_1'
'GL000195.1' 'CHR_HSCHR16_4_CTG1' 'CHR_HSCHR9_1_CTG4' 'CHR_HSCHRX_1_CTG3'
'CHR_HSCHR5_4_CTG1' 'CHR_HSCHR9_1_CTG5' 'CHR_HSCHR3_1_CTG3' 'GL000225.1'
'CHR_HSCHR2_2_CTG7_2' 'CHR_HSCHR18_ALT2_CTG2_1' 'CHR_HSCHR15_2_CTG3'
'CHR_HSCHR5_6_CTG1' 'CHR_HSCHR3_2_CTG2_1' 'CHR_HSCHR2_1_CTG5'
'CHR_HSCHR6_1_CTG7' 'KI270750.1' 'CHR_HSCHR11_2_CTG1_1'
'CHR_HSCHR22_1_CTG4' 'CHR_HSCHR2_4_CTG1' 'GL000213.1' 'CHR_HSCHR3_5_CTG3'
'CHR_HSCHRX_2_CTG12' 'CHR_HSCHR15_2_CTG8' 'CHR_HSCHR8_2_CTG7'
'KI270731.1' 'CHR_HSCHR8_1_CTG6' 'CHR_HSCHR3_4_CTG3' 'CHR_HSCHR8_7_CTG1'
'KI270442.1' 'GL000218.1' 'KI270744.1' 'GL000009.2' 'CHR_HSCHR19_4_CTG2'
'CHR_HG1342_HG2282_PATCH' 'CHR_HG2046_PATCH' 'CHR_HSCHR1_4_CTG3'
'CHR_HSCHR22_4_CTG1' 'CHR_HG2288_HG2289_PATCH' 'CHR_HG2030_PATCH'
'CHR_HG2217_PATCH' 'CHR_HG126_PATCH' 'CHR_HG2066_PATCH'
'CHR_HSCHR20_1_CTG2' 'CHR_HSCHR2_2_CTG1' 'CHR_HSCHR21_6_CTG1_1'
'CHR_HG2021_PATCH' 'CHR_HG2095_PATCH' 'CHR_HSCHR3_3_CTG1'
'CHR_HSCHR6_1_CTG9' 'CHR_HSCHR2_1_CTG15' 'CHR_HSCHR3_4_CTG2_1'
'CHR_HSCHR11_1_CTG1_2' 'CHR_HSCHR4_2_CTG12' 'CHR_HG2247_PATCH'
'CHR_HG2232_PATCH' 'CHR_HSCHR5_5_CTG1' 'CHR_HG1832_PATCH'
'CHR_HSCHR21_8_CTG1_1' 'CHR_HSCHR6_1_CTG6' 'CHR_HG2104_PATCH'
'CHR_HG2233_PATCH' 'CHR_HG2058_PATCH' 'CHR_HSCHR4_1_CTG4'
'CHR_HSCHR1_1_CTG11' 'CHR_HG2291_PATCH' 'CHR_HSCHR22_5_CTG1'
'CHR_HSCHR2_1_CTG7' 'CHR_HSCHR2_2_CTG15' 'CHR_HSCHR22_8_CTG1'
'CHR_HG2191_PATCH' 'CHR_HSCHR4_4_CTG12' 'CHR_HSCHR20_1_CTG4'
'CHR_HSCHR5_7_CTG1' 'CHR_HG2062_PATCH' 'CHR_HSCHR1_1_CTG32_1'
'CHR_HSCHR3_5_CTG2_1' 'CHR_HSCHR7_1_CTG1' 'CHR_HSCHR2_6_CTG7_2'
'CHR_HSCHR4_5_CTG12' 'CHR_HSCHR2_3_CTG1' 'CHR_HG986_PATCH'
'CHR_HG2249_PATCH' 'CHR_HSCHR15_6_CTG8' 'CHR_HG2290_PATCH'
'CHR_HSCHR18_3_CTG2_1' 'CHR_HSCHR7_2_CTG1' 'CHR_HSCHR8_1_CTG1'
'CHR_HSCHR13_1_CTG5' 'CHR_HSCHR19_2_CTG3_1' 'CHR_HSCHR8_6_CTG1'
'CHR_HSCHR8_4_CTG1' 'CHR_HSCHR8_1_CTG7' 'CHR_HSCHR17_2_CTG4'
'CHR_HSCHR10_1_CTG3' 'CHR_HSCHR7_1_CTG7' 'CHR_HG2235_PATCH'
'CHR_HSCHR8_2_CTG1' 'CHR_HSCHR17_9_CTG4' 'CHR_HSCHR17_3_CTG4'
'CHR_HSCHR18_2_CTG1_1' 'CHR_HG1651_PATCH' 'CHR_HSCHR7_2_CTG7'
'CHR_HSCHR1_5_CTG32_1' 'CHR_HG2213_PATCH' 'CHR_HSCHR16_4_CTG3_1'
'CHR_HG26_PATCH' 'CHR_HSCHR6_1_CTG10' 'CHR_HSCHR22_6_CTG1'
'CHR_HSCHR16_5_CTG1' 'CHR_HSCHR18_5_CTG1_1' 'CHR_HSCHR13_1_CTG8'
'CHR_HG2334_PATCH' 'CHR_HG2072_PATCH' 'CHR_HSCHR4_9_CTG12'
'CHR_HSCHR22_7_CTG1' 'CHR_HSCHR4_2_CTG4' 'CHR_HG2116_PATCH'
'CHR_HSCHR9_1_CTG6' 'CHR_HSCHR4_8_CTG12' 'CHR_HG2239_PATCH'
'CHR_HSCHR12_2_CTG1' 'CHR_HSCHR10_1_CTG6' 'CHR_HSCHR1_3_CTG3'
'CHR_HG2023_PATCH' 'CHR_HG107_PATCH' 'CHR_HSCHR4_11_CTG12'
'CHR_HG1311_PATCH' 'CHR_HG2022_PATCH' 'CHR_HG2063_PATCH'
'CHR_HG926_PATCH' 'CHR_HSCHR19KIR_0010-5217-AB_CTG3_1'
'CHR_HSCHR1_6_CTG3' 'CHR_HSCHR19KIR_502960008-1_CTG3_1'
'CHR_HSCHR19KIR_CA01-TB01_CTG3_1' 'CHR_HSCHR19KIR_0019-4656-A_CTG3_1'
'CHR_HSCHR19KIR_0019-4656-B_CTG3_1' 'CHR_HG30_PATCH' 'CHR_HSCHR5_8_CTG1'
'CHR_HSCHR19KIR_7191059-1_CTG3_1' 'CHR_HSCHR19KIR_CA01-TA01_1_CTG3_1'
'CHR_HSCHR19KIR_502960008-2_CTG3_1' 'CHR_HSCHR19KIR_CA04_CTG3_1'
'CHR_HSCHR17_3_CTG1' 'CHR_HSCHR19KIR_HG2393_CTG3_1'
'CHR_HSCHR19KIR_HG2394_CTG3_1' 'CHR_HSCHR19KIR_CA01-TB04_CTG3_1'
'CHR_HSCHR19KIR_7191059-2_CTG3_1' 'CHR_HG2266_PATCH'
'CHR_HSCHR19KIR_CA01-TA01_2_CTG3_1' 'CHR_HSCHR17_11_CTG4'
'CHR_HSCHR19KIR_HG2396_CTG3_1' 'CHR_HSCHR4_12_CTG12' 'CHR_HSCHR9_1_CTG7'
'CHR_HG2285_HG106_HG2252_PATCH' 'CHR_HG2236_PATCH' 'CHR_HG2067_PATCH'
'CHR_HG2088_PATCH' 'CHR_HG1708_PATCH' 'CHR_HG23_PATCH']
###Markdown
Note that some of the values in the chromosome column are numbers (e.g., 1, 2, etc.) and others are strings (e.g., 'X', 'Y'). When Python imported the dataframe (pd.read_csv()), did it import the numerical chromosomes as integers or strings?
###Code
np.unique([str(type(i)) for i in anno['chr']])
###Output
_____no_output_____
###Markdown
Let's say that we want to subset this annotation to get a list of only those genes that are on the 'normal' chromosomes: autosomes, sex chromosomes, in the mitochondrial genome. Make a list that looks like this: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 'X', 'Y', 'MT'] Do it without explicitly writing out the numbers 1 to 22. Feel free to use either lists (built-in to Python) or numpy arrays (np.array()). Be sure to save the numerical chromosomes as the correct data type (integer or string) to match the data type of the values in anno['chr'].
###Code
my_chrs = [str(i) for i in range(1,23)] + ['X', 'Y','MT']
print(my_chrs)
###Output
['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', 'X', 'Y', 'MT']
###Markdown
Subset the anno dataframe to include only those genes whose chromosome annotations are in this list of chromosomes. Save this as a new dataframe called anno_filt. How big is this new annotation/how many things did we filter out? Print the first five rows of the dataframe with .head(). You should use the .isin() function, and can also use .shape to get the size of a dataframe. (You should end up with 57106 rows remaining in anno_filt)
###Code
# print(anno.shape) to get the size of the current dataframe
anno_filt = anno[anno['chr'].isin(my_chrs)]
print(anno.shape)
print(anno_filt.shape)
anno_filt.head()
###Output
(62803, 8)
(57106, 8)
###Markdown
How many genes are on each chromosome? There are a few ways you could do this, but one is to use a Counter. 1. from the package 'collections' import Counter. https://docs.python.org/3.6/library/collections.html#collections.Counter 2. create a new variable called chr_count that is a Counter, and pass in the chr column of your dataframe to your counter to get the counts of how many times each chromosome is found. 3. print the results
###Code
from collections import Counter
chr_count = Counter(anno_filt['chr'])
for i in my_chrs:
print(i, chr_count[i])
###Output
1 5191
2 3919
3 2992
4 2466
5 2796
6 2823
7 2846
8 2335
9 2226
10 2177
11 3172
12 2852
13 1283
14 2194
15 2105
16 2373
17 2918
18 1123
19 2877
20 1381
21 821
22 1329
X 2351
Y 519
MT 37
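###Markdown
As an added note, pandas also has a built-in shortcut for the same tally: calling `value_counts()` on the column returns the number of rows per chromosome, sorted by count.
###Code
# Added alternative to the Counter approach above
chr_value_counts = anno_filt['chr'].value_counts()
###Output
_____no_output_____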
###Markdown
Part 3: ENCODE data **Import the file 'all_ENCODE_metadata.tsv.gz' into a dataframe called encode. Set the index column to be the file accession number, and print the first rows with .head()**
###Code
fn = 'all_ENCODE_metadata.tsv.gz'
encode = pd.read_csv(path + fn, sep='\t')
encode = encode.set_index('File accession')
encode.head()
###Output
/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3063: DtypeWarning: Columns (13,14,15,23,26,34) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
How big is this dataframe? What type of information is present in the rows? Columns?
###Code
print(encode.shape)
print(encode.columns)
###Output
(301366, 49)
Index(['File format', 'Output type', 'Experiment accession', 'Assay',
'Biosample term id', 'Biosample term name', 'Biosample type',
'Biosample life stage', 'Biosample sex', 'Biosample Age',
'Biosample organism', 'Biosample treatments',
'Biosample subcellular fraction term name', 'Biosample phase',
'Biosample synchronization stage', 'Experiment target',
'Antibody accession', 'Library made from', 'Library depleted in',
'Library extraction method', 'Library lysis method',
'Library crosslinking method', 'Library strand specific',
'Experiment date released', 'Project', 'RBNS protein concentration',
'Library fragmentation method', 'Library size range',
'Biological replicate(s)', 'Technical replicate', 'Read length',
'Mapped read length', 'Run type', 'Paired end', 'Paired with',
'Derived from', 'Size', 'Lab', 'md5sum', 'dbxrefs', 'File download URL',
'Assembly', 'Platform', 'Controlled by', 'File Status', 'Audit WARNING',
'Audit INTERNAL_ACTION', 'Audit NOT_COMPLIANT', 'Audit ERROR'],
dtype='object')
###Markdown
Create a new dataframe called encode_filt that includes only samples that: - are from human (homo sapiens) - do not have audit errors. Specifically, only include rows where encode['Audit ERROR'].isnull() is True. For the first criteria, you may need to look at what columns are present in the dataframe to choose the appropriate ones to filter on. Your dataframe should have 223543 rows.
###Code
# I'm providing the answer here, so that you can see how to do this
# first, we're creating a variable m1 which asks "is the value equal to 'Homo sapiens'
# for each value in the 'Biosample organism' column"
m1 = encode['Biosample organism'] == 'Homo sapiens'
# second, we're creating a variable m2 which asks "is there no value, i.e., no error included'
# for each value in the 'Audit ERROR' column"
m2 = encode['Audit ERROR'].isnull()
# now, we're asking if for each element (note that these correspond to rows of the dataframe)
# are both criteria True?
mask = m1 & m2
# now, we're actually filtering the dataframe
encode_filt = encode.loc[mask]
print(encode_filt.shape)
###Output
(223543, 49)
###Markdown
Breaking briefly from the ENCODE data, to try to illustrate what is going on here: Here, we've created three arrays, with three values each. We're performing an "and" operation - meaning that if everything is True, it will return True; otherwise, it will return False.
###Code
# example on how to merge multiple masks
a = np.array([True, True, False])
b = np.array([False, True, False])
c = np.array([True, True, True])
d = a & b & c
print(d)
# note that you can't do this with lists
# (try it yourself and see what happens)
# arrays make our lives easier
###Output
[False True False]
###Markdown
What types of RNA-seq data are available? Create a dataframe called rna that only has rows that satisfy all of the following criteria: - They come from RNA-seq experiments. - Their libraries are made from RNA - They are depleted in rRNA - They are fastq files You will need to look at both the column listings, as well as the unique values in these columns, to be able to know what values to filter on. You will want to look at four columns, create a boolean mask for each of them (an array/series containing either True or False for each value), and then make a final mask that contains only values where all four sub-masks were True. Your final 'rna' dataframe should have 1017 rows.
###Code
m1 = encode_filt['Assay'] == 'RNA-seq'
m2 = encode_filt['Library made from'] == 'RNA'
m3 = encode_filt['Library depleted in'] == 'rRNA'
m4 = encode_filt['File format'] == 'fastq'
mask = m1 & m2 & m3 & m4
rna = encode_filt[mask]
print(rna.shape)
###Output
(1017, 49)
###Markdown
Get a list of the unique biosample term names in the rna dataframe. In other words, a list of biosample term names for which there exists RNA-seq data that satisfied our above criteria.
###Code
in_rna = rna['Biosample term name'].unique()
print(in_rna)
###Output
['pericardium fibroblast' 'pulmonary artery endothelial cell' 'K562'
'gastrocnemius medialis' 'metanephros' 'thyroid gland' 'body of pancreas'
'airway epithelial cell' 'urinary bladder' 'tongue' 'IMR-90'
'skeletal muscle tissue' 'esophagus muscularis mucosa' 'HeLa-S3'
'dermis microvascular lymphatic vessel endothelial cell' 'heart'
'smooth muscle cell of bladder' 'spinal cord' 'MCF-7' 'sigmoid colon'
'uterus' 'lung' 'skin of body' 'SJSA1' 'A549' 'stomach' 'M059J'
'suprapubic skin' 'fibroblast of lung' 'SK-N-SH' 'RPMI-7951'
'hair follicular keratinocyte' 'omental fat pad' 'adrenal gland'
'vein endothelial cell' 'keratinocyte' 'skeletal muscle satellite cell'
'HepG2' 'subcutaneous preadipocyte' 'subcutaneous adipose tissue'
'fibroblast of villous mesenchyme' 'spleen' 'cerebellum' 'temporal lobe'
'endothelial cell of coronary artery' 'HT1080'
'smooth muscle cell of the umbilical artery' 'right lobe of liver'
'transverse colon' 'umbilical cord' 'occipital lobe'
'glomerular endothelial cell' 'tibial nerve' 'upper lobe of left lung'
'ovary' 'A172' 'melanocyte of skin' 'esophagus squamous epithelium'
'cardiac ventricle fibroblast' 'testis' 'pericyte cell'
'bronchus fibroblast of lung' 'smooth muscle cell of the coronary artery'
'hair follicle dermal papilla cell' 'vagina' 'G401' 'ascending aorta'
"Peyer's patch" 'neural cell' 'heart left ventricle'
'gastroesophageal sphincter' 'frontal cortex'
"mesenchymal stem cell of Wharton's jelly"
'smooth muscle cell of the pulmonary artery' 'breast epithelium'
'parietal lobe' 'mesenchymal stem cell of adipose'
'thoracic aorta endothelial cell' 'placental epithelial cell'
'camera-type eye' 'mammary microvascular endothelial cell'
'kidney epithelial cell' 'cardiac atrium fibroblast'
'dermis blood vessel endothelial cell' 'aortic smooth muscle cell'
'mesangial cell' 'A375' 'bladder microvascular endothelial cell' 'HT-29'
'mesenchymal stem cell of the bone marrow' 'liver' 'diencephalon'
'skeletal muscle myoblast'
'nasal cavity respiratory epithelium epithelial cell of viscerocranial mucosa'
'smooth muscle cell of trachea'
'dermis lymphatic vessel endothelial cell'
'right atrium auricular region' 'lung microvascular endothelial cell'
'SK-N-DZ' 'epithelial cell of umbilical artery' 'NCI-H460' 'H1-hESC'
'epithelial cell of proximal tubule'
'fibroblast of the aortic adventitia'
'endometrial microvascular endothelial cells'
'epithelial cell of alveolus of lung' 'prostate gland'
'renal cortical epithelial cell' 'Caki2' 'MG63'
'hematopoietic multipotent progenitor cell' 'foreskin fibroblast'
'SK-MEL-5' 'LHCN-M2' 'lower leg skin' 'mammary epithelial cell'
'uterine smooth muscle cell' 'bronchial smooth muscle cell'
'tracheal epithelial cell' 'myometrial cell' 'regular cardiac myocyte'
'Daoy' 'thoracic aorta' 'articular chondrocyte of knee joint'
'hepatocyte' 'astrocyte' 'SJCRH30' 'GM12878' 'mononuclear cell' 'H4'
'smooth muscle cell' 'H7-hESC' 'myocyte' 'osteoblast'
'neural progenitor cell' 'myotube' 'fibroblast of dermis'
'induced pluripotent stem cell' 'CD14-positive monocyte' 'Karpas-422'
'bronchial epithelial cell' 'endothelial cell of umbilical vein' 'B cell'
'cardiac muscle cell' 'PC-3' 'bipolar neuron' 'OCI-LY7'
'fibroblast of arm']
###Markdown
What types of ChIP-seq data are available? Create a dataframe called chip that only has rows that satisfy all of the following criteria: - They come from ChIP-seq experiments - The ChIP-seq target is H3K27ac-human - The file format is bed narrowPeak - The output type is replicated peaks - The bed files were aligned to the GRCh38 assembly. Your final dataframe should have 80 rows.
###Code
m1 = encode_filt['Assay'] == 'ChIP-seq'
m2 = encode_filt['Experiment target'] == 'H3K27ac-human'
m3 = encode_filt['File format'] == 'bed narrowPeak'
m4 = encode_filt['Output type'] == 'replicated peaks'
m5 = encode_filt['Assembly'] == 'GRCh38'
mask = m1 & m2 & m3 & m4 & m5
chip = encode_filt.loc[mask]
print(chip.shape)
###Output
(80, 49)
###Markdown
Get a list of the unique biosample term names in the chip dataframe.
###Code
in_chip = chip['Biosample term name'].unique()
print(in_chip)
###Output
['ACC112' 'neural cell' 'RWPE1' '22Rv1' 'endodermal cell'
'gastrocnemius medialis' 'GM12878' 'SUDHL6' 'MCF-7' 'KMS-11'
'thoracic aorta' 'KOPT-K1' 'RWPE2' 'neutrophil' 'DND-41' 'Loucy' 'VCaP'
'DOHH2' 'OCI-LY1' 'OCI-LY3' 'A549' 'keratinocyte' 'SK-N-SH' 'C4-2B'
'epithelial cell of prostate' 'HCT116' 'smooth muscle cell'
'neuroepithelial stem cell' 'mid-neurogenesis radial glial cells'
'fibroblast of dermis' 'H9' 'neural progenitor cell' 'osteoblast'
'radial glial cell' 'induced pluripotent stem cell' 'Karpas-422'
'astrocyte' 'mammary epithelial cell' 'CD14-positive monocyte' 'Panc1'
'thyroid gland' 'HeLa-S3' 'myotube' 'IMR-90' 'H1-hESC' 'A673'
'mesenchymal stem cell' 'MM.1S' 'hepatocyte' 'iPS DF 19.11' 'B cell'
'mesendoderm' 'endothelial cell of umbilical vein' 'cardiac muscle cell'
'body of pancreas' 'PC-3' 'skeletal muscle myoblast' 'trophoblast cell'
'SK-N-MC' 'bipolar neuron' 'adrenal gland' 'right lobe of liver' 'PC-9'
'neural stem progenitor cell' 'fibroblast of lung' 'OCI-LY7' 'vagina'
'fibroblast of arm']
###Markdown
Now, get a list of the biosample term names which are shared between the two lists. In other words, find the intersection of biosample term names with RNA and ChIP data satisfying our various criteria. How many samples are there in this list? I've provided one way to do this below using list comprehensions - there are many other ways to do this, such as converting the lists to sets, and then finding the intersection of those sets.
###Code
list_1 = ['a','b','c','d','e','f','g']
list_2 = ['d','e','f','g','h','i','j']
list_3 = [i for i in list_1 if i in list_2]
print(list_3)
in_both = [i for i in in_rna if i in in_chip]
print(len(in_both))
print(sorted(in_both))
###Output
36
['A549', 'B cell', 'CD14-positive monocyte', 'GM12878', 'H1-hESC', 'HeLa-S3', 'IMR-90', 'Karpas-422', 'MCF-7', 'OCI-LY7', 'PC-3', 'SK-N-SH', 'adrenal gland', 'astrocyte', 'bipolar neuron', 'body of pancreas', 'cardiac muscle cell', 'endothelial cell of umbilical vein', 'fibroblast of arm', 'fibroblast of dermis', 'fibroblast of lung', 'gastrocnemius medialis', 'hepatocyte', 'induced pluripotent stem cell', 'keratinocyte', 'mammary epithelial cell', 'myotube', 'neural cell', 'neural progenitor cell', 'osteoblast', 'right lobe of liver', 'skeletal muscle myoblast', 'smooth muscle cell', 'thoracic aorta', 'thyroid gland', 'vagina']
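###Markdown
As mentioned above, converting the lists to sets gives an equivalent way to take this intersection; here is that added variant.
###Code
# Added: the set-based intersection mentioned in the text
in_both_set = set(in_rna) & set(in_chip)
# len(in_both_set) should match the 36 names found with the list comprehension
###Output
_____no_output_____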
###Markdown
The sample 'gastrocnemius medialis' should be in your list. Print the data in the rna and chip dataframes that are from this sample.
###Code
print(rna[rna['Biosample term name'] == 'gastrocnemius medialis'])
print('\n')
print(chip[chip['Biosample term name'] == 'gastrocnemius medialis'])
###Output
File format Output type Experiment accession Assay \
File accession
ENCFF173AFN fastq reads ENCSR609NZM RNA-seq
ENCFF054KNM fastq reads ENCSR609NZM RNA-seq
ENCFF387IXO fastq reads ENCSR853BNH RNA-seq
ENCFF307UCJ fastq reads ENCSR853BNH RNA-seq
ENCFF004CNM fastq reads ENCSR853BNH RNA-seq
ENCFF086LCO fastq reads ENCSR853BNH RNA-seq
ENCFF660SLV fastq reads ENCSR678TMV RNA-seq
ENCFF751QEC fastq reads ENCSR678TMV RNA-seq
ENCFF139SHO fastq reads ENCSR967JPI RNA-seq
ENCFF825PCO fastq reads ENCSR967JPI RNA-seq
Biosample term id Biosample term name Biosample type \
File accession
ENCFF173AFN UBERON:0011907 gastrocnemius medialis tissue
ENCFF054KNM UBERON:0011907 gastrocnemius medialis tissue
ENCFF387IXO UBERON:0011907 gastrocnemius medialis tissue
ENCFF307UCJ UBERON:0011907 gastrocnemius medialis tissue
ENCFF004CNM UBERON:0011907 gastrocnemius medialis tissue
ENCFF086LCO UBERON:0011907 gastrocnemius medialis tissue
ENCFF660SLV UBERON:0011907 gastrocnemius medialis tissue
ENCFF751QEC UBERON:0011907 gastrocnemius medialis tissue
ENCFF139SHO UBERON:0011907 gastrocnemius medialis tissue
ENCFF825PCO UBERON:0011907 gastrocnemius medialis tissue
Biosample life stage Biosample sex Biosample Age ... \
File accession ...
ENCFF173AFN adult female 51 year ...
ENCFF054KNM adult female 51 year ...
ENCFF387IXO adult male 37 year ...
ENCFF307UCJ adult male 37 year ...
ENCFF004CNM adult male 37 year ...
ENCFF086LCO adult male 37 year ...
ENCFF660SLV adult female 53 year ...
ENCFF751QEC adult female 53 year ...
ENCFF139SHO adult male 54 year ...
ENCFF825PCO adult male 54 year ...
dbxrefs \
File accession
ENCFF173AFN SRA:SRR4422023
ENCFF054KNM SRA:SRR4422023
ENCFF387IXO SRA:SRR4422372
ENCFF307UCJ SRA:SRR4422372
ENCFF004CNM SRA:SRR4422371
ENCFF086LCO SRA:SRR4422371
ENCFF660SLV SRA:SRR4422107
ENCFF751QEC SRA:SRR4422107
ENCFF139SHO SRA:SRR4422532
ENCFF825PCO SRA:SRR4422532
File download URL Assembly \
File accession
ENCFF173AFN https://www.encodeproject.org/files/ENCFF173AF... NaN
ENCFF054KNM https://www.encodeproject.org/files/ENCFF054KN... NaN
ENCFF387IXO https://www.encodeproject.org/files/ENCFF387IX... NaN
ENCFF307UCJ https://www.encodeproject.org/files/ENCFF307UC... NaN
ENCFF004CNM https://www.encodeproject.org/files/ENCFF004CN... NaN
ENCFF086LCO https://www.encodeproject.org/files/ENCFF086LC... NaN
ENCFF660SLV https://www.encodeproject.org/files/ENCFF660SL... NaN
ENCFF751QEC https://www.encodeproject.org/files/ENCFF751QE... NaN
ENCFF139SHO https://www.encodeproject.org/files/ENCFF139SH... NaN
ENCFF825PCO https://www.encodeproject.org/files/ENCFF825PC... NaN
Platform Controlled by File Status Audit WARNING \
File accession
ENCFF173AFN HiSeq 2500 /files/ENCFF001RTP/ released NaN
ENCFF054KNM HiSeq 2500 /files/ENCFF001RTP/ released NaN
ENCFF387IXO HiSeq 2000 /files/ENCFF001RTP/ released NaN
ENCFF307UCJ HiSeq 2000 /files/ENCFF001RTP/ released NaN
ENCFF004CNM HiSeq 2000 /files/ENCFF001RTP/ released NaN
ENCFF086LCO HiSeq 2000 /files/ENCFF001RTP/ released NaN
ENCFF660SLV HiSeq 2500 /files/ENCFF001RTP/ released NaN
ENCFF751QEC HiSeq 2500 /files/ENCFF001RTP/ released NaN
ENCFF139SHO HiSeq 2500 /files/ENCFF001RTP/ released NaN
ENCFF825PCO HiSeq 2500 /files/ENCFF001RTP/ released NaN
Audit INTERNAL_ACTION Audit NOT_COMPLIANT Audit ERROR
File accession
ENCFF173AFN NaN NaN NaN
ENCFF054KNM NaN NaN NaN
ENCFF387IXO NaN NaN NaN
ENCFF307UCJ NaN NaN NaN
ENCFF004CNM NaN NaN NaN
ENCFF086LCO NaN NaN NaN
ENCFF660SLV NaN NaN NaN
ENCFF751QEC NaN NaN NaN
ENCFF139SHO NaN NaN NaN
ENCFF825PCO NaN NaN NaN
[10 rows x 49 columns]
File format Output type Experiment accession \
File accession
ENCFF393HCO bed narrowPeak replicated peaks ENCSR736ALU
ENCFF718ILR bed narrowPeak replicated peaks ENCSR801IPH
ENCFF369MQM bed narrowPeak replicated peaks ENCSR948YYZ
Assay Biosample term id Biosample term name \
File accession
ENCFF393HCO ChIP-seq UBERON:0011907 gastrocnemius medialis
ENCFF718ILR ChIP-seq UBERON:0011907 gastrocnemius medialis
ENCFF369MQM ChIP-seq UBERON:0011907 gastrocnemius medialis
Biosample type Biosample life stage Biosample sex \
File accession
ENCFF393HCO tissue adult female
ENCFF718ILR tissue adult male
ENCFF369MQM tissue adult male
Biosample Age ... dbxrefs \
File accession ...
ENCFF393HCO 53 year ... NaN
ENCFF718ILR 37 year ... NaN
ENCFF369MQM 54 year ... NaN
File download URL Assembly \
File accession
ENCFF393HCO https://www.encodeproject.org/files/ENCFF393HC... GRCh38
ENCFF718ILR https://www.encodeproject.org/files/ENCFF718IL... GRCh38
ENCFF369MQM https://www.encodeproject.org/files/ENCFF369MQ... GRCh38
Platform Controlled by File Status \
File accession
ENCFF393HCO NaN NaN released
ENCFF718ILR NaN NaN released
ENCFF369MQM NaN NaN released
Audit WARNING \
File accession
ENCFF393HCO NaN
ENCFF718ILR mild to moderate bottlenecking, moderate libra...
ENCFF369MQM mild to moderate bottlenecking
Audit INTERNAL_ACTION \
File accession
ENCFF393HCO biological replicates with identical biosample
ENCFF718ILR NaN
ENCFF369MQM biological replicates with identical biosample
Audit NOT_COMPLIANT Audit ERROR
File accession
ENCFF393HCO NaN NaN
ENCFF718ILR NaN NaN
ENCFF369MQM NaN NaN
[3 rows x 49 columns]
|
examples/gallery/apis/stocks_mpl.ipynb | ###Markdown
This example is meant to make it easy to compare and contrast the different APIs Panel provides to declare apps and dashboards. Specifically, it compares four different implementations of the same app using 1) the quick and easy ``interact`` function, 2) more flexible reactive functions, 3) declarative Param-based code, and 4) explicit callbacks.Before comparing the different approaches, we will first declare some components of the app that will be shared, including the title of the app, a set of stock tickers, a function to return a dataframe given the stock ``ticker`` and the rolling mean ``window_size``, and another function to return a plot given those same inputs:
###Code
# Imports assumed by this example (the notebook's original import cell is not shown here);
# these are the libraries the code below relies on.
import panel as pn
import pandas as pd
from bokeh.sampledata import stocks
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvas
pn.extension()
title = '## Stock Explorer Matplotlib'
tickers = ['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT']
def get_df(ticker, window_size):
df = pd.DataFrame(getattr(stocks, ticker))
df['date'] = pd.to_datetime(df.date)
return df.set_index('date').rolling(window=window_size).mean().reset_index()
def get_plot(ticker, window_size):
fig = Figure(figsize=(10, 6))
ax = fig.subplots()
FigureCanvas(fig) # not needed for mpl >= 3.1
df = get_df(ticker, window_size)
df.plot.line('date', 'close', ax=ax)
return fig
###Output
_____no_output_____
###Markdown
Interact In the ``interact`` model the widgets are automatically generated from the arguments to the function or by providing additional hints to the ``interact`` call. This is a very convenient way to generate a simple app, particularly when first exploring some data. However, because widgets are created implicitly based on introspecting the code, it is difficult to see how to modify the behavior. Also, to compose the different components in a custom way it is necessary to unpack the layout returned by the ``interact`` call, as we do here:
###Code
interact = pn.interact(get_plot, ticker=tickers, window_size=(1, 21, 5))
pn.Row(
pn.Column(title, interact[0]),
interact[1]
)
###Output
_____no_output_____
###Markdown
Reactive The reactive programming model is similar to the ``interact`` function but relies on the user (a) explicitly instantiating widgets, (b) declaring how those widgets relate to the function arguments (using the ``depends`` decorator), and (c) laying out the widgets and other components explicitly. In principle we could reuse the ``get_plot`` function from above here and wrap it in the decorator but to be clearer we will repeat it:
###Code
ticker = pn.widgets.Select(name='Ticker', options=tickers)
window = pn.widgets.IntSlider(name='Window Size', value=6, start=1, end=21)
@pn.depends(ticker.param.value, window.param.value)
def get_plot(ticker, window_size):
fig = Figure(figsize=(10, 6))
ax = fig.subplots()
FigureCanvas(fig) # not needed for mpl >= 3.1
df = get_df(ticker, window_size)
df.plot.line('date', 'close', ax=ax)
return fig
pn.Row(
pn.Column(title, ticker, window),
get_plot
)
###Output
_____no_output_____
###Markdown
Parameterized class Another approach expresses the app entirely as a single ``Parameterized`` class with parameters to declare the inputs, rather than explicit widgets. The parameters are independent of any GUI code, which can be important for maintaining large codebases, with parameters and functionality defined separately from any GUI or panel code. Once again the ``depends`` decorator is used to express the dependencies, but in this case the dependencies are expressed as strings referencing class parameters, not parameters of widgets. The parameters and the ``plot`` method can then be laid out independently, with Panel used only for this very last step.
###Code
import param
class StockExplorer(param.Parameterized):
ticker = param.Selector(default='AAPL', objects=tickers)
window_size = param.Integer(default=6, bounds=(1, 21))
@param.depends('ticker', 'window_size')
def plot(self):
return get_plot(self.ticker, self.window_size)
explorer = StockExplorer()
pn.Row(explorer.param, explorer.plot)
###Output
_____no_output_____
###Markdown
CallbacksThe above approaches are all reactive in some way, triggering actions whenever manipulating a widget causes a parameter to change, without users writing code to trigger callbacks explicitly. Explicit callbacks allow complete low-level control of precisely how the different components of the app are updated, but they can quickly become unmaintainable because the complexity increases dramatically as more callbacks are added. The approach works by defining callbacks using the ``.param.watch`` API that either update or replace the already rendered components when a watched parameter changes:
###Code
ticker = pn.widgets.Select(name='Ticker', options=['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT'])
window = pn.widgets.IntSlider(name='Window', value=6, start=1, end=21)
row = pn.Row(
pn.Column(title, ticker, window),
get_plot(ticker.options[0], window.value)
)
def update(event):
row[1].object = get_plot(ticker.value, window.value)
ticker.param.watch(update, 'value')
window.param.watch(update, 'value')
row.servable()
###Output
_____no_output_____
###Markdown
This example is meant to make it easy to compare and contrast the different APIs Panel provides to declare apps and dashboards. Specifically, it compares four different implementations of the same app using 1) the quick and easy ``interact`` function, 2) more flexible reactive functions, 3) declarative Param-based code, and 4) explicit callbacks.Before comparing the different approaches, we will first declare some components of the app that will be shared, including the title of the app, a set of stock tickers, a function to return a dataframe given the stock ``ticker`` and the rolling mean ``window_size``, and another function to return a plot given those same inputs:
###Code
title = '## Stock Explorer Matplotlib'
tickers = ['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT']
def get_df(ticker, window_size):
df = pd.DataFrame(getattr(stocks, ticker))
df['date'] = pd.to_datetime(df.date)
return df.set_index('date').rolling(window=window_size).mean().reset_index()
def get_plot(ticker, window_size):
fig = Figure(figsize=(10, 6))
ax = fig.subplots()
cv = FigureCanvas(fig) # not needed for mpl >= 3.1
df = get_df(ticker, window_size)
df.plot.line('date', 'close', ax=ax)
return fig
###Output
_____no_output_____
###Markdown
InteractIn the ``interact`` model the widgets are automatically generated from the arguments to the function or by providing additional hints to the ``interact`` call. This is a very convenient way to generate a simple app, particularly when first exploring some data. However, because widgets are created implicitly based on introspecting the code, it is difficult to see how to modify the behavior. Also, to compose the different components in a custom way it is necessary to unpack the layout returned by the ``interact`` call, as we do here:
###Code
interact = pn.interact(get_plot, ticker=tickers, window_size=(1, 21, 5))
pn.Row(
pn.Column(title, interact[0]),
interact[1]
)
###Output
_____no_output_____
###Markdown
ReactiveThe reactive programming model is similar to the ``interact`` function but relies on the user (a) explicitly instantiating widgets, (b) declaring how those widgets relate to the function arguments (using the ``depends`` decorator), and (c) laying out the widgets and other components explicitly. In principle we could reuse the ``get_plot`` function from above here and wrap it in the decorator but to be clearer we will repeat it:
###Code
ticker = pn.widgets.Select(name='Ticker', options=tickers)
window = pn.widgets.IntSlider(name='Window Size', value=6, start=1, end=21)
@pn.depends(ticker.param.value, window.param.value)
def get_plot(ticker, window_size):
fig = Figure(figsize=(10, 6))
ax = fig.subplots()
cv = FigureCanvas(fig) # not needed for mpl >= 3.1
df = get_df(ticker, window_size)
df.plot.line('date', 'close', ax=ax)
return fig
pn.Row(
pn.Column(title, ticker, window),
get_plot
)
###Output
_____no_output_____
###Markdown
Parameterized classAnother approach expresses the app entirely as a single ``Parameterized`` class with parameters to declare the inputs, rather than explicit widgets. The parameters are independent of any GUI code, which can be important for maintaining large codebases, with parameters and functionality defined separately from any GUI or panel code. Once again the ``depends`` decorator is used to express the dependencies, but in this case the dependencies are expressed as strings referencing class parameters, not parameters of widgets. The parameters and the ``plot`` method can then be laid out independently, with Panel used only for this very last step.
###Code
import param
class StockExplorer(param.Parameterized):
ticker = param.Selector(default='AAPL', objects=tickers)
window_size = param.Integer(default=6, bounds=(1, 21))
@param.depends('ticker', 'window_size')
def plot(self):
return get_plot(self.ticker, self.window_size)
explorer = StockExplorer()
pn.Row(explorer.param, explorer.plot)
###Output
_____no_output_____
###Markdown
CallbacksThe above approaches are all reactive in some way, triggering actions whenever manipulating a widget causes a parameter to change, without users writing code to trigger callbacks explicitly. Explicit callbacks allow complete low-level control of precisely how the different components of the app are updated, but they can quickly become unmaintainable because the complexity increases dramatically as more callbacks are added. The approach works by defining callbacks using the ``.param.watch`` API that either update or replace the already rendered components when a watched parameter changes:
###Code
ticker = pn.widgets.Select(name='Ticker', options=['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT'])
window = pn.widgets.IntSlider(name='Window', value=6, start=1, end=21)
row = pn.Row(
pn.Column(title, ticker, window),
get_plot(ticker.options[0], window.value)
)
def update(event):
row[1].object = get_plot(ticker.value, window.value)
ticker.param.watch(update, 'value')
window.param.watch(update, 'value')
row.servable()
###Output
_____no_output_____
###Markdown
This example is meant to make it easy to compare and contrast the different APIs Panel provides to declare apps and dashboards. Specifically, it compares four different implementations of the same app using 1) the quick and easy ``interact`` function, 2) more flexible reactive functions, 3) declarative Param-based code, and 4) explicit callbacks.Before comparing the different approaches, we will first declare some components of the app that will be shared, including the title of the app, a set of stock tickers, a function to return a dataframe given the stock ``ticker`` and the rolling mean ``window_size``, and another function to return a plot given those same inputs:
###Code
title = '## Stock Explorer Matplotlib'
tickers = ['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT']
def get_df(ticker, window_size):
df = pd.DataFrame(getattr(stocks, ticker))
df['date'] = pd.to_datetime(df.date)
return df.set_index('date').rolling(window=window_size).mean().reset_index()
def get_plot(ticker, window_size):
df = get_df(ticker, window_size)
df.plot.line('date', 'close', figsize=(10, 6))
fig = plt.gcf()
plt.close()
return fig
###Output
_____no_output_____
###Markdown
InteractIn the ``interact`` model the widgets are automatically generated from the arguments to the function or by providing additional hints to the ``interact`` call. This is a very convenient way to generate a simple app, particularly when first exploring some data. However, because widgets are created implicitly based on introspecting the code, it is difficult to see how to modify the behavior. Also, to compose the different components in a custom way it is necessary to unpack the layout returned by the ``interact`` call, as we do here:
###Code
interact = pn.interact(get_plot, ticker=tickers, window_size=(1, 21, 5))
pn.Row(
pn.Column(title, interact[0]),
interact[1]
)
###Output
_____no_output_____
###Markdown
ReactiveThe reactive programming model is similar to the ``interact`` function but relies on the user (a) explicitly instantiating widgets, (b) declaring how those widgets relate to the function arguments (using the ``depends`` decorator), and (c) laying out the widgets and other components explicitly. In principle we could reuse the ``get_plot`` function from above here and wrap it in the decorator but to be clearer we will repeat it:
###Code
ticker = pn.widgets.Select(name='Ticker', options=tickers)
window = pn.widgets.IntSlider(name='Window Size', value=6, start=1, end=21)
@pn.depends(ticker.param.value, window.param.value)
def get_plot(ticker, window_size):
df = get_df(ticker, window_size)
df.plot.line('date', 'close', figsize=(10, 6))
fig = plt.gcf()
plt.close()
return fig
pn.Row(
pn.Column(title, ticker, window),
get_plot
)
###Output
_____no_output_____
###Markdown
Parameterized classAnother approach expresses the app entirely as a single ``Parameterized`` class with parameters to declare the inputs, rather than explicit widgets. The parameters are independent of any GUI code, which can be important for maintaining large codebases, with parameters and functionality defined separately from any GUI or panel code. Once again the ``depends`` decorator is used to express the dependencies, but in this case the dependencies are expressed as strings referencing class parameters, not parameters of widgets. The parameters and the ``plot`` method can then be laid out independently, with Panel used only for this very last step.
###Code
import param
class StockExplorer(param.Parameterized):
ticker = param.Selector(default='AAPL', objects=tickers)
window_size = param.Integer(default=6, bounds=(1, 21))
@param.depends('ticker', 'window_size')
def plot(self):
return get_plot(self.ticker, self.window_size)
explorer = StockExplorer()
pn.Row(explorer.param, explorer.plot)
###Output
_____no_output_____
###Markdown
CallbacksThe above approaches are all reactive in some way, triggering actions whenever manipulating a widget causes a parameter to change, without users writing code to trigger callbacks explicitly. Explicit callbacks allow complete low-level control of precisely how the different components of the app are updated, but they can quickly become unmaintainable because the complexity increases dramatically as more callbacks are added. The approach works by defining callbacks using the ``.param.watch`` API that either update or replace the already rendered components when a watched parameter changes:
###Code
ticker = pn.widgets.Select(name='Ticker', options=['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT'])
window = pn.widgets.IntSlider(name='Window', value=6, start=1, end=21)
row = pn.Row(
pn.Column(title, ticker, window),
get_plot(ticker.options[0], window.value)
)
def update(event):
row[1].object = get_plot(ticker.value, window.value)
ticker.param.watch(update, 'value')
window.param.watch(update, 'value')
row.servable()
###Output
_____no_output_____ |
solutions/3_NeuralNets_PyTorch-Solution.ipynb | ###Markdown
3. Neural Networks with PyTorch 1. Prepare data
###Code
import pandas as pd
df = pd.read_csv("../data/titanic.csv")
cols = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "Survived"]
df = df[cols]
df = df.dropna()
X = df.drop("Survived", axis=1)
X = pd.get_dummies(X, columns=["Sex", "Embarked"]) # one-hot-encode
print(X.shape)
y = df["Survived"]
from sklearn.model_selection import train_test_split
X = X.values # to numpy-array
y = y.values # to numpy-array
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
2. Model Training
###Code
import torch
torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
train_x = torch.Tensor(X_train).float().to(device)
test_x = torch.Tensor(X_test).float().to(device)
train_y = torch.Tensor(y_train).long().to(device)
test_y = torch.Tensor(y_test).long().to(device)
import torch.nn as nn
class DeepNeuralNetwork(nn.Module): # the class has to inherit from nn.Module
def __init__(self):
super(DeepNeuralNetwork, self).__init__() # calling super constructor
# defining layers
self.hidden1 = nn.Linear(10, 15)
self.output = nn.Linear(15, 2)
def forward(self, x):
x = self.hidden1(x)
x = torch.relu(x)
x = self.output(x)
return x
import time
dnn = DeepNeuralNetwork()
dnn.to(device) # copy the model to the device
dnn.train() # set model into training mode
no_epochs = 200
learning_rate = 0.001
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(dnn.parameters(), lr=learning_rate)
start_time = time.time()
losses = []
for iteration in range(no_epochs):
optimizer.zero_grad()
    y_hat = dnn(train_x) # we predict on all data points (= batch gradient descent); see the mini-batch sketch after this code block
loss = loss_func(y_hat, train_y) # calculate the loss
loss.backward() # backpropagate the loss to calculate gradients
optimizer.step() # update the weights using these gradients
losses.append(loss.item())
if iteration % 20 == 0:
print(f"Loss in epoch {iteration} is {loss.item()}")
import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(range(0, no_epochs), losses)
plt.xlabel('number of epochs')
plt.ylabel('loss')
###Output
_____no_output_____
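###Markdown
The loop above computes the loss on the entire training set in every step (batch gradient descent). Purely as an illustrative sketch that is not part of the original solution, the same data could instead be fed in mini-batches via `TensorDataset`/`DataLoader`; the names `dnn_mb` and `opt_mb` below are made up for this example.
###Code
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical mini-batch variant of the training loop above
train_ds = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

dnn_mb = DeepNeuralNetwork().to(device)   # a fresh model so the one trained above stays untouched
opt_mb = torch.optim.SGD(dnn_mb.parameters(), lr=learning_rate)

for epoch in range(20):                   # fewer epochs, since each epoch now performs many updates
    for xb, yb in train_loader:
        opt_mb.zero_grad()
        loss = loss_func(dnn_mb(xb), yb)  # loss on a single mini-batch
        loss.backward()
        opt_mb.step()
###Output
_____no_output_____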
###Markdown
After the network is trained, we can use it to predict on the test data.
###Code
dnn.eval() # set network to evaluation mode
y_pred = dnn(test_x)
predicted = torch.argmax(y_pred.data, 1)
correct = (predicted == test_y).sum().item()
print(f"Accuarcy is {100. * correct / len(test_x)}%")
###Output
Accuarcy is 66.43356643356644%
###Markdown
3. Check for overfitting
###Code
import time
dnn = DeepNeuralNetwork()
dnn.to(device) # copy the model to the device
dnn.train() # set model into training mode
no_epochs = 200
learning_rate = 0.001
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(dnn.parameters(), lr=learning_rate)
start_time = time.time()
losses = []
train_acc = []
test_acc = []
for iteration in range(no_epochs):
optimizer.zero_grad()
y_hat = dnn(train_x) # we predict on all data points (= batch gradient descent)
loss = loss_func(y_hat, train_y) # calculate the loss
loss.backward() # backpropagate the loss to calculate gradients
optimizer.step() # update the weights using these gradients
losses.append(loss.item())
with torch.no_grad(): # temporarily deactivates autograd engine
dnn.eval()
# accuracy on train
y_hat = dnn(train_x)
predicted = torch.argmax(y_hat.data, 1)
correct = (predicted == train_y).sum().item()
accuracy_train = 100. * correct / len(train_x)
train_acc.append(accuracy_train)
# accuracy on test
y_hat = dnn(test_x)
predicted = torch.argmax(y_hat.data, 1)
correct = (predicted == test_y).sum().item()
accuracy_test = 100. * correct / len(test_x)
test_acc.append(accuracy_test)
dnn.train()
if iteration % 20 == 0:
print(f"Loss in epoch {iteration} is {loss.item()}")
plt.plot(train_acc, label="Training accuracy")
plt.plot(test_acc, label="Test accuracy")
plt.legend()
###Output
_____no_output_____ |
Complete-Python-Bootcamp-master/Methods.ipynb | ###Markdown
MethodsWe've already seen a few examples of methods when learning about Object and Data Structure Types in Python. Methods are essentially functions built into objects. Later on in the course we will learn about how to create our own objects and methods using Object Oriented Programming (OOP) and classes.Methods will perform specific actions on the object and can also take arguments, just like a function. This lecture will serve as just a brief introduction to methods and get you thinking about overall design methods that we will touch back upon when we reach OOP in the course.Methods are in the form: object.method(arg1,arg2,etc...) You'll later see that we can think of methods as having an argument 'self' referring to the object itself. You can't see this argument but we will be using it later on in the course during the OOP lectures.Let's take a quick look at a few of the methods a list has:
###Code
# Create a simple list
l = [1,2,3,4,5]
###Output
_____no_output_____
###Markdown
Fortunately, with IPython and the Jupyter Notebook we can quickly see all the possible methods using the tab key. The methods for a list are:* append* count* extend* insert* pop* remove* reverse* sortLet's try out a few of them (a short sketch of some of the others appears at the end of this notebook): append() allows us to add elements to the end of a list:
###Code
l.append(6)
l
###Output
_____no_output_____
###Markdown
Great! Now how about count()? The count() method will count the number of occurrences of an element in a list.
###Code
# Check how many times 2 shows up in the list
l.count(2)
###Output
_____no_output_____
###Markdown
You can always use Shift+Tab in the Jupyter Notebook to get more help about the method. In general Python you can use the help() function:
###Code
help(l.count)
###Output
Help on built-in function count:
count(...)
L.count(value) -> integer -- return number of occurrences of value
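###Markdown
As a quick optional sketch (not part of the original lecture), a few of the other list methods mentioned above work like this:
###Code
# insert() places an element at a given index
l.insert(0, 0)
# pop() removes and returns the last element (or the element at a given index)
popped = l.pop()
# sort() and reverse() modify the list in place
l.sort()
l.reverse()
l
###Output
_____no_output_____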
|
docs/display-objects.ipynb | ###Markdown
Display Objects
###Code
By default, returns the markdown view of the code cell. All the markdown syntaxes accepted by
the `__import__('mistune').Renderer;` are understood.
###Output
_____no_output_____
###Markdown
Suppressing Markdown Output
###Code
Sometimes it is desirable to suppress the Markdown input. Supply a __single blank line__
at the beginning of the cell to suppress the output.
> The code cell below illustrates this point.
For example, this will not show as Markdown.
### Motivation
In IPython, indented code with an empty first line will raise an `Exception;` Suppressing output
with a single blank removes a mode of failure in executing the code. It also provides
a single click UX to make this change.
###Output
_____no_output_____
###Markdown
Dereferencing Objects
###Code
The IPython display objects have logic for presenting URLs and filenames. __literacy__ tweaked these
opinions slightly to provide a canonical experience with Files and URLs.
### Configuration
1. Use a config file to format our converter.
%%file config.py
2. Exclude the code cell inputs.
c.TemplateExporter.exclude_input = True
###Output
_____no_output_____ |
examples/Setting up a basic measurement using the MC.ipynb | ###Markdown
Creating an instance of the measurement controlMeasurements are controlled through the `MeasurementControl` usually instantiated as `MC`
###Code
MC = measurement_control.MeasurementControl('MC',live_plot_enabled=True, verbose=True)
MC.station = station
station.add_component(MC)
from pycqed.instrument_drivers.virtual_instruments import instrument_monitor as im
IM = im.InstrumentMonitor('IM', station)
station.add_component(IM)
# Link the instrument monitor to the MC so that it gets updated in the loop
MC.instrument_monitor('IM')
IM.update()
###Output
_____no_output_____
###Markdown
Create instruments used in the experiment Let's start by creating a dummy instrument called MockParabola.
###Code
from pycqed.instrument_drivers.physical_instruments.dummy_instruments import DummyParHolder
dummy_instrument = DummyParHolder('dummy_instrument')
station.add_component(dummy_instrument)
###Output
_____no_output_____
###Markdown
A 1D hard measurement A hard measurement is a measurement where the data acquisition loop happens in the **hard**ware.
###Code
MC.soft_avg(15)
MC.persist_mode(True)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
###Output
Starting measurement: dummy_hard
Sweep function: None_Sweep
Detector function: Dummy_Detector
100% completed elapsed time: 1.0s time left: 0.0s
###Markdown
By setting persist_mode = True we can see a copy of the last measurements
###Code
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat2 = MC.run('dummy_hard persistent')
data_set2 = dat2['dset']
###Output
Starting measurement: dummy_hard persistent
Sweep function: None_Sweep
Detector function: Dummy_Detector
100% completed elapsed time: 1.5s time left: 0.0s
###Markdown
A simple 1D soft measurement A soft measurement is a measurement where the data acquisition loop occurs in the **soft**ware
###Code
dummy_instrument.x(145/134545)
IM.update()
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dummy_instrument.noise(1)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test')
data_set = dat['dset']
# the second plot will also show the first line
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dat2= MC.run('1D test-persist')
data_set2 = dat2['dset']
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
MC.set_detector_function(det.Dummy_Detector_Soft())
dat = MC.run('1D test')
data_set = dat['dset']
from importlib import reload
reload(det)
d=det.Dummy_Detector_Soft()
d.acquire_data_point()
np.shape(d.acquire_data_point())
d=det.Dummy_Detector_Soft_diff_shape()
d.acquire_data_point()
len(np.shape(d.acquire_data_point()))
###Output
_____no_output_____
###Markdown
You can play around a bit with the options in the MC:
###Code
MC.persist_mode(True) # Turns on and off persistent plotting
MC.verbose(True)
MC.plotting_interval(.2)
MC.live_plot_enabled(True)
###Output
_____no_output_____
###Markdown
A simple 2D measurement
###Code
dummy_instrument.delay(.0001)
MC.soft_avg(4)
sweep_pts = np.linspace(-2, 2, 30)
sweep_pts_2D = np.linspace(-2, 2, 5)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_function_2D(dummy_instrument.y)
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(dummy_instrument.parabola)
dat=MC.run('test', mode='2D')
data_set = dat['dset']
###Output
Starting measurement: test
Sweep function 0: x
Sweep function 1: Sweep_function
Detector function: parabola
100% completed elapsed time: 3.9s time left: 0.0s
###Markdown
2D combination of a hard inner and soft outer loopThe hard inner loop returns 30 values
###Code
MC.soft_avg(1)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 30)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(delay=.05, noise=.1))
dat = MC.run('2D_hard', mode='2D')
data_set = dat['dset']
###Output
Starting measurement: 2D_hard
Sweep function 0: None_Sweep
Sweep function 1: None_Sweep
Detector function: Dummy_Detector
100% completed elapsed time: 11.4s time left: 0.0s
###Markdown
A Hard measurement that uses soft averaging The number of soft_averages determines how many times the experiment will be performed. Only the averaged data is plotted and saved. The number of soft-averages can be set as a parameter of the Measurement Control. We will first implement it for 1D hard sweeps (easier) and then follow for combinations of hard and soft sweeps.
###Code
MC.soft_avg(4)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
###Output
Starting measurement: dummy_hard
Sweep function: None_Sweep
Detector function: Dummy_Detector
100% completed elapsed time: 0.9s time left: 0.0s
###Markdown
2D soft averaging
###Code
MC.soft_avg(10)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 5)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.001))
dat = MC.run('dummy_hard_2D', mode='2D')
data_set = dat['dset']
###Output
Starting measurement: dummy_hard_2D
Sweep function 0: None_Sweep
Sweep function 1: None_Sweep
Detector function: Dummy_Detector
100% completed elapsed time: 13.6s time left: 0.0s
###Markdown
Starting an adaptive measurement This example does a 2D optimization over the mock parabola
###Code
from pycqed.measurement.optimization import nelder_mead
MC.soft_avg(1)
dummy_instrument
MC.set_sweep_functions([dummy_instrument.x, dummy_instrument.y])
MC.set_adaptive_function_parameters({'adaptive_function':nelder_mead,
'x0':[-5,-5], 'initial_step': [2.5, 2.5]})
dummy_instrument.noise(.5)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test', mode='adaptive')
data_set = dat['dset']
###Output
Starting measurement: 1D test
Sweep function 0: module
Sweep function 1: module
Detector function: parabola
Optimization completed in 1.357s
###Markdown
Creating an instance of the measurement controlMeasurements are controlled through the `MeasurementControl` usually instantiated as `MC`
###Code
MC = measurement_control.MeasurementControl('MC',live_plot_enabled=True, verbose=True)
MC.station = station
station.add_component(MC)
from pycqed.instrument_drivers.virtual_instruments import instrument_monitor as im
IM = im.InstrumentMonitor('IM', station)
station.add_component(IM)
# Link the instrument monitor to the MC so that it gets updated in the loop
MC.instrument_monitor('IM')
IM.update()
###Output
_____no_output_____
###Markdown
Create instruments used in the experiment Let's start by creating a dummy instrument called MockParabola.
###Code
from pycqed.instrument_drivers.physical_instruments.dummy_instruments import DummyParHolder
dummy_instrument = DummyParHolder('dummy_instrument')
station.add_component(dummy_instrument)
###Output
_____no_output_____
###Markdown
A 1D hard measurement A hard measurement is a measurement where the data acquisition loop happens in the **hard**ware.
###Code
MC.soft_avg(15)
MC.persist_mode(True)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
###Output
_____no_output_____
###Markdown
By setting persist_mode = True we can see a copy of the last measurements
###Code
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat2 = MC.run('dummy_hard persistent')
data_set2 = dat2['dset']
###Output
_____no_output_____
###Markdown
A simple 1D soft measurement A soft measurement is a measurement where the data acquisition loop occurs in the **soft**ware
###Code
dummy_instrument.x(145/134545)
IM.update()
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dummy_instrument.noise(1)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test')
data_set = dat['dset']
# the second plot will also show the first line
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dat2= MC.run('1D test-persist')
data_set2 = dat2['dset']
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
MC.set_detector_function(det.Dummy_Detector_Soft())
dat = MC.run('1D test')
data_set = dat['dset']
from importlib import reload
reload(det)
d=det.Dummy_Detector_Soft()
d.acquire_data_point()
np.shape(d.acquire_data_point())
d=det.Dummy_Detector_Soft_diff_shape()
d.acquire_data_point()
len(np.shape(d.acquire_data_point()))
###Output
_____no_output_____
###Markdown
You can play around a bit with the options in the MC:
###Code
MC.persist_mode(True) # Turns on and off persistent plotting
MC.verbose(True)
MC.plotting_interval(.2)
MC.live_plot_enabled(True)
###Output
_____no_output_____
###Markdown
A simple 2D measurement
###Code
dummy_instrument.delay(.0001)
MC.soft_avg(4)
sweep_pts = np.linspace(-2, 2, 30)
sweep_pts_2D = np.linspace(-2, 2, 5)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_function_2D(dummy_instrument.y)
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(dummy_instrument.parabola)
dat=MC.run('test', mode='2D')
data_set = dat['dset']
###Output
_____no_output_____
###Markdown
2D combination of a hard inner and soft outer loopThe hard inner loop returns 30 values
###Code
MC.soft_avg(1)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 30)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(delay=.05, noise=.1))
dat = MC.run('2D_hard', mode='2D')
data_set = dat['dset']
###Output
_____no_output_____
###Markdown
A Hard measurement that uses soft averaging The number of soft_averages determines how many times the experiment will be performed. Only the averaged data is plotted and saved. The number of soft-averages can be set as a parameter of the Measurement Control. We will first implement it for 1D hard sweeps (easier) and then follow for combinations of hard and soft sweeps.
###Code
MC.soft_avg(4)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
###Output
_____no_output_____
###Markdown
2D soft averaging
###Code
MC.soft_avg(10)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 5)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.001))
dat = MC.run('dummy_hard_2D', mode='2D')
data_set = dat['dset']
###Output
_____no_output_____
###Markdown
Starting an adaptive measurement This example does a 2D optimization over the mock parabola
###Code
from pycqed.measurement.optimization import nelder_mead
MC.soft_avg(1)
dummy_instrument
MC.set_sweep_functions([dummy_instrument.x, dummy_instrument.y])
MC.set_adaptive_function_parameters({'adaptive_function':nelder_mead,
'x0':[-5,-5], 'initial_step': [2.5, 2.5]})
dummy_instrument.noise(.5)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test', mode='adaptive')
data_set = dat['dset']
###Output
_____no_output_____ |
notebook_basicCNN.ipynb | ###Markdown
Project 2. Exploratory Analysis: COVID Detection in Chest X-rays Members: Cristina Bautista 161260, Jose Block 18935, Esteban Cabrera 17781, Byron Mota 15246 First, run this dependency
###Code
!pip3 install python-gdcm
import gdcm
###Output
_____no_output_____
###Markdown
Reset and run again
###Code
import os
import pandas as pd
import numpy as np
!pip install --upgrade numpy
import matplotlib.pyplot as plt
import matplotlib
import pydicom as dicom
import cv2
import ast
import warnings
warnings.filterwarnings("ignore")
os.listdir('/kaggle/input/siim-covid19-detection/')
df1 = pd.read_csv('/kaggle/input/siim-covid19-detection/train_image_level.csv')
df2 = pd.read_csv('/kaggle/input/siim-covid19-detection/train_study_level.csv')
df1['id_dcm'] = df1['id']
df1['id_dcm'] = df1['id'].str.replace('_image', '.dcm')
df1['id'] = df1['id'].str.replace('_image', '')
df2['id'] = df2['id'].str.replace('_study', '')
df1.head()
df1.info()
df2.head()
df2.info()
df = pd.merge(df1, df2, left_on='StudyInstanceUID', right_on='id', how='inner')
###Output
_____no_output_____
###Markdown
The following notebook was used as a guide for reading the .dcm files in this repository: https://www.kaggle.com/drcapa/siim-fisabio-rsna-covid-19-detection-starter
###Code
len(df)
path = '/kaggle/input/siim-covid19-detection/'
temp = df.loc[0, 'StudyInstanceUID']
temp
temp_depth2 = os.listdir(path+'train/'+temp)
temp_depth2
temp_train_path = path+'train/'+temp+'/'+temp_depth2[0]
temp_train_path
os.listdir(temp_train_path)
def extraction(dcm_path):
    # read a DICOM file and return its pixel array
    data_file = dicom.dcmread(dcm_path)
    img = data_file.pixel_array
    return img
def extractionPath(i):
path_train = path + 'train/' + df.loc[i, 'StudyInstanceUID']
last_folder_in_path = os.listdir(path_train)[0]
path_train = path_train + '/{}/'.format(last_folder_in_path)
img_id = df.loc[i, 'id_dcm']
complete_path_train = path_train + img_id
return complete_path_train
img_paths = []
for i in range(len(df)):
img_paths.append(extractionPath(i))
type(img_paths)
df['Image_Path'] = img_paths
df
df[df.eq('65761e66de9f').any(1)]
paths = []
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
paths.append(os.path.join(dirname, filename))
path = [x for x in paths if "test" not in x and 'csv' not in x]
path[:10]
order_paths = []
for i in df['id_dcm']:
for j in path:
        if i == j[-16:]:  # the trailing 16 characters are the '<id>.dcm' filename
order_paths.append(j)
order_paths
import cv2
import time
def extract_resized_and_origin_img_info(path_list):
img_list = []
origin_img_heights = []
origin_img_widths = []
i = 0
for path in path_list:
data_file = dicom.read_file(path)
img = data_file.pixel_array
origin_img_heights.append(img.shape[0])
origin_img_widths.append(img.shape[1])
        # scaling to the 0~255 range
        img = (img - np.min(img)) / (np.max(img) - np.min(img))
img = (img * 255).astype(np.uint8)
        # resize the large originals down to 256x256 to match the model input
        img = cv2.resize(img, (256, 256))
img_list.append(img)
        i += 1
        if i % 100 == 0:
            print('{} / {}'.format(len(img_list), len(path_list)))
            time.sleep(2)
    img_array = np.array(img_list)  # build the array once, after the loop
    return img_array, origin_img_heights, origin_img_widths
test_imgs_new, origin_img_heights2, origin_img_widths2 = extract_resized_and_origin_img_info(path[:])
type(test_imgs_new)
test_imgs_new = np.array(test_imgs_new)
test_imgs_new.shape
test_imgs_new_4dim = test_imgs_new[0:1, :, :, np.newaxis] / 255.0  # rescale to match the generator's 1./255 preprocessing
test_imgs_new_4dim.shape
x_scale_list=[]
y_scale_list=[]
if len(origin_img_heights2) == len(origin_img_widths2):
for i in range(len(origin_img_heights2)):
x_scale = 255 / origin_img_widths2[i]
x_scale_list.append(x_scale)
print(i)
y_scale = 255 / origin_img_heights2[i]
y_scale_list.append(y_scale)
clasificadores = list(df.columns[6:10])
clasificadores
!mkdir ./genData
!mkdir ./genData/NegPeu
!mkdir ./genData/Typical
!mkdir ./genData/Indeterminate
!mkdir ./genData/Atypical
imgs_NegPeu = list(df[df[clasificadores[0]]==1].index)
for idx in imgs_NegPeu:
plt.imsave('./genData/NegPeu/{}.jpg'.format(df.loc[idx,'id_x']), test_imgs_new[idx], cmap='gray')
imgs_Typical = list(df[df[clasificadores[1]]==1].index)
for idx in imgs_Typical:
plt.imsave('./genData/Typical/{}.jpg'.format(df.loc[idx,'id_x']), test_imgs_new[idx], cmap='gray')
imgs_Indeterminate = list(df[df[clasificadores[2]]==1].index)
for idx in imgs_Indeterminate:
plt.imsave('./genData/Indeterminate/{}.jpg'.format(df.loc[idx,'id_x']), test_imgs_new[idx], cmap='gray')
imgs_Atypical = list(df[df[clasificadores[3]]==1].index)
for idx in imgs_Atypical:
    plt.imsave('./genData/Atypical/{}.jpg'.format(df.loc[idx,'id_x']), test_imgs_new[idx], cmap='gray')
from tensorflow.keras.preprocessing.image import ImageDataGenerator
idatagen = ImageDataGenerator(
rescale=1. / 255,
rotation_range=3,
width_shift_range=0.05,
height_shift_range=0.05,
zoom_range=0.05,
horizontal_flip=False,
fill_mode='reflect',
validation_split=0.2
)
train_gen = idatagen.flow_from_directory(
'./genData',
batch_size=64,
target_size=(256, 256),
class_mode='categorical',
color_mode='grayscale',
subset = 'training'
)
valid_gen = idatagen.flow_from_directory(
'./genData',
batch_size = 64,
target_size = (256, 256),
class_mode = 'categorical',
color_mode='grayscale',
subset = 'validation'
)
###Output
_____no_output_____
###Markdown
Basic Classifier
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
model = Sequential([
Conv2D(64, (3,3), activation='relu', input_shape=(256, 256,1)),
MaxPooling2D(2,2),
Conv2D(64, (3,3), activation='relu'),
MaxPooling2D(2,2),
Conv2D(128, (3,3), activation='relu'),
MaxPooling2D(2,2),
Conv2D(128, (3,3), activation='relu'),
MaxPooling2D(2,2),
Flatten(),
Dropout(0.5),
Dense(128, activation='relu'),
Dense(4, activation='softmax')
])
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
checkpoint = ModelCheckpoint(
filepath = './checkpoint1.ckpt',
save_weights_only = True,
save_best_only = True,
monitor = 'val_loss',
verbose=1
)
model.fit(
train_gen,
validation_data = (valid_gen),
epochs = 20,
callbacks=[checkpoint]
)
model.load_weights('./checkpoint1.ckpt')
model.evaluate(valid_gen)
model.save('./baseCnn.h5')
model.predict(test_imgs_new_4dim)
a = input('Enter the path to an image')
a_img = cv2.imread(a, cv2.IMREAD_GRAYSCALE)        # load the image in grayscale
a_img = cv2.resize(a_img, (256, 256)) / 255.0      # match the training preprocessing
a_4dim = a_img[np.newaxis, :, :, np.newaxis]       # add batch and channel dimensions
a_4dim.shape
model.predict(a_4dim)
###Output
_____no_output_____ |
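###Markdown
As an optional sketch that is not part of the original notebook, per-class performance on the validation split could be inspected with scikit-learn's `classification_report`; a separate non-shuffled generator is assumed here so that the predictions line up with the generator's `classes` attribute.
###Code
from sklearn.metrics import classification_report
# Non-shuffled copy of the validation generator so predictions align with eval_gen.classes
eval_gen = idatagen.flow_from_directory(
    './genData',
    batch_size=64,
    target_size=(256, 256),
    class_mode='categorical',
    color_mode='grayscale',
    subset='validation',
    shuffle=False
)
probs = model.predict(eval_gen)
y_pred = np.argmax(probs, axis=1)
print(classification_report(eval_gen.classes, y_pred,
                            target_names=list(eval_gen.class_indices.keys())))
###Output
_____no_output_____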
old/src/metapaths/.ipynb_checkpoints/01a_WikiData_Nodes-checkpoint.ipynb | ###Markdown
Query WikiData to get Biomedical EntitiesWe will get the nodes (and later some edges) for our biomedical graph from WikiData Tues meet w/ Andrew- Research Notes- Run the two scripts in tandem, see how far you get with 02 of rephetio (similar or different?) To Do 1. Is the script reproducible? 2. Automate the reproduced script. 3. Does the script need improvements, now that it's been reproduced? 4. Automate the improved script. 5. Compare reproduced vs improved. 6. Test with other algorithms besides Rephetio, compare again with reproduced + improved. 7. Create a web interface.- Include mechanisms such that libraries are transferable, no matter who runs them- Fix nodes where there's a timeout (limit)- Check and include any differing nodes (for the improved section)
###Code
import pandas as pd
from pathlib import Path
# 'ModuleNotFoundError' for both lines below
## Solution: pip install git+https://github.com/mmayers12/data_tools
### https://github.com/mmayers12/data_tools (fyi also a data_tools in pip, different)
#### so far, these work okay (there isn't a conflict)
from data_tools.df_processing import char_combine_iter
from data_tools.wiki import node_query_pipeline
# New line recommended by notebook
from tqdm.autonotebook import tqdm
nodes = []
###Output
_____no_output_____
###Markdown
Diseases
###Code
q = """ SELECT DISTINCT ?disease ?diseaseLabel ?umlscui ?snomed_ct ?doid ?mesh ?mondo ?omim ?orpha
WHERE {
# Initial typing for Disease
# Either instance of Disease of has a Disease Ontology ID
{?disease wdt:P31 wd:Q12136}UNION{?disease wdt:P699 ?doid}.
OPTIONAL {?disease wdt:P2892 ?umlscui .}
OPTIONAL {?disease wdt:P5806 ?snomed_ct. }
OPTIONAL {?disease wdt:P699 ?doid. }
OPTIONAL {?disease wdt:P486 ?mesh. }
OPTIONAL {?disease wdt:P5270 ?mondo. }
OPTIONAL {?disease wdt:P492 ?omim. }
OPTIONAL {?disease wdt:P1550 ?orpha. }
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
dis_curi_map = {'umlscui': 'UMLS', 'snomed_ct': 'SNOMED', 'mesh': 'MESH',
'doid': 'DOID', 'mondo': 'MONDO', 'omim': 'OMIM', 'orpha': 'ORPHA'}
res = node_query_pipeline(q, dis_curi_map, 'disease')
# what's happening in the 'node_query_pipeline()' function that's outputting this format? (a rough sketch follows after this cell)
nodes.append(res)
nodes[0].head()
###Output
_____no_output_____
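###Markdown
`node_query_pipeline` comes from the external `data_tools` package imported at the top of this notebook. As a rough, hypothetical sketch only (not the actual `data_tools` implementation), a helper like it presumably runs the SPARQL query against the WikiData endpoint, converts the raw identifier columns into CURIEs using the supplied map, and tags every row with the node type:
###Code
import requests

def sketch_node_query_pipeline(query, curi_map, node_label,
                               endpoint='https://query.wikidata.org/sparql'):
    """Illustrative sketch only; the real data_tools function may differ."""
    r = requests.get(endpoint,
                     params={'query': query, 'format': 'json'},
                     headers={'User-Agent': 'node-query-sketch/0.1'})
    r.raise_for_status()
    rows = [{k: v['value'] for k, v in binding.items()}
            for binding in r.json()['results']['bindings']]
    out = pd.DataFrame(rows)
    # Turn raw identifiers into CURIEs, e.g. '14330' -> 'DOID:14330'
    for col, prefix in curi_map.items():
        if col in out.columns:
            out[col] = prefix + ':' + out[col].astype(str)
    out['label'] = node_label
    return out
###Output
_____no_output_____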
###Markdown
Compounds
###Code
q = """SELECT DISTINCT ?compound ?compoundLabel ?kegg_drug ?chebi ?drugbank_id ?umlscui ?chembl_id ?unii ?ikey ?pubchem_cid ?rxnorm ?mesh_supplemental_record_ui ?mesh_descriptor_ui
WHERE {
# Initial typing for Compound
?compound wdt:P31 wd:Q11173 .
# Give me all Wikidata items where the item is an instance of a chemical compound
# Whatever item up there may optionally have the following identifier + variable
OPTIONAL { ?compound wdt:P665 ?kegg_drug .}
OPTIONAL { ?compound wdt:P683 ?chebi .}
OPTIONAL { ?compound wdt:P715 ?drugbank_id .}
OPTIONAL { ?compound wdt:P2892 ?umlscui .}
OPTIONAL { ?compound wdt:P592 ?chembl_id .}
OPTIONAL { ?compound wdt:P652 ?unii .}
OPTIONAL { ?compound wdt:P3350 ?ikey .}
OPTIONAL { ?compound wdt:P662 ?pubchem_cid .}
OPTIONAL { ?compound wdt:P3345 ?rxnorm .}
OPTIONAL { ?compound wdt:P6680 ?mesh_supplemental_record_ui .}
OPTIONAL { ?compound wdt:P486 ?mesh_descriptor_ui .}
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}
limit 150000""" # limit needed here, fix later (266348 when Mike did it, but still less)
# max can be 150000 (changed from 200 to 150000)
###Output
_____no_output_____
###Markdown
1201261 nowSELECT (COUNT (DISTINCT ?compound) AS ?count) WHERE { Initial typing for Compound ?compound wdt:P31 wd:Q11173 . OPTIONAL { ?compound wdt:P665 ?kegg_drug .} OPTIONAL { ?compound wdt:P683 ?chebi .} OPTIONAL { ?compound wdt:P715 ?drugbank_id .} OPTIONAL { ?compound wdt:P2892 ?umlscui .} OPTIONAL { ?compound wdt:P592 ?chembl_id .} OPTIONAL { ?compound wdt:P652 ?unii .} OPTIONAL { ?compound wdt:P3350 ?ikey .} OPTIONAL { ?compound wdt:P662 ?pubchem_cid .} OPTIONAL { ?compound wdt:P3345 ?rxnorm .} OPTIONAL { ?compound wdt:P6680 ?mesh_supplemental_record_ui .} OPTIONAL { ?compound wdt:P486 ?mesh_descriptor_ui .} SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGAGE],en" } }
###Code
chem_curi_map = {'unii': 'UNII',
'rxnorm': 'RxCUI',
'drugbank_id': 'DB',
'umlscui': 'UMLS',
'chebi': 'CHEBI',
'chembl_id': 'CHEMBL',
'kegg_drug': 'KEGG',
'ikey': 'IKEY',
'pubchem_cid': 'PCID',
'mesh_supplemental_record_ui': 'MESH',
'mesh_descriptor_ui': 'MESH'}
res = node_query_pipeline(q, chem_curi_map, 'compound')
nodes.append(res)
nodes[1].head()
# JSONDecodeError is due to the time it takes
## Solution: Limit above (temporary fix)
###Output
_____no_output_____
###Markdown
Phenotype
###Code
q = """SELECT DISTINCT ?phenotype ?phenotypeLabel ?hpo ?mesh ?omim ?snomed
WHERE {
# Initial typing for phenotype
{?phenotype wdt:P31 wd:Q169872.}UNION{?phenotype wdt:P3841 ?hpo}
# Xrefs associated with phenotypes
OPTIONAL {?phenotype wdt:P3841 ?hpo .}
OPTIONAL {?phenotype wdt:P486 ?mesh . }
OPTIONAL {?phenotype wdt:P492 ?omim . }
OPTIONAL {?phenotype wdt:P5806 ?snomed . }
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'mesh': 'MESH', 'omim': 'OMIM', 'hpo':'HP', 'snomed': 'SNOMED'}, 'phenotype')
nodes.append(res)
nodes[2].head()
###Output
_____no_output_____
###Markdown
GeneGenes are too numerous and will require filtering to a single taxon in order for the query to finish successfully.For now we will only extract human genes, but in the future we will do the same for infectious taxa.
###Code
q = """SELECT DISTINCT ?gene ?geneLabel ?entrez ?symbol ?hgnc ?omim ?ensembl
WHERE {{
# Initial typing for Gene
?gene wdt:P31 wd:Q7187.
?gene wdt:P703 wd:{tax}.
OPTIONAL{{?gene wdt:P351 ?entrez .}}
OPTIONAL{{?gene wdt:P353 ?symbol .}}
OPTIONAL{{?gene wdt:P354 ?hgnc .}}
OPTIONAL{{?gene wdt:P492 ?omim .}}
OPTIONAL{{?gene wdt:P594 ?ensembl .}}
SERVICE wikibase:label {{ bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }}
}}"""
human_tax_wd_id = 'Q15978631'
q = q.format(tax=human_tax_wd_id)
gene_curi_map = {'entrez': 'NCBIGene', 'symbol': 'SYM', 'hgnc':'HGNC', 'omim':'OMIM', 'ensembl':'ENSG'}
res = node_query_pipeline(q, gene_curi_map, 'gene')
nodes.append(res)
nodes[3].head()
###Output
_____no_output_____
###Markdown
Protein
###Code
q = """SELECT DISTINCT ?protein ?proteinLabel ?uniprot
WHERE {{
# Initial typing for Protein
?protein wdt:P31 wd:Q8054.
?protein wdt:P703 wd:{tax}.
OPTIONAL{{?protein wdt:P352 ?uniprot .}}
SERVICE wikibase:label {{ bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }}
}}"""
q = q.format(tax=human_tax_wd_id)
res = node_query_pipeline(q, {'uniprot':'UniProt'}, 'protein')
nodes.append(res)
nodes[4].head()
###Output
_____no_output_____
###Markdown
Pathway
###Code
q = """SELECT DISTINCT ?pathway ?pathwayLabel ?react ?wpid
WHERE {
# Initial typing for Pathway
?pathway wdt:P31 wd:Q4915012 .
OPTIONAL{?pathway wdt:P3937 ?react .}
OPTIONAL{?pathway wdt:P2410 ?wpid .}
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'react':'REACT', 'wpid':'WP'}, 'pathway')
nodes.append(res)
nodes[5].head()
###Output
_____no_output_____
###Markdown
Molecular Function
###Code
q = """SELECT DISTINCT ?molecular_function ?molecular_functionLabel ?goid
WHERE {
# Initial typing for molecular Function
?molecular_function wdt:P31 wd:Q14860489 .
?molecular_function wdt:P686 ?goid
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'goid':'GO'}, 'molecular_function')
nodes.append(res)
nodes[6].head()
###Output
_____no_output_____
###Markdown
Biological Process
###Code
q = """SELECT DISTINCT ?biological_process ?biological_processLabel ?goid
WHERE {
# Initial typing for molecular Function
?biological_process wdt:P31 wd:Q2996394 .
?biological_process wdt:P686 ?goid
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'goid':'GO'}, 'biological_process')
nodes.append(res)
nodes[7].head()
###Output
_____no_output_____
###Markdown
Cellular Component
###Code
q = """SELECT DISTINCT ?cellular_component ?cellular_componentLabel ?goid
WHERE {
# Initial typing for Cellular Component
?cellular_component wdt:P31 wd:Q5058355 .
?cellular_component wdt:P686 ?goid
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'goid':'GO'}, 'cellular_component')
nodes.append(res)
nodes[8].head()
###Output
_____no_output_____
###Markdown
Anatomy
###Code
q = """SELECT DISTINCT ?anatomy ?anatomyLabel ?uberon ?mesh
WHERE {
# Anatomical Strucutres
?anatomy wdt:P1554 ?uberon
OPTIONAL{?anatomy wdt:P486 ?mesh .}
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}"""
res = node_query_pipeline(q, {'uberon':'UBERON', 'mesh': 'MESH'}, 'anatomy')
nodes.append(res)
nodes[9].head()
###Output
_____no_output_____
###Markdown
Put them all together
###Code
nodes = pd.concat(nodes, sort=False, ignore_index=True)
len(nodes)
nodes['id'].nunique()
nodes[nodes['id'].duplicated(keep=False)].sort_values('id').head(50)
nodes[nodes['id'].duplicated(keep=False)].sort_values('id').tail(50)
nodes['label'].value_counts()
###Output
_____no_output_____
###Markdown
Save
###Code
out_dir = Path('../results/')
# Make the output directory if doesn't already exist
out_dir.mkdir(parents=True, exist_ok=True)
nodes.to_csv(out_dir.joinpath('01a_nodes.csv'), index=False)
## edit 'pipeline' folder to be results?
###Output
_____no_output_____ |
notebooks/chap12.ipynb | ###Markdown
Classification Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
plt.rcParams['axes.prop_cycle'] = color_cycle + marker_cycle
###Output
_____no_output_____
###Markdown
Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering).In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014).We'll use this data to classify penguins by species. The following cell downloads the raw data.
###Code
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
import os
if not os.path.exists('penguins_raw.csv'):
!wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv
###Output
_____no_output_____
###Markdown
Penguin DataI'll use Pandas to load the data into a `DataFrame`.
###Code
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
###Output
_____no_output_____
###Markdown
The dataset contains one row for each penguin and one column for each variable.
###Code
df.head()
###Output
_____no_output_____
###Markdown
For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names.
###Code
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
###Output
_____no_output_____
###Markdown
Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo. These species are shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license): The measurements we'll use are:* Body Mass in grams (g).* Flipper Length in millimeters (mm).* Culmen Length in millimeters. * Culmen Depth in millimeters.If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurementCulmen). The culmen is shown in the following illustration (also by Allison Horst): These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species. The following function takes the `DataFrame` and a column name.It returns a dictionary that maps from each species name to a `Cdf` of the values in the column named `colname`.
###Code
def make_cdf_map(df, colname, by='Species2'):
"""Make a CDF for each species."""
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
###Output
_____no_output_____
###Markdown
The following function plots a `Cdf` of the values in the given column for each species:
###Code
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
"""Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(marker='')
decorate(xlabel=colname,
ylabel='CDF')
###Output
_____no_output_____
###Markdown
Here's what the distributions look like for culmen length.
###Code
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.Here are the distributions for flipper length.
###Code
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section. Here are the distributions for culmen depth.
###Code
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
And here are the distributions of body mass.
###Code
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length. Normal modelsLet's use these features to classify penguins. We'll proceed in the usual Bayesian way:1. Define a prior distribution with the three possible species and a prior probability for each,2. Compute the likelihood of the data for each hypothetical species, and then3. Compute the posterior probability of each hypothesis.To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species.The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object.`norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
###Code
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
"""Make a map from species to norm object."""
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
###Output
_____no_output_____
###Markdown
For example, here's the dictionary of `norm` objects for flipper length:
###Code
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
###Output
_____no_output_____
###Markdown
Now suppose we measure a penguin and find that its flipper is 193 mm. What is the probability of that measurement under each hypothesis?The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
###Code
data = 193
flipper_map['Adelie'].pdf(data)
###Output
_____no_output_____
###Markdown
The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.Here's how we compute the likelihood of the data in each distribution.
###Code
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
###Output
_____no_output_____
###Markdown
Now we're ready to do the update. The UpdateAs usual I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
###Code
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
###Output
_____no_output_____
###Markdown
Now we can do the update in the usual way.
###Code
posterior = prior * likelihood
posterior.normalize()
posterior
###Output
_____no_output_____
###Markdown
A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement). The following function encapsulates the steps we just ran.It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
The return value is the posterior distribution.Here's the previous example again, using `update_penguin`:
###Code
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
###Output
_____no_output_____
###Markdown
As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.But culmen length *can* make this distinction, so let's use it to do a second round of classification.First we estimate distributions of culmen length for each species like this:
###Code
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
###Output
_____no_output_____
###Markdown
Now suppose we see a penguin with culmen length 48 mm.We can use this data to update the prior.
###Code
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
###Output
_____no_output_____
###Markdown
A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo.Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence.We can do better using multiple features. Naive Bayesian classificationTo make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, a sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions.
###Code
def update_naive(prior, data_seq, norm_maps):
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
###Output
_____no_output_____
###Markdown
It performs a series of updates, using one variable at a time, and returns the posterior `Pmf`.To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
###Code
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
###Output
_____no_output_____
###Markdown
Now suppose we find a penguin with flipper length 193 mm and culmen length 48.Here's the update:
###Code
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
###Output
_____no_output_____
###Markdown
It is almost certain to be a Chinstrap.
###Code
posterior.max_prob()
###Output
_____no_output_____
###Markdown
We can loop through the dataset and classify each penguin with these two features.
###Code
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
###Output
_____no_output_____
###Markdown
This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin.So let's see how many we got right.
###Code
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
###Output
_____no_output_____
###Markdown
There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases. Of those, 324 are classified correctly, which is almost 95%.
###Code
same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.
###Code
def accuracy(df):
"""Compute the accuracy of classification."""
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features. Joint distributions: I'll start by making a scatter plot of the data.
###Code
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
"""Make a scatter plot."""
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
###Output
_____no_output_____
###Markdown
Here's a scatter plot of culmen length and flipper length for the three species.
###Code
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
###Output
_____no_output_____
###Markdown
Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.The following function makes a discrete `Pmf` that approximates a normal distribution.
###Code
def make_pmf_norm(dist, sigmas=3, n=101):
"""Make a Pmf approximation to a normal distribution."""
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
###Output
_____no_output_____
###Markdown
We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species.
###Code
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
###Output
_____no_output_____
###Markdown
The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
###Code
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
###Output
_____no_output_____
###Markdown
The contours of a joint normal distribution form ellipses. In this example, because the features are uncorrelated, the ellipses are aligned with the axes. But they are not well aligned with the data. We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution. Multivariate normal distribution: As we have seen, a univariate normal distribution is characterized by its mean and standard deviation. A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains **variances**, which quantify the spread of the features, and the **covariances**, which quantify the relationships among them. We can use the data to estimate the means and covariance matrix for the population of penguins. First I'll select the columns we want.
###Code
features = df[[var1, var2]]
###Output
_____no_output_____
###Markdown
And compute the means.
###Code
mean = features.mean()
mean
###Output
_____no_output_____
###Markdown
We can also compute the covariance matrix:
###Code
cov = features.cov()
cov
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances. By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now. Instead, we'll pass the covariance matrix to `multivariate_normal`, which is a SciPy function that creates an object that represents a multivariate normal distribution. As arguments it takes a sequence of means and a covariance matrix:
###Code
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
###Output
_____no_output_____
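###Markdown
As an optional aside (an addition, not part of the original text), the standard deviations and correlation coefficient mentioned above can be recovered from the covariance matrix. This is a minimal sketch; it assumes NumPy is available as `np`, as imported earlier.
###Code
std = np.sqrt(np.diag(cov))        # standard deviations from the variances on the diagonal
corr = cov / np.outer(std, std)    # correlation matrix from the variances and covariances
std, corr
###Output
_____no_output_____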
###Markdown
The following function makes a `multivariate_normal` object for each species.
###Code
def make_multinorm_map(df, colnames):
"""Make a map from each species to a multivariate normal."""
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
###Output
_____no_output_____
###Markdown
Here's how we make this map for the first two features, flipper length and culmen length.
###Code
multinorm_map = make_multinorm_map(df, [var1, var2])
###Output
_____no_output_____
###Markdown
Visualizing a Multivariate Normal Distribution: This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization. I'll start by making a contour map for the distribution of features among Adélie penguins. Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
###Code
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
###Output
_____no_output_____
###Markdown
I'll make a discrete `Pmf` approximation for each of the univariate distributions.
###Code
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
###Output
_____no_output_____
###Markdown
And use them to make a mesh grid that contains all pairs of values.
###Code
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
###Output
_____no_output_____
###Markdown
The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis.In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
###Code
pos = np.dstack((X, Y))
pos.shape
###Output
_____no_output_____
###Markdown
The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values.
###Code
densities = multinorm.pdf(pos)
densities.shape
###Output
_____no_output_____
###Markdown
The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features.
###Code
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
###Output
_____no_output_____
###Markdown
The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes. The following function encapsulates the steps we just did.
###Code
def make_joint(norm1, norm2, multinorm):
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
###Output
_____no_output_____
###Markdown
The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
###Code
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
###Output
_____no_output_____
###Markdown
Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications. A Less Naive Classifier: In a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects.We can use it to classify a penguin with flipper length 193 and culmen length 48:
###Code
data = 193, 48
update_penguin(prior, data, multinorm_map)
###Output
_____no_output_____
###Markdown
A penguin with those measurements is almost certainly a Chinstrap. Now let's see if this classifier does any better than the naive Bayesian classifier. I'll apply it to each penguin in the dataset:
###Code
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
###Output
_____no_output_____
###Markdown
And compute the accuracy:
###Code
accuracy(df)
###Output
_____no_output_____
###Markdown
It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier. Summary: In this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent. To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies. In this example, the non-naive classifier is only marginally better. In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement. But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes. Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak. But there *are* scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals. In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart. And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution. One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism. As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic. Exercises **Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features. Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass. Is it more accurate than the model with two features? (One possible sketch appears after the solution cells below.)
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
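###Markdown
If you want a starting point, here is one possible structure for the four-feature classifier (a sketch, not the book's solution; it simply reuses `make_norm_map`, `update_naive`, and `accuracy` from above):
###Code
# Sketch: naive Bayesian classification using all four measurements
colnames4 = ['Culmen Length (mm)', 'Culmen Depth (mm)',
             'Flipper Length (mm)', 'Body Mass (g)']
norm_maps4 = [make_norm_map(df, colname) for colname in colnames4]

df['Classification'] = np.nan
for i, row in df.iterrows():
    posterior = update_naive(prior, row[colnames4], norm_maps4)
    df.loc[i, 'Classification'] = posterior.max_prob()

accuracy(df)
###Output
_____no_output_____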
###Markdown
**Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve? Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data.
###Code
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
###Output
_____no_output_____
###Markdown
OK, you can finish it off from here.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Classification. Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
line_cycle = cycler(linestyle=['-', '--', ':'])
plt.rcParams['axes.prop_cycle'] = (color_cycle +
marker_cycle +
line_cycle)
###Output
_____no_output_____
###Markdown
Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering).In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014).We'll use this data to classify penguins by species. The following cell downloads the raw data.
###Code
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
download('https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv')
###Output
_____no_output_____
###Markdown
Penguin Data: I'll use Pandas to load the data into a `DataFrame`.
###Code
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
###Output
_____no_output_____
###Markdown
The dataset contains one row for each penguin and one column for each variable.
###Code
df.head()
###Output
_____no_output_____
###Markdown
For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names.
###Code
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
###Output
_____no_output_____
###Markdown
Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo. These species are shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license): The measurements we'll use are:
* Body Mass in grams (g).
* Flipper Length in millimeters (mm).
* Culmen Length in millimeters.
* Culmen Depth in millimeters.
If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurement#Culmen). The culmen is shown in the following illustration (also by Allison Horst): These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species. The following function takes the `DataFrame` and a column name. It returns a dictionary that maps from each species name to a `Cdf` of the values in the column named `colname`.
###Code
def make_cdf_map(df, colname, by='Species2'):
"""Make a CDF for each species."""
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
###Output
_____no_output_____
###Markdown
The following function plots a `Cdf` of the values in the given column for each species:
###Code
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
"""Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(label=species, marker='')
decorate(xlabel=colname,
ylabel='CDF')
###Output
_____no_output_____
###Markdown
Here's what the distributions look like for culmen length.
###Code
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.Here are the distributions for flipper length.
###Code
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section. Here are the distributions for culmen depth.
###Code
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
And here are the distributions of body mass.
###Code
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length. Normal Models: Let's use these features to classify penguins. We'll proceed in the usual Bayesian way:
1. Define a prior distribution with the three possible species and a prior probability for each,
2. Compute the likelihood of the data for each hypothetical species, and then
3. Compute the posterior probability of each hypothesis.
To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species. The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object. `norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
###Code
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
"""Make a map from species to norm object."""
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
###Output
_____no_output_____
###Markdown
For example, here's the dictionary of `norm` objects for flipper length:
###Code
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
###Output
_____no_output_____
###Markdown
Now suppose we measure a penguin and find that its flipper is 193 mm. What is the probability of that measurement under each hypothesis? The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
###Code
data = 193
flipper_map['Adelie'].pdf(data)
###Output
_____no_output_____
###Markdown
The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.Here's how we compute the likelihood of the data in each distribution.
###Code
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
###Output
_____no_output_____
###Markdown
Now we're ready to do the update. The Update: As usual, I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
###Code
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
###Output
_____no_output_____
###Markdown
Now we can do the update in the usual way.
###Code
posterior = prior * likelihood
posterior.normalize()
posterior
###Output
_____no_output_____
###Markdown
A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement). The following function encapsulates the steps we just ran.It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
The return value is the posterior distribution.Here's the previous example again, using `update_penguin`:
###Code
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
###Output
_____no_output_____
###Markdown
As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.But culmen length *can* make this distinction, so let's use it to do a second round of classification.First we estimate distributions of culmen length for each species like this:
###Code
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
###Output
_____no_output_____
###Markdown
Now suppose we see a penguin with culmen length 48 mm.We can use this data to update the prior.
###Code
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
###Output
_____no_output_____
###Markdown
A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo. Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence. We can do better using multiple features. Naive Bayesian Classification: To make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, a sequence of measurements, and a corresponding sequence of dictionaries containing estimated distributions.
###Code
def update_naive(prior, data_seq, norm_maps):
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
###Output
_____no_output_____
###Markdown
It performs a series of updates, using one variable at a time, and returns the posterior `Pmf`.To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
###Code
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
###Output
_____no_output_____
###Markdown
Now suppose we find a penguin with flipper length 193 mm and culmen length 48.Here's the update:
###Code
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
###Output
_____no_output_____
###Markdown
It is almost certain to be a Chinstrap.
###Code
posterior.max_prob()
###Output
_____no_output_____
###Markdown
We can loop through the dataset and classify each penguin with these two features.
###Code
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
###Output
_____no_output_____
###Markdown
This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin.So let's see how many we got right.
###Code
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
###Output
_____no_output_____
###Markdown
There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases. Of those, 324 are classified correctly, which is almost 95%.
###Code
same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.
###Code
def accuracy(df):
"""Compute the accuracy of classification."""
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features. Joint Distributions: I'll start by making a scatter plot of the data.
###Code
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
"""Make a scatter plot."""
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
###Output
_____no_output_____
###Markdown
Here's a scatter plot of culmen length and flipper length for the three species.
###Code
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
###Output
_____no_output_____
###Markdown
Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.The following function makes a discrete `Pmf` that approximates a normal distribution.
###Code
def make_pmf_norm(dist, sigmas=3, n=101):
"""Make a Pmf approximation to a normal distribution."""
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
###Output
_____no_output_____
###Markdown
We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species.
###Code
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
###Output
_____no_output_____
###Markdown
The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
###Code
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
###Output
_____no_output_____
###Markdown
The contours of a joint normal distribution form ellipses. In this example, because the features are uncorrelated, the ellipses are aligned with the axes. But they are not well aligned with the data. We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution. Multivariate Normal Distribution: As we have seen, a univariate normal distribution is characterized by its mean and standard deviation. A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains **variances**, which quantify the spread of the features, and the **covariances**, which quantify the relationships among them. We can use the data to estimate the means and covariance matrix for the population of penguins. First I'll select the columns we want.
###Code
features = df[[var1, var2]]
###Output
_____no_output_____
###Markdown
And compute the means.
###Code
mean = features.mean()
mean
###Output
_____no_output_____
###Markdown
We can also compute the covariance matrix:
###Code
cov = features.cov()
cov
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances. By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now. Instead, we'll pass the covariance matrix to `multivariate_normal`, which is a SciPy function that creates an object that represents a multivariate normal distribution. As arguments it takes a sequence of means and a covariance matrix:
###Code
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
###Output
_____no_output_____
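###Markdown
As a quick cross-check (an addition, not part of the original text), Pandas can report the standard deviations and the correlation matrix of the same features directly; a minimal sketch:
###Code
features.std()     # standard deviations of the two features
features.corr()    # correlation matrix corresponding to the covariance matrix above
###Output
_____no_output_____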
###Markdown
The following function makes a `multivariate_normal` object for each species.
###Code
def make_multinorm_map(df, colnames):
"""Make a map from each species to a multivariate normal."""
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
###Output
_____no_output_____
###Markdown
Here's how we make this map for the first two features, flipper length and culmen length.
###Code
multinorm_map = make_multinorm_map(df, [var1, var2])
###Output
_____no_output_____
###Markdown
Visualizing a Multivariate Normal Distribution: This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization. I'll start by making a contour map for the distribution of features among Adélie penguins. Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
###Code
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
###Output
_____no_output_____
###Markdown
I'll make a discrete `Pmf` approximation for each of the univariate distributions.
###Code
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
###Output
_____no_output_____
###Markdown
And use them to make a mesh grid that contains all pairs of values.
###Code
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
###Output
_____no_output_____
###Markdown
The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis.In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
###Code
pos = np.dstack((X, Y))
pos.shape
###Output
_____no_output_____
###Markdown
The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values.
###Code
densities = multinorm.pdf(pos)
densities.shape
###Output
_____no_output_____
###Markdown
The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features.
###Code
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
###Output
_____no_output_____
###Markdown
The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes. The following function encapsulates the steps we just did.
###Code
def make_joint(norm1, norm2, multinorm):
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
###Output
_____no_output_____
###Markdown
The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
###Code
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
###Output
_____no_output_____
###Markdown
Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications. A Less Naive Classifier: In a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects.We can use it to classify a penguin with flipper length 193 and culmen length 48:
###Code
data = 193, 48
update_penguin(prior, data, multinorm_map)
###Output
_____no_output_____
###Markdown
A penguin with those measurements is almost certainly a Chinstrap. Now let's see if this classifier does any better than the naive Bayesian classifier. I'll apply it to each penguin in the dataset:
###Code
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
###Output
_____no_output_____
###Markdown
And compute the accuracy:
###Code
accuracy(df)
###Output
_____no_output_____
###Markdown
It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier. Summary: In this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent. To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies. In this example, the non-naive classifier is only marginally better. In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement. But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes. Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak. But there *are* scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals. In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart. And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution. One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism. As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic. Exercises **Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features. Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass. Is it more accurate than the model with two features?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve? Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data.
###Code
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
###Output
_____no_output_____
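###Markdown
As one possible way to get started (a sketch, not the book's solution), the naive classifier defined above can be reused with `by='Sex'`. This assumes the `Sex` column uses the labels `MALE` and `FEMALE`, and the particular features chosen here are an arbitrary starting point:
###Code
# Sketch: classify Gentoo penguins by sex with the naive Bayesian classifier
colnames4 = ['Culmen Length (mm)', 'Culmen Depth (mm)',
             'Flipper Length (mm)', 'Body Mass (g)']
sex_maps = [make_norm_map(subset, colname, by='Sex') for colname in colnames4]
prior_sex = Pmf(1/2, ['FEMALE', 'MALE'])   # assumed labels in the raw data

subset['Classification'] = np.nan
for i, row in subset.iterrows():
    posterior = update_naive(prior_sex, row[colnames4], sex_maps)
    subset.loc[i, 'Classification'] = posterior.max_prob()

# fraction of classifiable penguins whose predicted sex matches the recorded sex
ok = subset['Sex'].notna() & subset['Classification'].notna()
(subset.loc[ok, 'Sex'] == subset.loc[ok, 'Classification']).mean()
###Output
_____no_output_____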
###Markdown
OK, you can finish it off from here.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code: Here's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
0.333 0.25 0.46716293183605073
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book:
1. The fraction of students who are sick at the peak of the outbreak.
2. The day the outbreak peaks.
3. The fraction of students who are sick at the end of the semester.
Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: `I.max()`. And the index of the largest value like this: `I.idxmax()`. You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
def students_sick_outbreak_peak(results):
""" Determine max number of students sick
results: DataFrame with columns S, I, R
    returns: float, the peak value of I (a fraction of the population)
"""
return results.I.max()
students_sick_outbreak_peak(results)
def day_outbreak_peak(results):
""" Determine the day that the number of sick students peaked
results: DataFrame with columns S, I, R
    returns: the day (time step) at which I is largest
"""
return results.I.idxmax()
day_outbreak_peak(results)
def fraction_students_sick_at_end(results):
""" Determine the fraction of students sick at end of the semester
results: DataFrame with columns S, I, R
    returns: float, the value of I at the end of the simulation
"""
return get_last_value(results.I)
fraction_students_sick_at_end(results)
###Output
_____no_output_____
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
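###Markdown
To quantify the effect of 10% immunization as a single number, we can subtract the two totals computed above:
###Code
calc_total_infected(results) - calc_total_infected(results2)
###Output
_____no_output_____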
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
Saving figure to file figs/chap12-fig02.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
    """Computes the generalized logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
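###Markdown
To get a feel for those parameter choices (an added check, not in the original text), we can evaluate `compute_factor` at a few spending levels; the reduction should be near zero for small budgets, exactly half of `K` at the transition point \$500, and close to `K`, the 20% cap, for large budgets.
###Code
compute_factor(0), compute_factor(500), compute_factor(1200)
###Output
_____no_output_____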
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
###Code
#### spending = linspace(0, 1200, 21)
def logistic2(x, A=0, B=1, C=1, M=1000, K=1, Q=1, nu=1):
    """Computes the generalized logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
def compute_factor2(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic2(spending, M=800, K=240, B=0.035)
percent_reduction2 = compute_factor2(spending) * 100
plot(spending, percent_reduction2)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
Hand washing: Now we can model the effect of a hand-washing campaign by modifying `beta`.
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26683150821044227 0.17458071882622528
2 0.9666666666666667 0.26711285672828566 0.16290983834857686
3 0.9555555555555556 0.2678657473308061 0.15350834947768177
4 0.9444444444444445 0.26982839154517113 0.1485650923152827
5 0.9333333333333333 0.2746135281348078 0.15294595061102179
6 0.9222222222222223 0.28459609475799963 0.1749644150235239
7 0.9111111111111112 0.3 0.21734316168444845
8 0.9 0.3154039052420003 0.2590710444883414
9 0.888888888888889 0.32538647186519215 0.27840288410342784
10 0.8777777777777778 0.33017160845482885 0.2779145346228302
11 0.8666666666666667 0.3321342526691939 0.2673574966927026
12 0.8555555555555556 0.3328871432717143 0.25279694563572175
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? If doses only cost 50 dollars, buying up to 10 doses continues to decrease the total fraction of infected students. To minimize the fraction of students infected, 10 doses should be purchased and the remaining 700 dollars spent on the hand-washing campaign. The total fraction infected would be around 0.11, whereas the minimum was about 0.15 when vaccines cost $100 per dose.
###Code
price_per_dose = 50
infected_sweep2 = sweep_doses(dose_array)
plot(infected_sweep2)
decorate(xlabel='Doses of vaccine - $50 per Vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
###Output
_____no_output_____
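###Markdown
As an added check (not part of the original solution), we can read the optimal number of doses and the lowest fraction infected directly off the sweep, since `infected_sweep2` behaves like a pandas series.
###Code
# Added check: the index of the minimum is the optimal number of doses,
# and the minimum itself is the smallest total fraction infected.
optimal_doses = infected_sweep2.idxmin()
lowest_fraction = infected_sweep2.min()
optimal_doses, lowest_fraction
###Output
_____no_output_____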
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code Here's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
0.333 0.25 0.46716293183605073
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: `I.max()` And the index of the largest value like this: `I.idxmax()` You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
results = run_simulation(system, update_func)
def get_metrics(results):
frac_sick_peak = results['I'].max()
day_peak = results['I'].idxmax()
frac_sick_end = results['I'].tail(1).values[0]
return frac_sick_peak, frac_sick_end, day_peak
frac_sick_peak, frac_sick_end, day_peak = get_metrics(results)
print('The fraction of students who are sick at the peak of the outbreak', frac_sick_peak)
print('The fraction of students who are sick at the end of the semester', frac_sick_end)
print('The day the outbreak peaks', day_peak)
###Output
The fraction of students who are sick at the peak of the outbreak 0.043536202687592354
The fraction of students who are sick at the end of the semester 0.0006741943156034474
The day the outbreak peaks 30
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
Saving figure to file figs/chap12-fig02.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
###Code
def compute_factor(spending, M=500, K=0.2, B=0.01):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=M, K=K, B=B)
plt.figure(figsize=(20,10))
spending = linspace(0, 1200, 21)
for M in np.arange(500,601,50):
for K in np.arange(.1,.4,.1):
for B in np.arange(.01,.03,.01):
percent_reduction = compute_factor(spending, M=M, K=K, B=B) * 100
plot(spending, percent_reduction, label=('M:',M,'K:',K,'B:',B))
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=True)
###Output
_____no_output_____
###Markdown
Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses, from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26683150821044227 0.17458071882622528
2 0.9666666666666667 0.26711285672828566 0.16290983834857686
3 0.9555555555555556 0.2678657473308061 0.15350834947768177
4 0.9444444444444445 0.26982839154517113 0.1485650923152827
5 0.9333333333333333 0.2746135281348078 0.15294595061102179
6 0.9222222222222223 0.28459609475799963 0.1749644150235239
7 0.9111111111111112 0.3 0.21734316168444845
8 0.9 0.3154039052420003 0.2590710444883414
9 0.888888888888889 0.32538647186519215 0.27840288410342784
10 0.8777777777777778 0.33017160845482885 0.2779145346228302
11 0.8666666666666667 0.3321342526691939 0.2673574966927026
12 0.8555555555555556 0.3328871432717143 0.25279694563572175
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model? Model with a drop to $50 per dose
###Code
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
###Output
_____no_output_____
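###Markdown
To answer the exercise question directly, we can also read the optimal allocation off the sweep (an added check; `infected_sweep` here was computed with the $50 dose price).
###Code
# Added check: optimal number of $50 doses and the resulting fraction infected.
infected_sweep.idxmin(), infected_sweep.min()
###Output
_____no_output_____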
###Markdown
Incorporating the effect of quarantine
###Code
def add_quarantine(system, factor):
system.beta *= (1 - factor)
def sweep_doses(dose_array, factor):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
add_quarantine(system, factor)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
plt.figure(figsize=(20,10))
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
for factor in np.arange(.01,.05, .01):
infected_sweep = sweep_doses(dose_array, factor)
plt.plot(infected_sweep,label=('Quarantine Factor', factor))
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=True)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code Here's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
0.333 0.25 0.46716293183605073
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: `I.max()` And the index of the largest value like this: `I.idxmax()` You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
def calc_peak_percent(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return max(results.I) / get_first_value(results.S)
calc_peak_percent(results)
def calc_peak_day(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: day of peak outbreak
"""
return results.I.idxmax()
calc_peak_day(results)
def calc_sick_end(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_last_value(results.I) / (get_first_value(results.S) + get_first_value(results.I))
calc_sick_end(results)
###Output
_____no_output_____
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
Saving figure to file figs/chap12-fig02.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses, from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26683150821044227 0.17458071882622528
2 0.9666666666666667 0.26711285672828566 0.16290983834857686
3 0.9555555555555556 0.2678657473308061 0.15350834947768177
4 0.9444444444444445 0.26982839154517113 0.1485650923152827
5 0.9333333333333333 0.2746135281348078 0.15294595061102179
6 0.9222222222222223 0.28459609475799963 0.1749644150235239
7 0.9111111111111112 0.3 0.21734316168444845
8 0.9 0.3154039052420003 0.2590710444883414
9 0.888888888888889 0.32538647186519215 0.27840288410342784
10 0.8777777777777778 0.33017160845482885 0.2779145346228302
11 0.8666666666666667 0.3321342526691939 0.2673574966927026
12 0.8555555555555556 0.3328871432717143 0.25279694563572175
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
def add_quarantine(system, fraction):
"""Model the effect of quarantine by adjusting gamma.
system: System object
fraction: fraction of students quarantined
"""
# `low` represents the number of days a student
# is infectious if quarantined.
# `high` is the number of days they are infectious
# if not quarantined
low = 1
high = 4
tr = high - fraction * (high-low)
system.gamma = 1 / tr
###Output
_____no_output_____
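###Markdown
Here is a quick illustrative sweep (added; not in the original notebook) that applies `add_quarantine` with a range of quarantine fractions and records the total fraction infected, using the functions defined above.
###Code
# Added sketch: sweep the fraction of infected students who are quarantined.
quarantine_sweep = SweepSeries()
for fraction in linspace(0, 1, 11):
    system = make_system(beta, gamma)
    add_quarantine(system, fraction)
    results = run_simulation(system, update_func)
    quarantine_sweep[fraction] = calc_total_infected(results)
quarantine_sweep
###Output
_____no_output_____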
###Markdown
Classification Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
line_cycle = cycler(linestyle=['-', '--', ':'])
plt.rcParams['axes.prop_cycle'] = (color_cycle +
marker_cycle +
line_cycle)
###Output
_____no_output_____
###Markdown
Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering). In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014). We'll use this data to classify penguins by species. The following cell downloads the raw data.
###Code
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
import os
if not os.path.exists('penguins_raw.csv'):
!wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv
###Output
_____no_output_____
###Markdown
Penguin Data I'll use Pandas to load the data into a `DataFrame`.
###Code
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
###Output
_____no_output_____
###Markdown
The dataset contains one row for each penguin and one column for each variable.
###Code
df.head()
###Output
_____no_output_____
###Markdown
For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names.
###Code
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
###Output
_____no_output_____
###Markdown
Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo. These species are shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license): The measurements we'll use are: * Body Mass in grams (g). * Flipper Length in millimeters (mm). * Culmen Length in millimeters. * Culmen Depth in millimeters. If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurement#Culmen). The culmen is shown in the following illustration (also by Allison Horst): These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species. The following function takes the `DataFrame` and a column name. It returns a dictionary that maps from each species name to a `Cdf` of the values in the column named `colname`.
###Code
def make_cdf_map(df, colname, by='Species2'):
"""Make a CDF for each species."""
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
###Output
_____no_output_____
###Markdown
The following function plots a `Cdf` of the values in the given column for each species:
###Code
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
"""Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(label=species, marker='')
decorate(xlabel=colname,
ylabel='CDF')
###Output
_____no_output_____
###Markdown
Here's what the distributions look like for culmen length.
###Code
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap. Here are the distributions for flipper length.
###Code
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy. All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section. Here are the distributions for culmen depth.
###Code
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
And here are the distributions of body mass.
###Code
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length. Normal Models Let's use these features to classify penguins. We'll proceed in the usual Bayesian way: 1. Define a prior distribution with the three possible species and a prior probability for each, 2. Compute the likelihood of the data for each hypothetical species, and then 3. Compute the posterior probability of each hypothesis. To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species. The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object. `norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
###Code
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
"""Make a map from species to norm object."""
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
###Output
_____no_output_____
###Markdown
For example, here's the dictionary of `norm` objects for flipper length:
###Code
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
###Output
_____no_output_____
###Markdown
Now suppose we measure a penguin and find that its flipper is 193 mm. What is the probability of that measurement under each hypothesis? The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
###Code
data = 193
flipper_map['Adelie'].pdf(data)
###Output
_____no_output_____
###Markdown
The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior. Here's how we compute the likelihood of the data in each distribution.
###Code
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
###Output
_____no_output_____
###Markdown
Now we're ready to do the update. The Update As usual, I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
###Code
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
###Output
_____no_output_____
###Markdown
Now we can do the update in the usual way.
###Code
posterior = prior * likelihood
posterior.normalize()
posterior
###Output
_____no_output_____
###Markdown
A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement). The following function encapsulates the steps we just ran. It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
The return value is the posterior distribution. Here's the previous example again, using `update_penguin`:
###Code
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
###Output
_____no_output_____
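###Markdown
As an added illustration (not in the original text), the result depends on the prior: if we use the observed species frequencies in the dataset as the prior instead of a uniform prior, the posterior changes accordingly.
###Code
# Added example: an empirical prior based on species counts in the dataset.
prior_empirical = Pmf.from_seq(df['Species2'])
update_penguin(prior_empirical, 193, flipper_map)
###Output
_____no_output_____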
###Markdown
As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins. But culmen length *can* make this distinction, so let's use it to do a second round of classification. First we estimate distributions of culmen length for each species like this:
###Code
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
###Output
_____no_output_____
###Markdown
Now suppose we see a penguin with culmen length 48 mm. We can use this data to update the prior.
###Code
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
###Output
_____no_output_____
###Markdown
A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo. Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence. We can do better using multiple features. Naive Bayesian Classification To make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, a sequence of measurements, and a corresponding sequence of dictionaries containing estimated distributions.
###Code
def update_naive(prior, data_seq, norm_maps):
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
###Output
_____no_output_____
###Markdown
It performs a series of updates, using one variable at a time, and returns the posterior `Pmf`. To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
###Code
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
###Output
_____no_output_____
###Markdown
Now suppose we find a penguin with flipper length 193 mm and culmen length 48 mm. Here's the update:
###Code
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
###Output
_____no_output_____
###Markdown
It is almost certain to be a Chinstrap.
###Code
posterior.max_prob()
###Output
_____no_output_____
###Markdown
We can loop through the dataset and classify each penguin with these two features.
###Code
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
###Output
_____no_output_____
###Markdown
This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin. So let's see how many we got right.
###Code
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
###Output
_____no_output_____
###Markdown
There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases. Of those, 324 are classified correctly, which is almost 95%.
###Code
same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.
###Code
def accuracy(df):
"""Compute the accuracy of classification."""
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
###Output
_____no_output_____
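###Markdown
Applying it to the classifications we just computed confirms the accuracy reported above (added here as a quick check).
###Code
# Added check: accuracy of the naive Bayesian classifier with two features.
accuracy(df)
###Output
_____no_output_____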
###Markdown
The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features. Joint Distributions I'll start by making a scatter plot of the data.
###Code
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
"""Make a scatter plot."""
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
###Output
_____no_output_____
###Markdown
Here's a scatter plot of culmen length and flipper length for the three species.
###Code
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
###Output
_____no_output_____
###Markdown
Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length. If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence. The following function makes a discrete `Pmf` that approximates a normal distribution.
###Code
def make_pmf_norm(dist, sigmas=3, n=101):
"""Make a Pmf approximation to a normal distribution."""
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
###Output
_____no_output_____
###Markdown
We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species.
###Code
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
###Output
_____no_output_____
###Markdown
The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
###Code
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
###Output
_____no_output_____
###Markdown
The contours of a joint normal distribution form ellipses. In this example, because the features are uncorrelated, the ellipses are aligned with the axes. But they are not well aligned with the data. We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution. Multivariate Normal Distribution As we have seen, a univariate normal distribution is characterized by its mean and standard deviation. A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains the **variances**, quantifying the spread of each feature, and the **covariances**, quantifying the relationships among them. We can use the data to estimate the means and covariance matrix for the population of penguins. First I'll select the columns we want.
###Code
features = df[[var1, var2]]
###Output
_____no_output_____
###Markdown
And compute the means.
###Code
mean = features.mean()
mean
###Output
_____no_output_____
###Markdown
We can also compute the covariance matrix:
###Code
cov = features.cov()
cov
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances. By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now. Instead, we'll pass the covariance matrix to `multivariate_normal`, which is a SciPy function that creates an object that represents a multivariate normal distribution. As arguments it takes a sequence of means and a covariance matrix:
###Code
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
###Output
_____no_output_____
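###Markdown
As a brief aside (added, and not needed for the classifier), the correlation coefficient mentioned above can be recovered from the covariance and the standard deviations, or computed directly with pandas.
###Code
# Added aside: correlation between flipper length and culmen length,
# computed by hand from the covariance and with DataFrame.corr().
std1 = features[var1].std()
std2 = features[var2].std()
corr_by_hand = cov.loc[var1, var2] / (std1 * std2)
corr_by_hand, features.corr().loc[var1, var2]
###Output
_____no_output_____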
###Markdown
The following function makes a `multivariate_normal` object for each species.
###Code
def make_multinorm_map(df, colnames):
"""Make a map from each species to a multivariate normal."""
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
###Output
_____no_output_____
###Markdown
Here's how we make this map for the first two features, flipper length and culmen length.
###Code
multinorm_map = make_multinorm_map(df, [var1, var2])
###Output
_____no_output_____
###Markdown
Visualizing a Multivariate Normal Distribution This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization. I'll start by making a contour map for the distribution of features among Adélie penguins. Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
###Code
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
###Output
_____no_output_____
###Markdown
I'll make a discrete `Pmf` approximation for each of the univariate distributions.
###Code
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
###Output
_____no_output_____
###Markdown
And use them to make a mesh grid that contains all pairs of values.
###Code
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
###Output
_____no_output_____
###Markdown
The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis. In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
###Code
pos = np.dstack((X, Y))
pos.shape
###Output
_____no_output_____
###Markdown
The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values.
###Code
densities = multinorm.pdf(pos)
densities.shape
###Output
_____no_output_____
###Markdown
The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features.
###Code
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
###Output
_____no_output_____
###Markdown
The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes. The following function encapsulates the steps we just did.
###Code
def make_joint(norm1, norm2, multinorm):
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
###Output
_____no_output_____
###Markdown
The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
###Code
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
###Output
_____no_output_____
###Markdown
Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications. A Less Naive Classifier In a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects.We can use it to classify a penguin with flipper length 193 and culmen length 48:
###Code
data = 193, 48
update_penguin(prior, data, multinorm_map)
###Output
_____no_output_____
###Markdown
A penguin with those measurements is almost certainly a Chinstrap. Now let's see if this classifier does any better than the naive Bayesian classifier. I'll apply it to each penguin in the dataset:
###Code
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
###Output
_____no_output_____
###Markdown
And compute the accuracy:
###Code
accuracy(df)
###Output
_____no_output_____
###Markdown
It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier. SummaryIn this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent.To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies.In this example, the non-naive classifier is only marginally better.In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement.But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes.Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak.But there *are* scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals.In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart.And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution. One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism.As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic. Exercises **Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features.Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass.Is it more accurate than the model with two features?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve? Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data.
###Code
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
###Output
_____no_output_____
###Markdown
OK, you can finish it off from here.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Classification. Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
line_cycle = cycler(linestyle=['-', '--', ':'])
plt.rcParams['axes.prop_cycle'] = (color_cycle +
marker_cycle +
line_cycle)
###Output
_____no_output_____
###Markdown
Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering).In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014).We'll use this data to classify penguins by species. The following cell downloads the raw data.
###Code
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
import os
if not os.path.exists('penguins_raw.csv'):
!wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv
###Output
_____no_output_____
###Markdown
Penguin DataI'll use Pandas to load the data into a `DataFrame`.
###Code
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
###Output
_____no_output_____
###Markdown
The dataset contains one row for each penguin and one column for each variable.
###Code
df.head()
###Output
_____no_output_____
###Markdown
For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names.
###Code
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
###Output
_____no_output_____
###Markdown
Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo. These species are shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license): The measurements we'll use are:* Body Mass in grams (g).* Flipper Length in millimeters (mm).* Culmen Length in millimeters. * Culmen Depth in millimeters.If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurementCulmen). The culmen is shown in the following illustration (also by Allison Horst): These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species. The following function takes the `DataFrame` and a column name.It returns a dictionary that maps from each species name to a `Cdf` of the values in the column named `colname`.
###Code
def make_cdf_map(df, colname, by='Species2'):
"""Make a CDF for each species."""
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
###Output
_____no_output_____
###Markdown
The following function plots a `Cdf` of the values in the given column for each species:
###Code
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
"""Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(label=species, marker='')
decorate(xlabel=colname,
ylabel='CDF')
###Output
_____no_output_____
###Markdown
Here's what the distributions look like for culmen length.
###Code
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.Here are the distributions for flipper length.
###Code
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section. Here are the distributions for culmen depth.
###Code
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
And here are the distributions of body mass.
###Code
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
###Output
_____no_output_____
###Markdown
Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length. Normal ModelsLet's use these features to classify penguins. We'll proceed in the usual Bayesian way:1. Define a prior distribution with the three possible species and a prior probability for each,2. Compute the likelihood of the data for each hypothetical species, and then3. Compute the posterior probability of each hypothesis.To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species.The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object.`norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
###Code
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
"""Make a map from species to norm object."""
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
###Output
_____no_output_____
###Markdown
For example, here's the dictionary of `norm` objects for flipper length:
###Code
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
###Output
_____no_output_____
###Markdown
Now suppose we measure a penguin and find that its flipper is 193 mm. What is the probability of that measurement under each hypothesis? The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
###Code
data = 193
flipper_map['Adelie'].pdf(data)
###Output
_____no_output_____
###Markdown
The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.Here's how we compute the likelihood of the data in each distribution.
###Code
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
###Output
_____no_output_____
###Markdown
Now we're ready to do the update. The UpdateAs usual I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
###Code
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
###Output
_____no_output_____
###Markdown
Now we can do the update in the usual way.
###Code
posterior = prior * likelihood
posterior.normalize()
posterior
###Output
_____no_output_____
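###Markdown
For reference, this update is just Bayes's theorem in action: for each hypothetical species $H$ and observed flipper length $D$, the posterior is $P(H \mid D) \propto P(H)\,p(D \mid H)$, and `normalize` divides through by the total probability of the data so the posterior probabilities sum to 1.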
###Markdown
A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement). The following function encapsulates the steps we just ran.It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
The return value is the posterior distribution.Here's the previous example again, using `update_penguin`:
###Code
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
###Output
_____no_output_____
###Markdown
As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.But culmen length *can* make this distinction, so let's use it to do a second round of classification.First we estimate distributions of culmen length for each species like this:
###Code
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
###Output
_____no_output_____
###Markdown
Now suppose we see a penguin with culmen length 48 mm.We can use this data to update the prior.
###Code
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
###Output
_____no_output_____
###Markdown
A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo.Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence.We can do better using multiple features. Naive Bayesian ClassificationTo make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, a sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions.
###Code
def update_naive(prior, data_seq, norm_maps):
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
###Output
_____no_output_____
###Markdown
It performs a series of updates, using one variable at a time, and returns the posterior `Pmf`.To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
###Code
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
###Output
_____no_output_____
###Markdown
Now suppose we find a penguin with flipper length 193 mm and culmen length 48.Here's the update:
###Code
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
###Output
_____no_output_____
###Markdown
It is almost certain to be a Chinstrap.
###Code
posterior.max_prob()
###Output
_____no_output_____
###Markdown
We can loop through the dataset and classify each penguin with these two features.
###Code
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
###Output
_____no_output_____
###Markdown
This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin.So let's see how many we got right.
###Code
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
###Output
_____no_output_____
###Markdown
There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases.Of those, 324 are classified correctly, which is almost 95%.
###Code
same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.
###Code
def accuracy(df):
"""Compute the accuracy of classification."""
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
###Output
_____no_output_____
###Markdown
The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features. Joint DistributionsI'll start by making a scatter plot of the data.
###Code
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
"""Make a scatter plot."""
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
###Output
_____no_output_____
###Markdown
Here's a scatter plot of culmen length and flipper length for the three species.
###Code
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
###Output
_____no_output_____
###Markdown
Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.The following function makes a discrete `Pmf` that approximates a normal distribution.
###Code
def make_pmf_norm(dist, sigmas=3, n=101):
"""Make a Pmf approximation to a normal distribution."""
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
###Output
_____no_output_____
###Markdown
We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species.
###Code
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
###Output
_____no_output_____
###Markdown
The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
###Code
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
###Output
_____no_output_____
###Markdown
The contours of a joint normal distribution form ellipses.In this example, because the features are uncorrelated, the ellipses are aligned with the axes.But they are not well aligned with the data.We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution. Multivariate Normal DistributionAs we have seen, a univariate normal distribution is characterized by its mean and standard deviation.A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains **variances**, which quantify the spread of the features, and the **covariances**, which quantify the relationships among them.We can use the data to estimate the means and covariance matrix for the population of penguins.First I'll select the columns we want.
###Code
features = df[[var1, var2]]
###Output
_____no_output_____
###Markdown
And compute the means.
###Code
mean = features.mean()
mean
###Output
_____no_output_____
###Markdown
We can also compute the covariance matrix:
###Code
cov = features.cov()
cov
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances.By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now.Instead, we'll pass the covariance matrix to `multivariate_normal`, which is a SciPy function that creates an object that represents a multivariate normal distribution.As arguments it takes a sequence of means and a covariance matrix:
###Code
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
###Output
_____no_output_____
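###Markdown
As a quick aside (a sketch, not part of the original chapter), the standard deviations and the correlation coefficient mentioned above can be recovered from the covariance matrix like this:
###Code
std = np.sqrt(np.diag(cov))     # standard deviations of the two features
cov / np.outer(std, std)        # correlation matrix; features.corr() gives the same result
###Output
_____no_output_____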
###Markdown
The following function makes a `multivariate_normal` object for each species.
###Code
def make_multinorm_map(df, colnames):
"""Make a map from each species to a multivariate normal."""
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
###Output
_____no_output_____
###Markdown
Here's how we make this map for the first two features, flipper length and culmen length.
###Code
multinorm_map = make_multinorm_map(df, [var1, var2])
###Output
_____no_output_____
###Markdown
Visualizing a Multivariate Normal DistributionThis section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization.I'll start by making a contour map for the distribution of features among Adélie penguins. Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
###Code
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
###Output
_____no_output_____
###Markdown
I'll make a discrete `Pmf` approximation for each of the univariate distributions.
###Code
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
###Output
_____no_output_____
###Markdown
And use them to make a mesh grid that contains all pairs of values.
###Code
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
###Output
_____no_output_____
###Markdown
The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis.In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
###Code
pos = np.dstack((X, Y))
pos.shape
###Output
_____no_output_____
###Markdown
The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values.
###Code
densities = multinorm.pdf(pos)
densities.shape
###Output
_____no_output_____
###Markdown
The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features.
###Code
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
###Output
_____no_output_____
###Markdown
Here's what the result looks like.
###Code
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
###Output
_____no_output_____
###Markdown
The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes. The following function encapsulates the steps we just did.
###Code
def make_joint(norm1, norm2, multinorm):
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
###Output
_____no_output_____
###Markdown
The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
###Code
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
###Output
_____no_output_____
###Markdown
Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications. A Less Naive ClassifierIn a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:
###Code
def update_penguin(prior, data, norm_map):
"""Update hypothetical species."""
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects.We can use it to classify a penguin with flipper length 193 and culmen length 48:
###Code
data = 193, 48
update_penguin(prior, data, multinorm_map)
###Output
_____no_output_____
###Markdown
A penguin with those measurements is almost certainly a Chinstrap.Now let's see if this classifier does any better than the naive Bayesian classifier.I'll apply it to each penguin in the dataset:
###Code
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
###Output
_____no_output_____
###Markdown
And compute the accuracy:
###Code
accuracy(df)
###Output
_____no_output_____
###Markdown
It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier. SummaryIn this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent.To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies.In this example, the non-naive classifier is only marginally better.In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement.But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes.Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak.But there *are* scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals.In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart.And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution. One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism.As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic. Exercises **Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features.Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass.Is it more accurate than the model with two features?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve? Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data.
###Code
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
###Output
_____no_output_____
###Markdown
OK, you can finish it off from here.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
CodeHere's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
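###Markdown
For reference, `update_func` above implements the discrete SIR update equations, with $s$, $i$, and $r$ as population fractions: $\Delta s = -\beta s i$, $\Delta i = \beta s i - \gamma i$, and $\Delta r = \gamma i$.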
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
0.333 0.25 0.46716293183605073
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book:1. The fraction of students who are sick at the peak of the outbreak.2. The day the outbreak peaks.3. The fraction of students who are sick at the end of the semester.Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters.Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max()And the index of the largest value like this: I.idxmax()You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# Solution goes here
def peak_students(tf):
    """Fraction of students sick at the peak of the outbreak."""
    return tf.I.max()
# Solution goes here
def day_peak_outbreak(tf):
    """Day on which the outbreak peaks."""
    return tf.I.idxmax()
# Solution goes here
def fraction_end(tf):
    """Fraction of students sick at the end of the semester."""
    return tf.I.iloc[-1]
###Output
_____no_output_____
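###Markdown
A quick check of these metrics, using the `results` TimeFrame computed in the example above (a usage sketch, not part of the original notebook):
###Code
peak_students(results), day_peak_outbreak(results), fraction_end(results)
###Output
_____no_output_____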
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
system.init.S
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points.Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
Saving figure to file figs/chap12-fig02.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
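###Markdown
Written out, the function above computes $A + \dfrac{K - A}{\left(C + Q\,e^{-B(x - M)}\right)^{1/\nu}}$, so $A$ and $K$ set the lower and upper bounds, $M$ locates the transition, and $B$ controls how steep it is.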
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending.`M` is chosen so the transition happens around \$500.`K` is the maximum reduction in `beta`, 20%.`B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
###Code
def compute_factor_b(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=.00001)
percent_reduction = compute_factor_b(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
def compute_factor_k(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=1, B=0.01)
percent_reduction = compute_factor_k(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
def compute_factor_m(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=100, K=0.2, B=0.01)
percent_reduction = compute_factor_m(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26676674548378243 0.1743247948919835
2 0.9666666666666667 0.26683150821044227 0.16186489064019816
3 0.9555555555555556 0.2669380091810597 0.15027269543863153
4 0.9444444444444445 0.26711285672828566 0.13961230443995154
5 0.9333333333333333 0.2673991295087062 0.12996358194403557
6 0.9222222222222223 0.2678657473308061 0.12143480916638649
7 0.9111111111111112 0.26862081538342375 0.11418037085597799
8 0.9 0.26982839154517113 0.10842378710730838
9 0.888888888888889 0.2717238786680829 0.10448337420345766
10 0.8777777777777778 0.2746135281348078 0.10278809822691182
11 0.8666666666666667 0.27882836825375706 0.10384617953405528
12 0.8555555555555556 0.28459609475799963 0.10808357014846459
13 0.8444444444444446 0.291836044586543 0.11545011940928851
14 0.8333333333333334 0.3 0.12487748027930945
15 0.8222222222222223 0.308163955413457 0.13414988309928788
16 0.8111111111111111 0.3154039052420003 0.14076454366302038
17 0.8 0.32117163174624286 0.1432024907477738
18 0.788888888888889 0.32538647186519215 0.14139228717345576
19 0.7777777777777778 0.3282761213319171 0.13622953434928864
20 0.7666666666666667 0.33017160845482885 0.1288888309908195
21 0.7555555555555555 0.33137918461657623 0.12040298646488024
22 0.7444444444444445 0.3321342526691939 0.11152660171117068
23 0.7333333333333334 0.3326008704912938 0.1027509074946692
24 0.7222222222222223 0.3328871432717143 0.09436692248570877
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises **Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Answer:** It doubles the maximum number of doses and changes the shape of the curve of total infections versus budget allocation. **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.How might you incorporate the effect of quarantine in the SIR model?**Answer:** I would write a function that updates `beta`, similar to `add_hand_washing`: the effect of quarantine is to decrease `beta`, the rate at which students contact the virus. I would expect quarantine to have a greater impact than the reduction we get from `compute_factor`, so rather than computing the factor from spending, we would pass the reduction directly into the function itself.
###Code
# Solution goes here
def add_quarantine(system, quarantine_effect):
    """Modifies system to model the effect of quarantine.
    system: System object
    quarantine_effect: fractional reduction in beta, from 0 to 1
    """
    system.beta *= (1 - quarantine_effect)
###Output
_____no_output_____
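###Markdown
A minimal usage sketch (the 30% reduction below is a hypothetical value, not from the original notebook): apply the quarantine factor to a system and rerun the simulation.
###Code
system = make_system(beta, gamma)
add_quarantine(system, 0.3)   # hypothetical: quarantine cuts the contact rate by 30%
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____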
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
CodeHere's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book:1. The fraction of students who are sick at the peak of the outbreak.2. The day the outbreak peaks.3. The fraction of students who are sick at the end of the semester.Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters.Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max()And the index of the largest value like this: I.idxmax()You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points.Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
_____no_output_____
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
_____no_output_____
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending.`M` is chosen so the transition happens around \$500.`K` is the maximum reduction in `beta`, 20%.`B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
_____no_output_____
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
# Solution goes here
###Output
_____no_output_____
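###Markdown
One possible starting point for the first exercise (an added sketch, reusing the functions defined above): halve the price per dose, rebuild the dose array, and rerun the sweep.
###Code
# Added sketch: rerun the dose sweep assuming the vaccine costs 50 USD per dose.
# Note that sweep_doses reads the global price_per_dose when computing the remaining spending.
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
         ylabel='Total fraction infected',
         title='Total infections vs. doses (50 USD per dose)',
         legend=False)
###Output
_____no_output_____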
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0). Assignment 12 Completed by: Philip Tanofsky
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
CodeHere's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
0.333 0.25 0.46716293183605073
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max() And the index of the largest value like this: I.idxmax() You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# The fraction of students who are sick at the peak of the outbreak
def calc_peak_infected(results):
"""Fraction of population sick at the peak of the outbreak
results: DataFrame with columns S, I, R
returns: fraction of population
"""
    # Return the appropriate calculation
return results.I.max()
# Example
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_peak_infected(results))
# Solution goes here
def calc_day_of_outbreak_peak(results):
"""The day of the outbreak peaks
results: DataFrame with columns S, I, R
returns: day of peak
"""
    # Return the appropriate calculation
return results.I.idxmax()
# Example
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_day_of_outbreak_peak(results))
# Solution goes here
def calc_infected_at_final(results):
"""Fraction of students sick at the end of the semester.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
    # Return the appropriate calculation
return get_last_value(results.I)
# Example
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_infected_at_final(results))
###Output
0.333 0.25 0.0006741943156034474
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
Saving figure to file figs/chap12-fig02.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
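###Markdown
Before the exercise below, here is a small added comparison (not part of the original notebook) that shows how `B` changes the steepness of the transition, using the `logistic` function defined above.
###Code
# Added illustration: the same transition location and height, two different steepness values
xs = linspace(0, 1200, 121)
plot(xs, logistic(xs, M=500, K=0.2, B=0.005), label='B = 0.005 (gradual)')
plot(xs, logistic(xs, M=500, K=0.2, B=0.05), label='B = 0.05 (abrupt)')
decorate(xlabel='Spending (USD)',
         ylabel='Reduction factor',
         title='Effect of B on the transition steepness')
###Output
_____no_output_____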
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. **Answer:** Lowering `M` shifts the rise of the curve to the left, while increasing `M` shifts it to the right. `K` sets the height at which the curve plateaus: with the initial value of 0.2 the graph levels off at a 20 percent reduction, decreasing `K` to 0.1 makes it level off at 10 percent, and increasing it has the opposite effect (a `K` of 0.5 plateaus at a 50 percent reduction). `B` controls the steepness of the rise: decreasing it from 0.01 to 0.005 gives a curve that looks nearly linear, while increasing it to 0.05 makes the curve jump almost immediately to the plateau once the reduction starts to grow. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26683150821044227 0.17458071882622528
2 0.9666666666666667 0.26711285672828566 0.16290983834857686
3 0.9555555555555556 0.2678657473308061 0.15350834947768177
4 0.9444444444444445 0.26982839154517113 0.1485650923152827
5 0.9333333333333333 0.2746135281348078 0.15294595061102179
6 0.9222222222222223 0.28459609475799963 0.1749644150235239
7 0.9111111111111112 0.3 0.21734316168444845
8 0.9 0.3154039052420003 0.2590710444883414
9 0.888888888888889 0.32538647186519215 0.27840288410342784
10 0.8777777777777778 0.33017160845482885 0.2779145346228302
11 0.8666666666666667 0.3321342526691939 0.2673574966927026
12 0.8555555555555556 0.3328871432717143 0.25279694563572175
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending?
###Code
# Solution goes here
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig14.pdf')
###Output
Saving figure to file figs/chap12-fig14.pdf
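###Markdown
As an added check (not part of the original assignment), we can read the optimum directly off the sweep with the `idxmin` and `min` methods, assuming `infected_sweep` behaves like a pandas `Series` (as the `Series` methods mentioned earlier suggest).
###Code
# Added check: dose count that minimizes total infections, and the minimum value itself
infected_sweep.idxmin(), infected_sweep.min()
###Output
_____no_output_____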
###Markdown
**Answer:** The optimal allocation is to spend $500 on a total of 10 vaccine doses and put the rest of the budget toward the hand-washing campaign. Cutting the price of the vaccine in half not only makes more doses optimal, it also brings the total fraction infected below 0.11, whereas at $100 per dose the optimal choice of 4 doses only gets the total fraction infected down to about 0.15. Note the different ranges on the y axis of the two plots. **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
def make_system_siqr(beta, gamma):
"""Make a system object for the SIQR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, Q=0, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func_siqr(state, t, system):
"""Update the SIQR model.
state: State with variables S, I, Q, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, q, r = state
# beta: contact rate in days
# gamma: recovery rate in days
# Infected become quarantined
quarantined = i
# Infected * rate of contact * suspectible
infected = system.beta * i * s
# Recovered is recovery rate for those in quarantine
recovered = system.gamma * q
s -= infected
i += infected - quarantined
q += quarantined - recovered
r += recovered
return State(S=s, I=i, Q=q, R=r)
def plot_results(S, I, Q, R):
"""Plot the results of a SIQR model.
S: TimeSeries
I: TimeSeries
Q: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(Q, '.', label='Quarantined')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
# Solution goes here
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system_siqr(beta, gamma)
results_ex = run_simulation(system, update_func_siqr)
results_ex.head()
# Plot the results
plot_results(results_ex.S, results_ex.I, results_ex.Q, results_ex.R)
savefig('figs/chap11-fig111.pdf')
###Output
Saving figure to file figs/chap11-fig111.pdf
###Markdown
**Answer:** A very interesting lesson, learned during the time of the Covid-19 pandemic: if infected individuals are immediately quarantined, the susceptible fraction of the population remains very high, almost at 1 (or 100%), while the infected fraction quickly goes to 0.
###Code
plot_results(results_ex.S, results_ex.I, results_ex.Q, results_ex.R)
###Output
_____no_output_____
###Markdown
Table of Contents: 1 Modeling and Simulation in Python; 1.0.1 Code; 1.0.2 Metrics; 1.0.3 What if?; 1.0.4 Logistic function; 1.0.5 Hand washing; 1.0.6 Optimization; 1.0.7 Exercises. Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
CodeHere's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
results
###Output
_____no_output_____
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max() And the index of the largest value like this: I.idxmax() You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
results.index
def peak_sick(results):
    """Return the peak infected fraction and the day on which it occurs."""
    return results.I.max(), results.I.idxmax()
peak_sick(results)
def sick_when_semester_end(results):
    """Return the infected fraction and the day at the end of the semester."""
    return results.I.iloc[-1], results.index[-1]
sick_when_semester_end(results)
###Output
_____no_output_____
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
Saving figure to file figs/chap12-fig01.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
0.0 0.468320811028781
0.1 0.30650802853979753
0.2 0.16136545700638427
0.30000000000000004 0.0728155898425179
0.4 0.03552021675299155
0.5 0.019688715782459176
0.6000000000000001 0.011622057998337987
0.7000000000000001 0.006838737800619332
0.8 0.003696496253713877
0.9 0.0014815326722661948
1.0 -0.00016121210941239666
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
# savefig('figs/chap12-fig02.pdf')
###Output
_____no_output_____
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
0.0 0.3328871432717143 0.4667702312363652
100.0 0.3321342526691939 0.46414165040064037
200.0 0.33017160845482885 0.4572170063132055
300.0 0.32538647186519215 0.4398872029120663
400.0 0.3154039052420003 0.40163064627138245
500.0 0.3 0.3370342594898199
600.0 0.28459609475799963 0.26731703056804546
700.0 0.2746135281348078 0.22184699045990752
800.0 0.26982839154517113 0.20079159841614402
900.0 0.2678657473308061 0.1923921833925878
1000.0 0.26711285672828566 0.18921320781833872
1100.0 0.26683150821044227 0.18803175228016467
1200.0 0.26672740341296003 0.1875955039953746
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 21)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
Saving figure to file figs/chap12-fig03.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
0 0.9888888888888889 0.26672740341296003 0.1875955039953746
1 0.9777777777777779 0.26683150821044227 0.17458071882622528
2 0.9666666666666667 0.26711285672828566 0.16290983834857686
3 0.9555555555555556 0.2678657473308061 0.15350834947768177
4 0.9444444444444445 0.26982839154517113 0.1485650923152827
5 0.9333333333333333 0.2746135281348078 0.15294595061102179
6 0.9222222222222223 0.28459609475799963 0.1749644150235239
7 0.9111111111111112 0.3 0.21734316168444845
8 0.9 0.3154039052420003 0.2590710444883414
9 0.888888888888889 0.32538647186519215 0.27840288410342784
10 0.8777777777777778 0.33017160845482885 0.2779145346228302
11 0.8666666666666667 0.3321342526691939 0.2673574966927026
12 0.8555555555555556 0.3328871432717143 0.25279694563572175
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
Saving figure to file figs/chap12-fig04.pdf
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
# Solution goes here
num_students = 90
budget = 1200
price_per_dose = 50   # sweep_doses reads this global when computing the remaining spending
new_max_doses = int(budget / price_per_dose)
new_dose_array = linrange(new_max_doses, endpoint=True)
infected_sweep = sweep_doses(new_dose_array)
plot(infected_sweep)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 12. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
CodeHere's the code from the previous notebook that we'll need.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
**Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max() And the index of the largest value like this: I.idxmax() You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
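###Markdown
One possible way to approach the exercise above (an added sketch, using the `max` and `idxmax` methods from the hint and the `get_last_value` helper used earlier):
###Code
# Added sketch: the three metrics from the exercise
def peak_infected(results):
    """Fraction of students sick at the peak of the outbreak."""
    return results.I.max()
def peak_day(results):
    """Day on which the outbreak peaks."""
    return results.I.idxmax()
def infected_at_end(results):
    """Fraction of students sick at the end of the semester."""
    return get_last_value(results.I)
peak_infected(results), peak_day(results), infected_at_end(results)
###Output
_____no_output_____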
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
results = run_simulation(system, update_func)
calc_total_infected(results)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap12-fig01.pdf')
###Output
_____no_output_____
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap12-fig02.pdf')
###Output
_____no_output_____
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap12-fig03.pdf')
###Output
_____no_output_____
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
    results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
###Output
_____no_output_____
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap12-fig04.pdf')
###Output
_____no_output_____
###Markdown
Exercises**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
# Solution goes here
###Output
_____no_output_____ |
11_program_specific_methods/2_qe_methods/1_pdos/tutorial.ipynb | ###Markdown
pDOS analysis of the Quantum Espresso calculations Table of Contents 1. [General setups](setups) 2. [Run QE pDOS calculations](run_qe) 3. [Compute the pDOS](compute_pdos) 4. [Optional cleanup](cleanup) A. Learning objectives - to compute various types of pDOS based on the QE calculations B. Use cases - computing pDOS C. Functions - `libra_py` - `data_conv` - [`MATRIX2nparray`](MATRIX2nparray) - `pdos` - [`QE_pdos`](QE_pdos) D. Classes and class members: None 1. General setups [Back to TOC](TOC)
###Code
import os
import sys
if sys.platform=="cygwin":
from cyglibra_core import *
elif sys.platform=="linux" or sys.platform=="linux2":
from liblibra_core import *
import util.libutil as comn
from libra_py import pdos, data_conv
import matplotlib.pyplot as plt # plots
#matplotlib.use('Agg')
#%matplotlib inline
import numpy as np
#from matplotlib.mlab import griddata
plt.rc('axes', titlesize=24) # fontsize of the axes title
plt.rc('axes', labelsize=20) # fontsize of the x and y labels
plt.rc('legend', fontsize=20) # legend fontsize
plt.rc('xtick', labelsize=16) # fontsize of the tick labels
plt.rc('ytick', labelsize=16) # fontsize of the tick labels
plt.rc('figure.subplot', left=0.2)
plt.rc('figure.subplot', right=0.95)
plt.rc('figure.subplot', bottom=0.13)
plt.rc('figure.subplot', top=0.88)
colors = {}
colors.update({"11": "#8b1a0e"}) # red
colors.update({"12": "#FF4500"}) # orangered
colors.update({"13": "#B22222"}) # firebrick
colors.update({"14": "#DC143C"}) # crimson
colors.update({"21": "#5e9c36"}) # green
colors.update({"22": "#006400"}) # darkgreen
colors.update({"23": "#228B22"}) # forestgreen
colors.update({"24": "#808000"}) # olive
colors.update({"31": "#8A2BE2"}) # blueviolet
colors.update({"32": "#00008B"}) # darkblue
colors.update({"41": "#2F4F4F"}) # darkslategray
clrs_index = ["11", "21", "31", "41", "12", "22", "32", "13","23", "14", "24"]
###Output
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >, false> > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, false> > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > >, false> > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > > already registered; second conversion method ignored.
return f(*args, **kwds)
/home/alexey/Conda/Miniconda3/envs/libra/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > >, false> > already registered; second conversion method ignored.
return f(*args, **kwds)
###Markdown
=============== System selection: CdSe ==================

This tutorial demonstrates the calculations for the CdSe system, so we first need to get into that directory.
###Code
os.chdir("CdSe")
print(os.getcwd())
###Output
/mnt/c/cygwin/home/Alexey-user/compchem-cybertraining/Tutorials_Libra/11_program_specific_methods/2_qe_methods/1_pdos/CdSe
###Markdown
2. Run QE pDOS calculations [Back to TOC](TOC)

In this tutorial, we are not going into the details of the pDOS calculations with QE; they are covered in a different tutorial. However, for completeness, the input/submit/output files are included in this tutorial folder. On the UB CCR cluster, the calculations are done with the following command:

    sbatch submit.slm

The calculations produced a set of `x.pdos_atm*` files, which we placed into the `pdos` folder and archived. Here, we will mimic the calculations by simply unzipping the `pdos` folder.
###Code
os.system("unzip pdos.zip -d pdos")
###Output
_____no_output_____
###Markdown
The files in the newly created directory contain orbital-resolved densities of states. We use these files as the input to compute various kinds of pDOS.

3. Compute the pDOS [Back to TOC](TOC)

We are going to use the `QE_pdos` function from the `libra_py.pdos` module.
###Code
help(pdos.QE_pdos)
E_f = 2.1608
Cd_p = [["p"], [1, 2], ["Cd"] ]
Cd_d = [["d"], [1, 2], ["Cd"] ]
Se_p = [["p"], [3, 4], ["Se"] ]
Se_d = [["d"], [3, 4], ["Se"] ]
projections = [ Cd_p, Cd_d, Se_p, Se_d ]
E, pdosa, pdosb = pdos.QE_pdos("pdos/x.pdos_atm#", -10.0, 10.0, 0.1, projections,\
E_f, "pdos_", 1, 0.01, 0.1, nspin=2)
###Output
multiplication factor is = 10
original grid spacing = 0.1
new grid spacing = 0.01
gaussian variance = 0.1
multiplication factor is = 10
original grid spacing = 0.1
new grid spacing = 0.01
gaussian variance = 0.1
###Markdown
Here, we requested the densities of p and d states resolved by the Cd and Se atom types. This is defined by the projections.

**Note that we should be using atom indices starting from 1, not from 0, when we define the projections.**

The `x.pdos.in` file used an energy grid spacing of 0.1 eV, so we pass it as the `de` argument of the `QE_pdos` function. However, we want to make the pDOS a bit smoother, so we set the new energy grid spacing to 0.01 eV. We also broaden each line by 0.1 eV, which is comparable to the original energy grid spacing. To make a smooth plot, we turn on the convolution of the original data with Gaussians by setting the `do_convolve` argument to 1. Finally, our calculations were conducted as spin-polarized (unrestricted), so we need to set the `nspin` parameter to 2 to properly read the information. We also need to set `E_f` to the correct value (taken from the output of the single-point calculation), so that the pDOS we plot next is centered on the Fermi level.

The function `QE_pdos` returns the new energy grid, as well as the requested alpha and beta projections, all in the MATRIX format.
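As a rough picture of what the convolution step does (this is only an illustration of the idea, not the library's internal implementation), broadening a coarse DOS onto a finer grid can be sketched as:

```python
import numpy as np

def broaden(E_coarse, dos_coarse, de_old=0.1, de_new=0.01, sigma=0.1):
    """Spread each coarse DOS point into a unit-area Gaussian of width sigma
    and accumulate the result on a finer energy grid with spacing de_new."""
    E_fine = np.arange(E_coarse.min(), E_coarse.max() + de_new, de_new)
    dos_fine = np.zeros_like(E_fine)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for e0, d0 in zip(E_coarse, dos_coarse):
        # each coarse point contributes its integrated weight d0 * de_old
        dos_fine += d0 * de_old * norm * np.exp(-0.5 * ((E_fine - e0) / sigma) ** 2)
    return E_fine, dos_fine

# toy usage: a single DOS peak at E = 0 on a 0.1 eV grid
E_demo = np.arange(-2.0, 2.0, 0.1)
dos_demo = np.where(np.isclose(E_demo, 0.0), 10.0, 0.0)
E_smooth, dos_smooth = broaden(E_demo, dos_demo)
```

For convenience, we convert the MATRIX objects returned by `QE_pdos` to numpy arrays using the `MATRIX2nparray` function: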
###Code
e_grid = data_conv.MATRIX2nparray(E)
proja = data_conv.MATRIX2nparray(pdosa)
projb = data_conv.MATRIX2nparray(pdosb)
e_grid.shape, proja.shape, projb.shape
###Output
_____no_output_____
###Markdown
Finally, let's define a plotting function to produce nice pictures. This function plots the alpha pDOS on the positive half of the y axis and the beta pDOS on the negative half. Because the system is spin-polarized, the alpha and beta states differ, so the two pDOSs are not symmetric with respect to reflection about the x axis.
###Code
def plot(_energy, _pdosa, _pdosb):
plt.rc('axes', titlesize=18) # fontsize of the axes title
plt.rc('axes', labelsize=18) # fontsize of the x and y labels
plt.rc('legend', fontsize=12) # legend fontsize
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.ylim(-5.0, 5.0)
plt.xlim(-5.0, 7.0)
#======== Now lets plot what we have computed ===========
plt.figure(1, figsize=(36, 24), dpi=300, frameon=False)
lw = 3
plt.title('CdSe pDOS')
plt.xlabel('$E - E_f, eV$')
plt.ylabel('pDOS, 1/eV')
plt.plot(_energy[:,0], _pdosa[:, 0], label='Cd(p)', linewidth=lw, color = colors["11"])
plt.plot(_energy[:,0], _pdosa[:, 1], label='Cd(d)', linewidth=lw, color = colors["21"])
plt.plot(_energy[:,0], _pdosa[:, 2], label='Se(p)', linewidth=lw, color = colors["31"])
plt.plot(_energy[:,0], _pdosa[:, 3], label='Se(d)', linewidth=lw, color = colors["41"])
plt.plot(_energy[:,0], -_pdosb[:, 0], linewidth=lw, color = colors["11"])
plt.plot(_energy[:,0], -_pdosb[:, 1], linewidth=lw, color = colors["21"])
plt.plot(_energy[:,0], -_pdosb[:, 2], linewidth=lw, color = colors["31"])
plt.plot(_energy[:,0], -_pdosb[:, 3], linewidth=lw, color = colors["41"])
plt.legend()
plt.tight_layout()
plt.savefig("pdos.png")
plt.show()
plt.close()
plot(e_grid, proja, projb)
###Output
_____no_output_____
###Markdown
Note that a projection is defined by meeting all three conditions: orbital type, atom indices, and element type. All three criteria must be satisfied for the projection to pick the corresponding data from the `pdos` folder (and then from the corresponding place in the file).

This can be used for convenience. For instance, in the example above, we didn't actually need to know the indices of the Cd and Se atoms: we could simply have listed the entire range of possible indices, or even a broader range than necessary, and let the element-type condition do the filtering.
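A minimal sketch of this selection rule (just an illustration; the actual matching is done inside `libra_py.pdos`):

```python
def matches(projection, orbital, atom_index, element):
    """Return True if an orbital contribution, identified by its orbital type,
    atom index and element symbol, is picked up by a projection of the form
    [[orbital types], [atom indices], [element symbols]]."""
    orbitals, atom_indices, elements = projection
    return (orbital in orbitals) and (atom_index in atom_indices) and (element in elements)

Cd_p = [["p"], [1, 2], ["Cd"]]
print(matches(Cd_p, "p", 1, "Cd"))   # True:  all three conditions are met
print(matches(Cd_p, "p", 3, "Cd"))   # False: atom 3 is not in the index list
```

Using a deliberately broad index range, the same projections can therefore be written as: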
###Code
E_f = 2.1608
all_atoms = list(range(1,10))
Cd_p = [["p"], all_atoms, ["Cd"] ]
Cd_d = [["d"], all_atoms, ["Cd"] ]
Se_p = [["p"], all_atoms, ["Se"] ]
Se_d = [["d"], all_atoms, ["Se"] ]
projections = [ Cd_p, Cd_d, Se_p, Se_d ]
E, pdosa, pdosb = pdos.QE_pdos("pdos/x.pdos_atm#", -10.0, 10.0, 0.1, projections,\
E_f, "pdos_", 1, 0.01, 0.1, nspin=2)
e_grid = data_conv.MATRIX2nparray(E)
proja = data_conv.MATRIX2nparray(pdosa)
projb = data_conv.MATRIX2nparray(pdosb)
plot(e_grid, proja, projb)
###Output
multiplication factor is = 10
original grid spacing = 0.1
new grid spacing = 0.01
gaussian variance = 0.1
multiplication factor is = 10
original grid spacing = 0.1
new grid spacing = 0.01
gaussian variance = 0.1
###Markdown
4. Optional cleanup [Back to TOC](TOC)

Change 0 to 1 if you want to run the instructions below, which remove all the files generated by this tutorial. Be sure not to run them in a different directory (in case you have other files with matching names).
###Code
if 0:
os.system("rm -r pdos")
os.system("rm pdos_*")
###Output
_____no_output_____
###Markdown
===================== Finishing the CdSe system ==================
###Code
os.chdir("../")
print(os.getcwd())
###Output
/mnt/c/cygwin/home/Alexey-user/compchem-cybertraining/Tutorials_Libra/11_program_specific_methods/2_qe_methods/1_pdos
|
examples/experimental/PromiseTensor Mockup.ipynb | ###Markdown
Step 1: Action Events
###Code
alices_params = th.tensor([1,2,3,4]).send(alice)
bobs_params = th.tensor([1,2,3,4]).send(bob)
input_data = th.tensor([1,2,3,4])
# these ways of forming promises are identical
alice_data_promise = sy.promise(alice, id=input_data.id)
bobs_data_promise = input_data.promise(bob)
alices.promise_queue = {}
# key = tensor ID
# value = list of functions which required this tensor ID
# whenever alices.recv_obj is called, after it deserializes the object, it checks the keys in the queue to see if there are any outstanding commands
alices_result_promise = alices_params * alice_data_promise
bobs_result_promise = bobs_params * bobs_data_promise
new_averaged_model = (alices_result_promise.get() + bobs_result_promise.get())/2
# ASYNC using sockets
# input_data.send(bob)
# input_data.send(alice)
alice_data_promise.fulfill(input_data)
bobs_data_promise.fulfill(input_data)
print(new_averaged_model.get())
yp = y.promise(bob)
zp = x.add(yp)
ap = zp * zp
bp = ap.send(alice)
cp = ap.send(bill)
y.move(bob)
###Output
_____no_output_____
###Markdown
Scratch
###Code
class Promise():
    """Scratch sketch: record operations now, replay them once the promised tensor arrives."""
    def __init__(self, id):
        self.operations = list()  # deferred callables to apply on fulfill
        self.trigger_id = id      # id of the tensor this promise waits for
        self.result = None
    def fulfill(self, tensor):
        # the promised tensor has arrived - replay the recorded operations on it
        self.result = tensor
        for op in self.operations:
            self.result = op(self.result, tensor)
        return self.result
    def __add__(self, other_promise):
        # defer the addition until the concrete value is available
        self.operations.append(lambda acc, t: acc + t)
        return self
x = Promise(10)
y = x + x
y.fulfill(5)  # -> 10: the deferred x + x is evaluated once the value 5 arrives
###Output
_____no_output_____ |
02-training-and-regularization-tactics/05-practice-exercise-medium-leaky-relu-div-KL.ipynb | ###Markdown
Importing and formatting data

Import the MNIST dataset of 60,000 training images and 10,000 testing images.
###Code
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
# For drawing the MNIST digits as well as plots to help us evaluate performance we
# will make extensive use of matplotlib
from matplotlib import pyplot as plt
# All of the Keras datasets are in keras.datasets
from tensorflow.keras.datasets import mnist
# Keras has already split the data into training and test data
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Training images is a list of 60,000 2D lists.
# Each 2D list is 28 by 28—the size of the MNIST pixel data.
# Each item in the 2D array is an integer from 0 to 255 representing its grayscale
# intensity where 0 means white, 255 means black.
print(len(training_images), training_images[0].shape)
# training_labels are a value between 0 and 9 indicating which digit is represented.
# The first item in the training data is a 5
print(len(training_labels), training_labels[0])
###Output
60000 (28, 28)
60000 5
###Markdown
Visualize the first 100 images in the dataset
###Code
# Lets visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
###Output
_____no_output_____
###Markdown
Fixing the data format: using `numpy.reshape` and `keras.utils.to_categorical`
###Code
from tensorflow.keras.utils import to_categorical
# Preparing the dataset
# Setup train and test splits
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# 28 x 28 = 784, because that's the dimensions of the MNIST data.
image_size = 784
# Reshaping the training_images and test_images to lists of vectors with length 784
# instead of lists of 2D arrays. Same for the test_images
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
# [
# [1,2,3]
# [4,5,6]
# ]
# => [1,2,3,4,5,6]
# Just showing the changes...
print("training data: ", training_images.shape, " ==> ", training_data.shape)
print("test data: ", test_images.shape, " ==> ", test_data.shape)
# Create 1-hot encoded vectors using to_categorical
num_classes = 10 # Because it's how many digits we have (0-9)
# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
# Recall that before this transformation, training_labels[0] was the value 5. Look now:
print(training_labels[0])
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Using Leaky ReLU is slightly different in Keras, which can be annoying.
# Additionally, Keras allows us to choose any slope we want for the "leaky" part
# (f(x) = x for x > 0 and alpha * x otherwise) rather than a fixed slope such as 0.01.
from tensorflow.keras.layers import LeakyReLU
# Sequential models are a series of layers applied linearly.
medium_model = Sequential()
# The first layer must specify its input_shape.
# This is how the first two layers are added, the input layer and the hidden layer.
medium_model.add(Dense(units=30, input_shape=(image_size,)))
medium_model.add(LeakyReLU(alpha=.1))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.09))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.08))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.07))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.06))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.05))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.04))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.03))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.02))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.01))
# This is how the output layer gets added, the 'softmax' activation function ensures
# that the sum of the values in the output nodes is 1. Softmax is very
# common in classification networks.
medium_model.add(Dense(units=num_classes, activation='softmax'))
# This function provides useful text data for our network
medium_model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 30) 23550
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_1 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_2 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_3 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_4 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_5 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_6 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_7 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_8 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_9 (Dense) (None, 30) 930
_________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 30) 0
_________________________________________________________________
dense_10 (Dense) (None, 10) 310
=================================================================
Total params: 32,230
Trainable params: 32,230
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compiling and training the model
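The loss used in this practice exercise is the Kullback-Leibler divergence, $D_{KL}(p\,\|\,q) = \sum_i p_i \log\frac{p_i}{q_i}$, between the one-hot target distribution $p$ and the predicted softmax distribution $q$. Since the targets are one-hot, $\sum_i p_i \log p_i = 0$ and the KL divergence reduces to the categorical cross-entropy $-\sum_i p_i \log q_i$, so training should behave very much like the usual cross-entropy setup.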
###Code
# sgd stands for stochastic gradient descent.
# kullback_leibler_divergence is the loss being tried in this practice exercise; for one-hot labels it matches categorical cross-entropy.
# accuracy is the percent of predictions that were correct.
medium_model.compile(optimizer="sgd", loss='kullback_leibler_divergence', metrics=['accuracy'])
# The network will make predictions for 128 flattened images per correction.
# It will make a prediction on each item in the training set 30 times (30 epochs)
# And 10% of the data will be used as validation data.
history = medium_model.fit(training_data, training_labels, batch_size=128, epochs=30, verbose=True, validation_split=.1)
###Output
Train on 54000 samples, validate on 6000 samples
Epoch 1/30
54000/54000 [==============================] - 3s 58us/sample - loss: 0.9541 - accuracy: 0.6914 - val_loss: 0.3781 - val_accuracy: 0.8925
Epoch 2/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.3919 - accuracy: 0.8871 - val_loss: 0.2893 - val_accuracy: 0.9142
Epoch 3/30
54000/54000 [==============================] - 2s 44us/sample - loss: 0.3105 - accuracy: 0.9101 - val_loss: 0.2468 - val_accuracy: 0.9303
Epoch 4/30
54000/54000 [==============================] - 2s 44us/sample - loss: 0.2623 - accuracy: 0.9242 - val_loss: 0.2073 - val_accuracy: 0.9422
Epoch 5/30
54000/54000 [==============================] - 3s 47us/sample - loss: 0.2329 - accuracy: 0.9311 - val_loss: 0.1988 - val_accuracy: 0.9452
Epoch 6/30
54000/54000 [==============================] - 3s 48us/sample - loss: 0.2114 - accuracy: 0.9375 - val_loss: 0.1736 - val_accuracy: 0.9510
Epoch 7/30
54000/54000 [==============================] - 2s 44us/sample - loss: 0.1954 - accuracy: 0.9421 - val_loss: 0.1814 - val_accuracy: 0.9505
Epoch 8/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1810 - accuracy: 0.9458 - val_loss: 0.1673 - val_accuracy: 0.9543
Epoch 9/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1708 - accuracy: 0.9491 - val_loss: 0.1552 - val_accuracy: 0.9577
Epoch 10/30
54000/54000 [==============================] - 2s 44us/sample - loss: 0.1636 - accuracy: 0.9524 - val_loss: 0.1542 - val_accuracy: 0.9548
Epoch 11/30
54000/54000 [==============================] - 2s 46us/sample - loss: 0.1542 - accuracy: 0.9542 - val_loss: 0.1452 - val_accuracy: 0.9582
Epoch 12/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1467 - accuracy: 0.9563 - val_loss: 0.1457 - val_accuracy: 0.9580
Epoch 13/30
54000/54000 [==============================] - 3s 47us/sample - loss: 0.1417 - accuracy: 0.9578 - val_loss: 0.1865 - val_accuracy: 0.9468
Epoch 14/30
54000/54000 [==============================] - 2s 44us/sample - loss: 0.1348 - accuracy: 0.9592 - val_loss: 0.1423 - val_accuracy: 0.9592
Epoch 15/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1290 - accuracy: 0.9610 - val_loss: 0.1434 - val_accuracy: 0.9602
Epoch 16/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1271 - accuracy: 0.9617 - val_loss: 0.1352 - val_accuracy: 0.9635
Epoch 17/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1205 - accuracy: 0.9636 - val_loss: 0.1347 - val_accuracy: 0.9623
Epoch 18/30
54000/54000 [==============================] - 2s 46us/sample - loss: 0.1163 - accuracy: 0.9642 - val_loss: 0.1252 - val_accuracy: 0.9628
Epoch 19/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.1117 - accuracy: 0.9653 - val_loss: 0.1454 - val_accuracy: 0.9582
Epoch 20/30
54000/54000 [==============================] - 3s 46us/sample - loss: 0.1107 - accuracy: 0.9663 - val_loss: 0.1320 - val_accuracy: 0.9637
Epoch 21/30
54000/54000 [==============================] - 3s 48us/sample - loss: 0.1058 - accuracy: 0.9674 - val_loss: 0.1431 - val_accuracy: 0.9577
Epoch 22/30
54000/54000 [==============================] - 2s 46us/sample - loss: 0.1031 - accuracy: 0.9677 - val_loss: 0.1265 - val_accuracy: 0.9623
Epoch 23/30
54000/54000 [==============================] - 3s 47us/sample - loss: 0.0998 - accuracy: 0.9689 - val_loss: 0.1432 - val_accuracy: 0.9603
Epoch 24/30
54000/54000 [==============================] - 3s 47us/sample - loss: 0.0965 - accuracy: 0.9711 - val_loss: 0.1280 - val_accuracy: 0.9625
Epoch 25/30
54000/54000 [==============================] - 3s 47us/sample - loss: 0.0950 - accuracy: 0.9702 - val_loss: 0.1239 - val_accuracy: 0.9643
Epoch 26/30
54000/54000 [==============================] - 3s 48us/sample - loss: 0.0917 - accuracy: 0.9715 - val_loss: 0.1233 - val_accuracy: 0.9645
Epoch 27/30
54000/54000 [==============================] - 3s 48us/sample - loss: 0.0903 - accuracy: 0.9720 - val_loss: 0.1300 - val_accuracy: 0.9640
Epoch 28/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.0873 - accuracy: 0.9730 - val_loss: 0.1353 - val_accuracy: 0.9620
Epoch 29/30
54000/54000 [==============================] - 2s 45us/sample - loss: 0.0841 - accuracy: 0.9744 - val_loss: 0.1206 - val_accuracy: 0.9660
Epoch 30/30
54000/54000 [==============================] - 2s 46us/sample - loss: 0.0843 - accuracy: 0.9745 - val_loss: 0.1322 - val_accuracy: 0.9618
###Markdown
Evaluating our model
###Code
loss, accuracy = medium_model.evaluate(test_data, test_labels, verbose=True)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
print(history.history['accuracy'])
print(history.history['val_accuracy'])
###Output
10000/10000 [==============================] - 1s 87us/sample - loss: 0.1758 - accuracy: 0.9538
###Markdown
Look at specific results
###Code
from numpy import argmax
# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.
predictions = medium_model.predict(test_data)
# For pagination & style in second cell
page = 0
fontdict = {'color': 'black'}
# Repeatedly running this cell will page through the predictions
for i in range(16):
ax = plt.subplot(4, 4, i+1)
ax.axis('off')
plt.imshow(test_images[i + page], cmap='Greys')
prediction = argmax(predictions[i + page])
true_value = argmax(test_labels[i + page])
fontdict['color'] = 'black' if prediction == true_value else 'red'
plt.title("{}, {}".format(prediction, true_value), fontdict=fontdict)
page += 16
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Exercise_1/Exercise_1.ipynb | ###Markdown
Exercise_1

After execution of all cells, choose a scenario by clicking the respective button in the GUI. Then click the "Start" button to run the simulation. To avoid unexpected behavior, it is recommended to wait until the scenario has finished (i.e. the second counter stops advancing) before starting another run, or to close the GUI window and execute the "main" cell again.
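The model implemented below is a simple cellular automaton: in every one-second time step, each pedestrian moves to the neighbouring cell with the lowest total cost. The total cost of a cell is the sum of a distance-to-target field (plain Euclidean distance, or optionally Dijkstra distances that route around obstacles) and a repulsive contribution from every other pedestrian within an interaction radius $r_{max}$,

$$c_{ped}(r) = \exp\left(\frac{1}{r^2 - r_{max}^2}\right) \quad \text{for } r < r_{max},$$

while obstacle cells carry a prohibitively large cost of $e^{10}$.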
###Code
import tkinter as tk
import numpy as np
import math
import time
import seaborn as sns
#list for measurement for RiMEA 4 scenario
measurement = []
class Person():
'''
pos_x, pos_y: position of pedestrian
speed: in cells/timestep. if timestep=1s, cellsize=0.25m and desiredSpeed=1.6m/s -> speed needs to be 1.6/0.25=6.4 cells/timestep
move(): see below
'''
def __init__(self, position_x, position_y, speed=1):
self.pos_x = position_x
self.pos_y = position_y
self.speed = speed
self.leap = 0 # >0 bonus for next step, <0 penalty for next step
def move(self, grid, scenario):
'''
        Updates the position depending on speed, pedestrian interaction and the cost field:
        looks for the minimal cost among the neighboring cells within the feasible area and
        effectively updates the position by passing it to the grid.
'''
self.pedestrianInteraction(scenario)#updates interactionField
costField = np.array(scenario.costField) + np.array(self.interactionField)
## for debugging of costField, interactionField ..
#for (x,y),v in np.ndenumerate(np.array(self.interactionField)):
# if v > 0:
# print('('+str(x),str(y)+')'+': '+str(v))
# print('cost single'+': '+str(scenario.costField[x][y]))
# print('cost combined'+': '+str(costField[x][y]))
##
(x_opt, y_opt) = (self.pos_x, self.pos_y)
cost_min = costField[round(self.pos_x)][round(self.pos_y)]
#assuming constant timesteps of 1s
pos_dist = self.leap + self.speed #possible distance for the pedestrian to move in this timestep
if round(pos_dist) <= 0:#pedestrian needs to wait in order to move with correct speed
self.leap += self.speed
return
while round(pos_dist) > 0:#if move at this timestep
#steps only +-1
dxplus = self.pos_x + 1
dxminus = self.pos_x - 1
dyplus = self.pos_y + 1
dyminus = self.pos_y - 1
dxx = (self.pos_x, dxplus, dxminus)
dyy = (self.pos_y, dyplus, dyminus)
cell_move = (cost_min, self.pos_x, self.pos_y)
            flag_diag = False #indicates whether the pedestrian moves diagonally or not
for x in dxx:
for y in dyy:
if scenario.isInDomain(x,y):
if (x,y) == (self.pos_x, self.pos_y):
continue#no need to check own position again
else:
if scenario.vanish and grid.grid[x][y] == 'T':#allows vanishing persons e.g. through doors
grid.grid[round(self.pos_x)][round(self.pos_y)]='E'
scenario.persons.remove(self)
return
if grid.grid[x][y] == 'E':#neighbored cell is empty
cell_dist = costField[x][y]
if cell_dist < cell_move[0]:
cell_move = (cell_dist,x,y)
if ((self.pos_x - x)+(self.pos_y - y))%2 == 0:#diagonal movement yes/no
flag_diag = True
else:
flag_diag = False
#updates grid for drawing and own position
grid.grid[round(self.pos_x)][round(self.pos_y)]='E'
grid.grid[round(cell_move[1])][round(cell_move[2])]='P'
self.pos_x = cell_move[1]
self.pos_y = cell_move[2]
#calculate new (remaining) possible distance depending on (striaght/diag) and update self leap
if flag_diag == True:
pos_dist -= math.sqrt(2)
self.leap = pos_dist
else:
pos_dist -= 1
self.leap = pos_dist
def pedestrianInteraction(self, scenario):#TODO: needs isInDomain() check
self.interactionField = [[0.0 for y in range(scenario.columns)] for x in range(scenario.rows)]
for p in scenario.persons:
if p == self:
continue#this is to avoid penalty with yourself. I assume you like yourself :)
else:
x = round(p.pos_x)
y = round(p.pos_y)
dxx = [x]
dyy = [y]
for r in range(scenario.rmax)[1:]:#rmax = 3 -> (0,)1,2
dxx.append(x+r)
dxx.append(x-r)
dyy.append(y+r)
dyy.append(y-r)
for xx in dxx:
for yy in dyy:
if scenario.isInDomain(xx,yy):
if (xx - x)**2 + (yy - y)**2 >= scenario.rmax**2:#cell not within cirlce around pedestrian
continue
r = math.sqrt((x - xx)**2 + (y - yy)**2)#eucliddistance between p.pos_xy and xxyy
self.interactionField[xx][yy] += math.exp(1/(r**2 - scenario.rmax**2))
class Scenario():
'''
creates persons, targets, obstacles and grid size depending on the scenario
rmax: maximum radius of pedestrian interaction. Depends on cellsize -> adjust for each scenario
'''
def __init__(self):
self.rows = 25
self.columns = 25
self.persons = []
self.targets = []
self.obstacles = []
self.grid = Grid(self)
self.rmax = 3
self.time_limit = 20
self.vanish = False #Set to true for scenarios where the target is a door
self.fastMarching = False # bottleneck and chickentest both once without Dijkstra/fastMarching and once with
self.line_distance = 20
def setTo1(self):#one person straight
self.rows = 50
self.columns = 50
self.persons = [Person(5, 25)]
self.targets = [(25, 25)]
self.obstacles = []
self.rmax = 3
self.time_limit = 25
self.vanish = False
self.fastMarching = False
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def setTo2(self):#five persons in a circle
self.rows = 50
self.columns = 50
self.persons = [Person(5, 25), Person(25, 45), Person(25, 5), Person(41, 37), Person(41, 13)]
self.targets = [(25, 25)]
self.obstacles = []
self.rmax = 3
self.time_limit = 25
self.vanish = False
self.fastMarching = False
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def setTo3(self):#bottleneck
self.rows = 54
self.columns = 54
p = []
for i in range(10):
for j in range(20):
                if j % 4 != 0:
p.append((Person(i+2,j+2,3)))
self.persons = p
self.targets = [(52, 11), (52, 12)]
list = []
for i in range(10):
#left room
list.extend([(1,11-i),(1,12+i),(2+i,1),(2+i,22),(12+i,1),(12+i,22),(22,11-i),(22,12+i)])
#bottleneck
list.append((23+i,10))
list.append((23+i,13))
#right room
list.extend([(31,11-i),(31,12+i),(32+i,1),(32+i,22),(42+i,1),(42+i,22),(52,11-i),(52,12+i)])
list.remove((32, 10))
list.remove((32, 13))
list.remove((22,11))
list.remove((22,12))
list.remove((31,11))
list.remove((31,12))
list.remove((52,11))
list.remove((52,12))
#corners
list.extend([(1,1), (1,22), (22,1), (22,22), (31,1), (31,22), (52,1), (52,22)])
self.obstacles = list
self.rmax = 2
self.time_limit = 90
self.vanish = True
self.fastMarching = False
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def setTo3wD(self):#bottleneck with Dijkstra
self.rows = 54
self.columns = 54
p = []
for i in range(10):
for j in range(20):
                if j % 4 != 0:
p.append((Person(i+2,j+2,3)))
self.persons = p
self.targets = [(52, 11), (52, 12)]
list = []
for i in range(10):
#left room
list.extend([(1,11-i),(1,12+i),(2+i,1),(2+i,22),(12+i,1),(12+i,22),(22,11-i),(22,12+i)])
#bottleneck
list.append((23+i,10))
list.append((23+i,13))
#right room
list.extend([(31,11-i),(31,12+i),(32+i,1),(32+i,22),(42+i,1),(42+i,22),(52,11-i),(52,12+i)])
list.remove((32, 10))
list.remove((32, 13))
list.remove((22,11))
list.remove((22,12))
list.remove((31,11))
list.remove((31,12))
list.remove((52,11))
list.remove((52,12))
#corners
list.extend([(1,1), (1,22), (22,1), (22,22), (31,1), (31,22), (52,1), (52,22)])
self.obstacles = list
self.rmax = 2
self.time_limit = 90
self.vanish = True
self.fastMarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def setTo4(self):#chickentest
self.rows = 25
self.columns = 25
self.persons = [Person(6, 12)]
self.targets = [(18, 12)]
self.obstacles = [(12,12),(12,13),(12,11),(12,14),(12,10),(12,15),(12,9),(11,15),(10,15),(11,9),(10,9)]
self.rmax = 3
self.time_limit = 20
self.vanish = False
self.fastMarching = False
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def setTo4wD(self):#chickentest with Dijkstra
self.rows = 25
self.columns = 25
self.persons = [Person(6, 12)]
self.targets = [(18, 12)]
self.obstacles = [(12,12),(12,13),(12,11),(12,14),(12,10),(12,15),(12,9),(11,15),(10,15),(11,9),(10,9)]
self.rmax = 3
self.time_limit = 20
self.vanish = False
self.fastMarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
###Debugging scenario
#def setTo5(self):#added 5th scenario to test hypothesis
# self.rows = 50
# self.columns = 50
# self.persons = [Person(5, 45), Person(10, 47), Person(3, 42), Person(18, 43), Person(46, 44), Person(38,41), Person(45,44), Person(45,42)]
# self.targets = [(25, 0)]
# self.obstacles = []
# self.rmax = 3
# self.time_limit = 25
# self.vanish = False
# self.fastMarching = False
# self.setupCostField()
# self.grid = Grid(self)
def rimea1(self):#RiMEA Scenario 1: Straight Line Corridor
self.rows = 50
self.columns = 50
self.persons = [Person(8, 25-20, 1.33)]
self.targets = [(48, 25-20)]
list = [(49,25-20),(49,24-20)]
for i in range(8,50):
list.append((i,26-20))
list.append((i,23-20))
self.obstacles = list
self.rmax = 3
self.time_limit = 35
self.vanish = False
self.fastmarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
#Rimea 4 : 0.5p/sqm density
'''def rimea4(self):#RiMEA Scenario 4: Measurement of the Fundamental Diagram
self.rows = 50
self.columns = 50
#plist = [Person(50, 25, 1.2), Person(50, 27, 1.2)]
plist = []
#l = [(random.randrange(2, 30), random.randrange(21, 31)) for k in range(200)]
#for e in l:
#plist.append(Person(e[0],e[1],1.2))
for i in range(0,50):
if (i % 2) == 0:
for j in range(21,31):
#if (j % 2) == 0:
plist.append(Person(i,j,1.2))
#self.targets = [(99,11),(99,12),(99,13),(99,14),(99,15),(99,16)]
self.persons = plist
tlist = []
olist = []
for i in range(0,50):
olist.append((i,20))
olist.append((i,31))
for i in range(21,31):
tlist.append((49,i))
self.rmax = 2
self.obstacles = olist
self.targets = tlist
self.fastmarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 10
self.vanish = True
self.time_limit = 100'''
#Rimea 4 : 1p/sqm density
def rimea4(self):#RiMEA Scenario 4: Measurement of the Fundamental Diagram
self.rows = 50
self.columns = 50
#plist = [Person(50, 25, 1.2), Person(50, 27, 1.2)]
plist = []
#l = [(random.randrange(2, 80), random.randrange(13, 38)) for k in range(2)]
#for e in l:
#plist.append(Person(e[0],e[1],1.2))
for i in range(1,49):
for j in range(21,31):
plist.append(Person(i,j,1.2))
#self.targets = [(99,11),(99,12),(99,13),(99,14),(99,15),(99,16)]
self.persons = plist
tlist = []
olist = []
for i in range(0,50):
olist.append((i,20))
olist.append((i,31))
for i in range(21,31):
tlist.append((49,i))
self.rmax = 1
self.obstacles = olist
self.targets = tlist
self.fastmarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 10
self.vanish = True
self.time_limit = 100
#Rimea 4 : 2p/sqm density
'''def rimea4(self):#RiMEA Scenario 4: Measurement of the Fundamental Diagram
self.rows = 50
self.columns = 50
#plist = [Person(50, 25, 1.2), Person(50, 27, 2.4)]
plist = []
#l = [(random.randrange(2, 80), random.randrange(13, 38)) for k in range(2)]
#for e in l:
#plist.append(Person(e[0],e[1],1.2))
for i in range(1,49):
if (i % 2) == 0:
for j in range(21,41):
plist.append(Person(i,j,2.4))
#self.targets = [(99,11),(99,12),(99,13),(99,14),(99,15),(99,16)]
self.persons = plist
tlist = []
olist = []
for i in range(0,50):
olist.append((i,20))
olist.append((i,41))
for i in range(21,41):
tlist.append((49,i))
self.rmax = 2
self.obstacles = olist
self.targets = tlist
self.fastmarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 10
self.vanish = True
self.time_limit = 100'''
#Rimea 4 : 6p/sqm density (not working)
'''def rimea4(self):#RiMEA Scenario 4: Measurement of the Fundamental Diagram
self.rows = 50
self.columns = 100
#self.persons = [Person(1, 25, 1.2), Person(3,25,1.2), ]
listp = []
l = [(random.randrange(2, 80), random.randrange(13, 38)) for k in range(2)]
for e in l:
listp.append(Person(e[0],e[1],1.2))
#for i in range(1,80):
#for j in range(12,39):
#listp.append(Person(i,j,1.2))
#self.targets = [(99,11),(99,12),(99,13),(99,14),(99,15),(99,16)]
self.persons = listp
listb = []
list = []
for i in range(0,99):
list.append((i,10))
list.append((i,40))
for i in range(11,40):
listb.append((99,i))
self.rmax = 9
self.obstacles = list
self.targets = listb
self.fastmarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 10
self.vanish = True
self.time_limit = 100'''
def rimea6(self):#RiMEA Scenario 6: Movement Around Corridor
self.rows = 50
self.columns = 50
self.persons = [Person(10, 24),Person(9, 24),Person(8, 24),Person(7, 24),Person(6, 24),Person(5, 24),Person(4, 24),Person(11, 24),Person(12, 24),Person(13, 24),
Person(10, 25),Person(9, 25),Person(8, 25),Person(7, 25),Person(6, 25),Person(5, 25),Person(4, 25),Person(13, 25),Person(12, 25),Person(11, 25)]
self.targets = [(20, 13),(19,13)]
list = [(49,25),(49,24)]
for i in range(0,19):
list.append((i,23))
for k in range(0,22):
list.append((k,26))
for z in range(13,23):
list.append((18,z))
for j in range(13,27):
list.append((21,j))
self.obstacles = list
self.rmax = 10
self.time_limit = 50
self.vanish = True
self.fastMarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
def rimea7(self):#RiMEA Scenario 7: average speed
self.rows = 50
self.columns = 50
speed_list=[]
for i in range(2,13):
for k in range(2,8):
speed=np.random.normal(1.5,0.25)
p=Person(i,k,speed)
self.persons.append(p)
speed_list.append(speed)
print("SPEED-LIST:",speed_list)
avg_speed=np.mean(speed_list)
minimum=np.min(speed_list)
maximum=np.max(speed_list)
print("minimum speed:", minimum,"maximum speed: ", maximum,
"average speed: ", avg_speed)
sns.kdeplot(speed_list)
self.targets = [(49, 5)]
list = []
for z in range(0,50):
list.append((z,1))
for j in range(0,50):
list.append((j,9))
self.obstacles = list
self.rmax = 10
self.time_limit = 50
self.vanish = True
self.fastMarching = True
self.setupCostField()
self.grid = Grid(self)
self.line_distance = 20
#TODO: add RiMEA tests only 6 missing
def setupCostField(self):
#cost scaled by max possible cost/distance
if self.fastMarching == False:#rudimentary obstacle avoidance only
self.costField = [[1.0 for y in range(self.columns)] for x in range(self.rows)]
max_dist = math.sqrt(self.rows**2 + self.columns**2)
for x in range(self.rows):
for y in range(self.columns):
if (x,y) in self.targets:
self.costField[x][y] = 0
elif (x,y) in self.obstacles:
self.costField[x][y] = math.exp(10)
else:#computes cost/distance for nearest target
dist = max_dist
for t in self.targets:
min_dist = math.sqrt((x - t[0])**2 + (y - t[1])**2)
if min_dist < dist:
dist = min_dist
cost = dist#/max_dist # scaling by max_dist can cause pedestrians to stop before the target
self.costField[x][y] = cost
else:#dijkstra or fastmarching
self.Dijkstra()
#self.fastMarching()
def Dijkstra(self):
#initialisation
self.costField = [[np.inf for y in range(self.columns)] for x in range(self.rows)]
Q = []
for x in range(self.rows):
for y in range(self.columns):
if (x,y) in self.targets:
self.costField[x][y] = 0
Q.append((x,y))
#loop
while len(Q) > 0:
#find minimal value in Q
x_min = Q[0][0]
y_min = Q[0][1]
for (x,y) in Q:
if self.costField[x][y] < self.costField[x_min][y_min]:
(x_min, y_min) = (x,y)
Q.remove((x_min,y_min))
#print('Remaining values to calculate: '+str(len(Q)))
#neighbors of x_min,y_min
neighbors = [(x_min+1,y_min),(x_min,y_min+1),(x_min-1,y_min),(x_min,y_min-1),(x_min+1,y_min+1),(x_min+1,y_min-1),(x_min-1,y_min-1),(x_min-1,y_min+1)]
for (nx,ny) in neighbors:
if self.isInDomain(nx,ny):
#update distance for neighbors
if (nx,ny) in Q:
if (nx,ny) in self.obstacles:
self.costField[nx][ny] = math.exp(10)
else:
alternativ = self.costField[x_min][y_min] + math.sqrt((nx - x_min)**2 + (ny - y_min)**2)
if alternativ < self.costField[nx][ny]:
self.costField[nx][ny] = alternativ
def isInDomain(self,x,y):
if 0 <= y < self.rows and 0 <= x < self.columns:
return True
else:
return False
def fastMarching(self):
self.obstacleField = [[0.0 for y in range(self.columns)] for x in range(self.rows)]
if self.fastMarching == False:#rudimentary obstacle avoidance only
for x in range(self.rows):
for y in range(self.columns):
if (x,y) in self.obstacles:
self.obstacleField[x][y] = math.exp(10)
else:#Fast Marching algorithm
# If there is time left, we can check on this again
return
accepted = []
far = []
considered = []
#step 1
for x in range(self.rows):
for y in range(self.columns):
if (x,y) in self.obstacles:
self.obstacleField[x][y] = 0
accepted.append((x,y))
else:
self.obstacleField[x][y] = np.inf
far.append((x,y))
#step 2
for (x,y) in far:
U = 1#some new value with update formula
if U < self.obstacleField[x][y]:
self.obstacleField[x][y] = U
far.remove((x,y))
considered.append((x,y))
#steps 3-5
while len(considered) > 0:
#step 3
v = np.inf
for (x,y) in considered:
if self.obstacleField[x][y] < v:
v = self.obstacleField[x][y]
x_tilda = x
y_tilda = y
considered.remove((x_tilda,y_tilda))
accepted.append((x_tilda,y_tilda))
#step 4
neighbors = [(x_tilda+1,y_tilda),(x_tilda-1,y_tilda),(x_tilda,y_tilda+1),(x_tilda,y_tilda-1)]#diagonal neighbors too?
for (x,y) in neighbors:
U = 1#update formula
if U < self.obstacleField[x][y]:
self.obstacleField[x][y] = U
#step 5
if (x,y) in far:
far.remove((x,y))
considered.append((x,y))
def move(self):
for p in self.persons:
p.move(self.grid, self)
class Grid():
'''
Basically only for visualization. 2d-array which stores markers for empty, pedestrian, target and obstacle.
'''
def __init__(self, scenario):
self.scenario = scenario
#self.grid = [['E' for y in range(scenario.columns)] for x in range(scenario.rows)]
self.grid = [['E' for y in range(scenario.rows)] for x in range(scenario.columns)]
for p in scenario.persons:
self.grid[p.pos_x][p.pos_y] = 'P'
for t in scenario.targets:
self.grid[t[0]][t[1]] = 'T'
for o in scenario.obstacles:
self.grid[o[0]][o[1]] = 'O'
def draw(self, canvas, line_distance):
'''
Draws the grid and the persons/targets/obstacles.
'''
# vertical lines
for x in range(line_distance,canvas.winfo_width(),line_distance):
canvas.create_line(x, 0, x, canvas.winfo_height(), fill="#476042")
# horizontal lines
for y in range(line_distance,canvas.winfo_height(),line_distance):
canvas.create_line(0, y, canvas.winfo_width(), y, fill="#476042")
# P, T, O or nothing in between the lines
for (x, y), value in np.ndenumerate(self.grid):
#measurement cells (2m x 2m) code block:
if x == 25 and y == 25:# 0.5 & 1 p/sqm
measurement.append(value)
if x == 26 and y == 25:# 0.5 & 1 p/sqm
measurement.append(value)
#if x == 27 and y == 25:# 2p/sqm
#measurement.append(value)
#if x == 28 and y == 25:# 2p/sqm
#measurement.append(value)
if x == 25 and y == 26:# 0.5 & 1 p/sqm
measurement.append(value)
if x == 26 and y == 26:# 0.5 & 1 p/sqm
measurement.append(value)
#if x == 27 and y == 26:# 2p/sqm
#measurement.append(value)
#if x == 28 and y == 26:# 2p/sqm
#measurement.append(value)
#if x == 25 and y == 27:# 2p/sqm
#measurement.append(value)
#if x == 26 and y == 27:# 2p/sqm
#measurement.append(value)
#if x == 27 and y == 27:# 2p/sqm
#measurement.append(value)
#if x == 28 and y == 27:# 2p/sqm
#measurement.append(value)
#if x == 25 and y == 28:# 2p/sqm
#measurement.append(value)
#if x == 26 and y == 28:# 2p/sqm
#measurement.append(value)
#if x == 27 and y == 28:# 2p/sqm
#measurement.append(value)
#if x == 28 and y == 28:# 2p/sqm
#measurement.append(value)
#end of measurement code block
if value =='P':
#if (x == 25 and y == 25) or (x == 25 and y == 26) or (x == 26 and y == 25) or (x == 26 and y == 26):
#color = 'red'
#else:
#color = 'green' ## comment out block to make measurement area visible for scenario 4
color = 'green'
elif value =='T':
color = 'red'
elif value =='O':
color = 'black'
else:
value =''
color = 'white'
canvas.create_text((x+0.5) * line_distance, (y+0.5) * line_distance, text=value, fill=color)
# canvas.create_rectangle(..) as possible improvement for visualization
def main():
'''
Setup of the interface and definition of the simulation loop.
When a scenario is selected via a button, it appears next to the buttons in the canvas.
When clicking on start button, the current scenario is simulated, i.e. for every timestep=1s each pedestrian moves and the drawing is renewed.
'''
window = tk.Tk()
window.title('Exercise 1')
scenario = Scenario()
#(TODO:) DONE (more or less) make line_distance dependend on scenario columns/rows for nicer layout. Same for fontsize?
line_distance = scenario.line_distance #should be passed to draw() in Grid
canvas_width = line_distance * scenario.columns
canvas_height = line_distance * scenario.rows
canvas = tk.Canvas(master=window, width=canvas_width, height=canvas_height)
canvas.pack(side='left')
label = tk.Label(master=window, text='0 s', width='18')
label.pack()
btnScenario1 = tk.Button(master=window, text='1 Pedestrian', bg='#26f9ad', width='18', command=lambda: scenario.setTo1())
btnScenario1.pack()#(side=RIGHT)
btnScenario2 = tk.Button(master=window, text='Pedestrians in a circle', bg='#26f9ad', width='18', command=lambda: scenario.setTo2())
btnScenario2.pack()#(side=RIGHT)
btnScenario3 = tk.Button(master=window, text='Bottleneck', bg='#26f9ad', width='18', command=scenario.setTo3)
btnScenario3.pack()#(side=RIGHT)
btnScenario3wD = tk.Button(master=window, text='Bottleneck w/ Dijkstra', bg='#26f9ad', width='18', command=scenario.setTo3wD)
btnScenario3wD.pack()#(side=RIGHT)
btnScenario4 = tk.Button(master=window, text='Chickentest', bg='#26f9ad', width='18', command=scenario.setTo4)
btnScenario4.pack()#(side=RIGHT)
btnScenario4wD = tk.Button(master=window, text='Chickentest w/ Dijkstra', bg='#26f9ad', width='18', command=scenario.setTo4wD)
btnScenario4wD.pack()#(side=RIGHT)
#btnScenario5 = tk.Button(master=window, text='Scenario 5', bg='#26f9ad', width='18', command=scenario.setTo5)
#btnScenario5.pack()#(side=RIGHT)
btnScenarioR1 = tk.Button(master=window, text='RiMEA Scenario 1', bg='#26f9ad', width='18', command=scenario.rimea1)
btnScenarioR1.pack()#(side=RIGHT)
btnScenarioR4 = tk.Button(master=window, text='RiMEA Scenario 4', bg='#26f9ad', width='18', command=scenario.rimea4)
btnScenarioR4.pack()#(side=RIGHT)
btnScenarioR6 = tk.Button(master=window, text='RiMEA Scenario 6', bg='#26f9ad', width='18', command=scenario.rimea6)
btnScenarioR6.pack()#(side=RIGHT)
btnScenarioR7 = tk.Button(master=window, text='RiMEA Scenario 7', bg='#26f9ad', width='18', command=scenario.rimea7)
btnScenarioR7.pack()#(side=RIGHT)
def start(window, canvas, scenario, time_limit=scenario.time_limit):
        if scenario.rows <= 75 or scenario.columns <= 75:
canvas_width = scenario.line_distance * scenario.columns
canvas_height = scenario.line_distance * scenario.rows
else:
canvas_width = 40 * scenario.columns
canvas_height = 40 * scenario.rows
canvas.configure(width=canvas_width, height=canvas_height)
canvas.delete('all')
window.after(10, lambda: scenario.grid.draw(canvas, scenario.line_distance))
time_limit = scenario.time_limit
timesteps = 0
while timesteps < time_limit:
scenario.move()
canvas.delete('all')
time.sleep(1)#1 below 1000
window.after(0, lambda: scenario.grid.draw(canvas, scenario.line_distance))
window.update()
timesteps += 1
label.configure(text=str(timesteps)+' s')
#print(measurement) #for rimea scen 4
btnStart = tk.Button(master=window, text="Start", bg='lightblue', width='18', command=lambda: start(window, canvas, scenario))
btnStart.pack()
window.mainloop()
if __name__ == "__main__":
main()
#Measurement of density and flow for RiMEA scenario 4 - note: the speed used is still the pedestrians' input speed, not their realized speed (TODO)
empty = 0
pedestrian = 0
for e in measurement:
if e == 'E':
empty = empty + 1
else:
pedestrian = pedestrian + 1
#avg_density = pedestrian/(((pedestrian + empty)/16))/4 # for 2p/sqm densities
#timestep = (pedestrian + empty)/16
avg_density = pedestrian/(((pedestrian + empty)/4))/4 #for 0.5p/sqm & 1p/sqm density
timestep = (pedestrian + empty)/4
#print(measurement)
#print(pedestrian)
#print(empty)
#print(avg_density)
print("the average density at the measurement point after " + str(timestep) + "s is: " + str(avg_density) + " p/sqm")
#result not accurate due to use of input speed instead of real speed
avg_flow = avg_density * 1.2
print("the average flow at the measurement point after " + str(timestep) + "s is: " + str(avg_flow) + " p/ms")
###Output
the average flow at the measurement point after 101.0s is: 0.5732673267326732 p/ms
|
Bike_sharing_linear_regression/Project_bike.ipynb | ###Markdown
Bike Sharing Demand

Data Fields

- datetime - hourly date + timestamp
- season - 1 = spring, 2 = summer, 3 = fall, 4 = winter
- holiday - whether the day is considered a holiday
- workingday - whether the day is neither a weekend nor holiday
- weather -
  - 1: Clear, Few clouds, Partly cloudy, Partly cloudy
  - 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
  - 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
  - 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
- temp - temperature in Celsius
- atemp - "feels like" temperature in Celsius
- humidity - relative humidity
- windspeed - wind speed
- casual - number of non-registered user rentals initiated
- registered - number of registered user rentals initiated
- count - number of total rentals
###Code
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import RFE
import pandas as pd
df = pd.read_csv('train_bike.csv', parse_dates=True, index_col=0)
df
df.index.year
df.index.hour
###Output
_____no_output_____
###Markdown
Exploratory data analyses
###Code
df.shape
df.describe()
#sns.pairplot(df)
df.corr()
sns.heatmap(df.corr().abs())
pd.DataFrame(df.corr()['count'].sort_values(ascending=False)).drop(['count','casual','registered']).plot(kind="bar")
pd.DataFrame(df.corr()[['casual','registered']].sort_values(ascending=False,by='registered')).drop(['count','casual','registered']).plot(kind="bar")
#df['count'].loc['May 2, 2011':'May 8, 2011'].plot()
df['count'].loc['May 3, 2012'].plot()
df['count'].loc['May 5, 2012'].plot()
pd.DataFrame(df.loc['May 1, 2011'].corr()['count'].sort_values(ascending=False)).drop(['count','casual','registered']).plot(kind="bar")
df.resample('1M').mean().head()
df['count'].resample('1M').mean().plot
import numpy as np
import matplotlib.pyplot as plt
data1 = df['count'].resample('1M').mean()
data2 = df['season'].resample('1M').mean()
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('date/time')
ax1.set_ylabel('count', color=color)
ax1.plot(data1, color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('season', color=color)
ax2.plot(data2,color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
plt.show()
data1 = df['count'].resample('1M').mean()
data2 = df['temp'].resample('1M').mean()
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('date/time')
ax1.set_ylabel('count', color=color)
ax1.plot(data1, color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('temp', color=color)
ax2.plot(data2,color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
plt.show()
data1 = df['count'].resample('1M').mean()
data2 = df['windspeed'].resample('1M').mean()
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('date/time')
ax1.set_ylabel('count', color=color)
ax1.plot(data1, color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('windspeed', color=color)
ax2.plot(data2,color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Train-test split
###Code
X = df.drop(['count'], axis=1)
y = df['count']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=10)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
#check for missing values
df.isna().sum()
def feature_engineer(df):
    # drop casual: it is a component of the target (casual + registered = count) and not available at prediction time
    if 'casual' in df.columns:
        df.drop(['casual'], axis=1, inplace=True)
    # drop registered for the same reason
    if 'registered' in df.columns:
        df.drop(['registered'], axis=1, inplace=True)
# drop atemp due to high correlation with temp
if 'atemp' in df.columns:
df.drop(['atemp'], axis=1, inplace=True)
# drop humidity due to high correlation with weather
#if 'humidity' in df.columns:
#df.drop(['humidity'], axis=1, inplace=True)
#df['temp'] = ['under_15' if x<15.0 else 'over_15' for x in df['temp']]
# create a feature day hours
df.reset_index(inplace=True)
df['hour'] = pd.DatetimeIndex(df['datetime']).hour
#df['day'] = pd.DatetimeIndex(df['datetime']).day_name()
df.set_index(['datetime'], inplace=True)
#df['hour'] = [1 if 8.00<=x<=20.00 else 0 for x in df['hour']]
hours=[]
for i, row in df.iterrows():
day = row['workingday']
hour = row['hour']
if day == 0:
if 9.00<=hour<=20.00:
hr='peak'
else:
hr='night'
else:
if (8.00<=hour<=9.00) or (17.00<=hour<=19.00):
hr='peak'
elif 9.00<hour<17.00:
hr='day'
else:
hr='night'
hours.append(hr)
df['hour']=hours
# Rename weather groups for str
df['weather'].replace({1:'class_1', 2:'class_2', 3:'class_3', 4:'class_4'}, inplace=True)
# Rename season groups for str
df['season'].replace({1:'spring', 2:'summer', 3:'fall', 4:'winter'}, inplace=True)
#df[['season', 'workingday', 'weather', 'temp', 'windspeed','hour', 'humidity']]
df_final = df[['season','weather', 'temp', 'windspeed','hour']]
# Label encoding / one-hot encoding, create binary columns out of original categories
df_final = pd.get_dummies(df_final, drop_first=True)
if 'weather_class_4' in df_final.columns:
df_final.drop(['weather_class_4'], axis=1, inplace=True)
pf = PolynomialFeatures(degree=3)
df_final=pd.DataFrame(pf.fit_transform(df_final))
return df_final
###Output
_____no_output_____
###Markdown
Train a model
###Code
m = LinearRegression()
sc = MinMaxScaler()
X_train_fe = feature_engineer(X_train)
sc.fit(X_train_fe)
X_train_scaled = pd.DataFrame(sc.transform(X_train_fe))
X_test_fe = feature_engineer(X_test)
X_test_scaled = pd.DataFrame(sc.transform(X_test_fe))
X_train_scaled.head(3)
X_test_scaled.head(3)
m.fit(X_train_scaled, np.log1p(y_train))
#m.coef_, m.intercept_
ypred = m.predict(X_train_scaled)
#plt.plot(X_train_scaled, ypred)
###Output
_____no_output_____
###Markdown
Optimize the model Ridge / Lasso Regularization
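Both regularizers add a penalty on the coefficient vector $w$ to the least-squares objective: Ridge adds $\alpha\lVert w\rVert_2^2$, which shrinks all coefficients smoothly, while Lasso adds $\alpha\lVert w\rVert_1$, which tends to drive some coefficients exactly to zero and therefore also acts as a feature selector.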
###Code
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
m_ridge = Ridge(alpha=0.001)
m_lasso = Lasso(alpha=0.001)
m_ridge.fit(X_train_scaled, np.log1p(y_train))
m_lasso.fit(X_train_scaled, np.log1p(y_train))
ypred = m.predict(X_train_scaled)
ypred_ridge = m_ridge.predict(X_train_scaled)
ypred_lasso = m_lasso.predict(X_train_scaled)
#m_ridge.coef_
#m_lasso.coef_
#plt.scatter(X_train_scaled, y_train)
#plt.plot(X_train_scaled, ypred)
#plt.plot(X_train_scaled, ypred_ridge)
#plt.plot(X_train_scaled, ypred_lasso)
#plt.legend(['No regularization', 'Ridge Regularization', 'Lasso Regularization'])
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from pprint import pprint
from sklearn.metrics import mean_squared_log_error
from sklearn import metrics
from sklearn.metrics import make_scorer
model = Lasso()
paramgrid={'alpha':[5.0, 3.0, 1.0, 0.1, 0.01, 0.001]}
# RMSLE needs a custom scorer; np.sqrt cannot be applied to the metric function itself
def rmsle(y_true, y_pred):
    # clip negative predictions so the log inside MSLE stays defined
    return np.sqrt(mean_squared_log_error(y_true, np.clip(y_pred, 0, None)))
scorer = make_scorer(rmsle, greater_is_better=False)
grid = GridSearchCV(model, param_grid=paramgrid, scoring=scorer)
grid.fit(X_train_scaled, np.log1p(y_train))
ypred_grid = grid.predict(X_train_scaled)
print("Best parameters:", grid.best_params_)
#print("RMSLE:", RMSLE((y_train, np.exp(ypred_grid)))
#print("Best score:", grid.best_score_)
#print("all scores :")
#pprint(grid.cv_results_)
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
from pprint import pprint
from sklearn.metrics import mean_squared_log_error
from sklearn import metrics
from sklearn.metrics import make_scorer
model = Ridge()
paramgrid={'alpha':[5.0,3.0,1.0, 0.1, 0.01, 0.001]}
grid = GridSearchCV(model, param_grid=paramgrid, scoring='r2')
grid.fit(X_train_scaled,y_train)
preds = grid.predict(X_train_scaled)
print("Best parameters:", grid.best_params_)
#print("RMSLE:", rmsle(y_train, ypred_lasso))
print("Best score:", grid.best_score_)
print("all scores :")
pprint(grid.cv_results_)
df_scr = pd.DataFrame()
df_scr['alpha']=[5.0,3.0,1.0, 0.1, 0.01, 0.001]
df_scr['score']=grid.cv_results_['mean_test_score']
df_scr
plt.figure(figsize=(12,6))
x = df_scr['alpha']
y = df_scr['score']
plt.xlabel('alpha', fontsize=20)
plt.ylabel('R^2', fontsize=20)
plt.tick_params(axis="x", labelsize=14)
plt.tick_params(axis="y", labelsize=14)
plt.grid()
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
Recursive Feature Elimination
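The cell below was left commented out. Note that `LinearRegression` exposes `coef_` rather than `feature_importances_`, so the commented lines that query `feature_importances_` would fail; a minimal working sketch (only meant to obtain the RFE support mask and ranking, with the default number of selected features) could look like this:

```python
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rfe = RFE(estimator=LinearRegression())      # ranks features by the magnitude of the model coefficients
rfe.fit(X_train_scaled, np.log1p(y_train))   # same log1p target as in the rest of the notebook

print("number of selected features:", rfe.n_features_)
print("selected columns:", list(X_train_scaled.columns[rfe.support_]))
print("ranking:", rfe.ranking_)
```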
###Code
#model=LinearRegression()
#rfe = RFE(estimator=model)
#rfe.fit(X_train_scaled, y_train)
#df_final = df[['season', 'workingday', 'weather', 'temp', 'windspeed','hour']]
#print(rfe.n_features_)
#print(rfe.support_)
#print(rfe.ranking_)
#print(np.abs(rfe.estimator_.coef_))  # LinearRegression exposes coef_, not feature_importances_
#df_rf1 = pd.DataFrame()
#df_rf1['No']=np.where(rfe.support_ == True)[0]
#df_rf1['value']=np.abs(rfe.estimator_.coef_)
#df_rf2 = pd.DataFrame()
#df_rf2['feature'] = X_train_scaled.columns
#df_rf2['RFE_support'] = rfe.support_
#df_rfe=pd.merge(df_rf1, df_rf2, left_on=['No'], right_index=True, how = 'inner')
#df_rfe.sort_values(by='value', ascending=False, inplace=True)
#df_rfe
#plt.figure(figsize=(10, 5))
#plt.barh(y=df_rfe['feature'], width=df_rfe['value'])
#plt.title('RFE - Feature Importances', fontsize=15, fontweight='bold', pad=10)
#plt.xlabel('Importance', fontsize=15, labelpad=20)
#plt.show()
###Output
_____no_output_____
###Markdown
Cross-validation
###Code
from sklearn.model_selection import cross_val_score
# scoring parameter - model.score() method - by default R-squared
cv_results = cross_val_score(m, X_train_scaled, y_train, cv=5)
cv_results
cv_results.mean()
# we would expect that this roughly corresponds to the final testing score.
# if training score >> mean(validation scores), then you are overfitting!
cv_results.std()
# To check your model's variance, compute the standard deviation of all the scores.
# If this variance is HIGH, it means your model's performance varies a lot / depends a lot on the sampling.
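# Sketch: cross-validation with explicitly shuffled folds. When the rows are stored in
# chronological order, unshuffled folds may not each be representative of the full sample.
from sklearn.model_selection import KFold
cv_shuffled = KFold(n_splits=5, shuffle=True, random_state=42)
cv_results_shuffled = cross_val_score(m, X_train_scaled, y_train, cv=cv_shuffled)
print('Shuffled CV mean:', cv_results_shuffled.mean(), 'std:', cv_results_shuffled.std())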
###Output
_____no_output_____
###Markdown
Calculate a test score
###Code
from sklearn.metrics import mean_squared_log_error
#print('Linear regression train score:', m.score(X_train_scaled, y_train))
#print('Linear regression test score:', m.score(X_test_scaled, y_test))
m.fit(X_train_scaled, np.log1p(y_train))
ypred = m.predict(X_train_scaled)
# np.expm1 is the inverse of the np.log1p transform applied to the target
print('RMSLE Linear:', np.sqrt(mean_squared_log_error(y_train, np.expm1(ypred))))
#print('Ridge regression train score:', m_ridge.score(X_train_scaled, y_train))
#print('Ridge regression test score:', m_ridge.score(X_test_scaled, y_test))
m_ridge.fit(X_train_scaled, np.log1p(y_train))
ypred_ridge = m_ridge.predict(X_train_scaled)
print('RMSLE Ridge:', np.sqrt(mean_squared_log_error(y_train, np.expm1(ypred_ridge))))
#print('Lasso regression train score:', m_lasso.score(X_train_scaled, y_train))
#print('Lasso regression test score:', m_lasso.score(X_test_scaled, y_test))
m_lasso.fit(X_train_scaled, np.log1p(y_train))
ypred_lasso = m_lasso.predict(X_train_scaled)
print('RMSLE Lasso:', np.sqrt(mean_squared_log_error(y_train, np.expm1(ypred_lasso))))
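# Sketch: the prints above score the training split only; the held-out split gives a fairer
# estimate of generalization. X_test_scaled / y_test are assumed to exist from the earlier
# train/test split and scaling steps.
for name, est in [('Linear', m), ('Ridge', m_ridge), ('Lasso', m_lasso)]:
    test_pred = np.expm1(est.predict(X_test_scaled))
    test_pred = np.clip(test_pred, 0, None)  # RMSLE is undefined for negative predictions
    print('Test RMSLE', name + ':', np.sqrt(mean_squared_log_error(y_test, test_pred)))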
###Output
RMSLE Lasso: 0.9734277781296415
###Markdown
OLS Statsmodels
###Code
import statsmodels.api as sm
#OLS = sm.OLS(y_train, X_train_scaled)
#f = OLS.fit()
#print(f.params)
#print(f.summary())
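# Minimal sketch of the statsmodels fit hinted at in the commented lines above.
# sm.OLS does not add an intercept automatically, so add one explicitly.
X_sm = sm.add_constant(X_train_scaled)
ols_fit = sm.OLS(np.log1p(y_train), X_sm).fit()
print('OLS R-squared:', ols_fit.rsquared)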
###Output
_____no_output_____
###Markdown
Submit to Kaggle
###Code
X_kaggle=pd.read_csv('test_bike.csv', parse_dates=True, index_col=0)
X_kaggle.head()
X_kaggle.shape
X_kaggle_fe = feature_engineer(X_kaggle)
X_kaggle_scaled = pd.DataFrame(sc.transform(X_kaggle_fe))
X_kaggle_scaled.head()
ypred = np.expm1(m_lasso.predict(X_kaggle_scaled))  # inverse of the log1p target transform
ypred.shape
X_kaggle['count'] = ypred
df_final_bike = X_kaggle['count']
df_final_bike
df_final_bike.to_csv('df_final_bike_andrey')
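# Sketch: Kaggle bike-sharing submissions are usually expected as a CSV with
# 'datetime' and 'count' columns; the exact expected format is an assumption here.
submission = df_final_bike.reset_index()
submission.columns = ['datetime', 'count']
submission.to_csv('submission.csv', index=False)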
###Output
_____no_output_____ |
Lab1/Setup.ipynb | ###Markdown
SetupCS1302 Introduction to Computer Programming___ JupyterHub How to access the JupyterHub Server? 1. Enter the url of the Jupyterhub server [ltjh.cs.cityu.edu.hk](https://ltjh.cs.cityu.edu.hk) in a web browser.1. Enter your [EID](https://www.cityu.edu.hk/esu/eid.htm) and Password in the fields `Username` and `Password` respectively.1. Click the `Sign In` button. **Tips**- If the browser is stuck at the following page loading the server, `refresh` your browser. - If you see the following page with ``My Server`` button, click on that button. - If you see the ``Start My Server`` button instead, click on that button to start your server. - For other issues, try logging out using the `Logout` button at the top right-hand corner, and then logging in again. You may also click the `Control Panel` button and restart your server. How to access course materials? 1. Click on the `Assignments` tab, and ensure `cs1302` is chosen in the drop down list.1. In the `Released assignments panel`, click the button `Fetch` to download `Lab1`. 1. `Lab1` should appear in the `Downloaded assignments panel`. 1. Click on the little arrow next to `Lab1` to show its content. 1. Ctrl-Click on `Lab1` to open the assignment folder on a new browser tab. 1. On the new browser, click the folder `cs1302` to navigate to the notebook `Setup.ipynb`.1. Click on `Setup.ipynb` to open the notebook. **Tips**1. Note that all the downloaded course materials will be placed under the `cs1302` folder of your home directory by default, so you need not go to the `Assignments` tab again to open the downloaded materials. E.g., you can access the `Setup.ipynb` notebook as follows: 1. Going to the `File` tab, which is the default JupyterHub homepage after login or when you click the logo on the top left-hand corner. 1. Enter the notebook URL [ltjh.cs.cityu.edu.hk/user-redirect/tree/cs1302/Lab1/Setup.ipynb](https://ltjh.cs.cityu.edu.hk/user-redirect/tree/cs1302/Lab1/Setup.ipynb). (See the [documentation](https://jupyterhub.readthedocs.io/en/stable/reference/urls.html) for details.)1. If for any reason you want to Fetch `Lab1` again, you have to first rename your `Lab1` folder to a different name such as `Lab1_orig`. You can do so by selecting the folder and click rename. You can also remove the folder by evaluating `!rm -rf ~/cs1302/Lab1` in a code cell. (Be very cautious as removed folders cannot be recovered.) Jupyter Notebook How to complete a lab assignment? After opening the `Lab1` notebook `Setup.ipynb`:1. Click `Help->User Interface Tour` to learn the jupyter notebook interface. 1. Click `Help->Notebook Help` and skim through the tutorials on `Running Code` and `Working with Markdown Cells`. **Exercise** In learning a new computer language, the first program to write is often the ["Hello, World!"](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) program, which says Hello to the world. Type the program `print('Hello, World!')` below and run it with `Shift+Enter`.
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
We often ask you to write code in a particular cell. Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE". In order to check your work thoroughly, there will be visible and hidden test cases. The following is a visible test you can run to check your answer: the test returns an assertion error only if your program does not print the correct message.
###Code
# Run this test cell right after running your "Hello, World!" program.
import sys, io
old_stdout, sys.stdout = sys.stdout, io.StringIO()
exec(In[-2])
printed = sys.stdout.getvalue()
sys.stdout = old_stdout
assert printed == 'Hello, World!\n'
###Output
_____no_output_____
###Markdown
**Tips**1. You can repeatedly modify your solution and run the test cell until your solution passes the test. You are not required to know how the test cell is written. 1. To assess your solution thoroughly, we often run new tests hidden from you after you have submitted your notebook. There is no partial credit for a partially correct solution that works for the visible test but fails for the hidden test. Therefore, *you should ensure your solution works in general rather than just the visible tests*.1. You can click the `Validate` button to run all the visible tests.1. If you open the same notebook multiple times in different browser windows, be careful in making changes in different windows. Inconsistent changes may lead to conflicts or loss of your data.1. If your notebook fails to run any code, the Kernel might have died. You can restart the kernel with `Kernel->Restart`. If restarting fails, check your code cells to see if there is any code that breaks the kernel. How to submit a notebook - Although Lab1 does not count towards your final grade, you are required to submit it, to get familiar with the procedure.- Before you submit, make sure everything runs as expected: 1. **Restart the kernel**: `Kernel->Restart` 1. **run all cells**: `Cell->Run All` To submit your notebook:1. Go to `Assignment` tab of JupyterHub where you fetched the Lab assignment. 1. Expand the Lab1 folder and click the `validate` button next to the notebook(s) to check if all visible tests pass.1. Click `Submit` to submit your notebook. 1. You may submit as many times as you wish before the due date as we collect your latest submission for grading. - *No late submission* will be collected without valid justifications. - *Double check* that you have submitted the correct Lab assignment. - You are responsible for *recording your submission attempt* with a valid timestamp in case of technical issues. **Tips**1. You normally have at least 5 days to work on the lab after your lab session. 1. You can check the due dates of all the labs from the course homepage. 1. You may seek help from us or your classmates. However, you must write your own solution and indicate who your collaborators are using the code:
###Code
COLLABORATORS = ['WONG Xiu Fong', 'LEE Man Kit']
###Output
_____no_output_____
###Markdown
SetupCS1302 Introduction to Computer Programming___ JupyterHub How to access the JupyterHub Server? 1. Enter the url of the Jupyterhub server [ltjh.cs.cityu.edu.hk](https://ltjh.cs.cityu.edu.hk) in a web browser.1. Enter your [EID](https://www.cityu.edu.hk/esu/eid.htm) and Password in the fields `Username` and `Password` respectively.1. Click the `Sign In` button. **Tips**- If the browser is stuck at the following page loading the server, `refresh` your browser. - If you see the following page with ``My Server`` button, click on that button. - If you see the ``Start My Server`` button instead, click on that button to start your server. - For other issues, try logging out using the `Logout` button at the top right-hand corner, and then logging in again. You may also click the `Control Panel` button and restart your server. How to access course materials? 1. Click on the `Assignments` tab, and ensure `cs1302` is chosen in the drop down list.1. In the `Released assignments panel`, click the button `Fetch` to download `Lab1`. 1. `Lab1` should appear in the `Downloaded assignments panel`. 1. Click on the little arrow next to `Lab1` to show its content. 1. Ctrl-Click on `Lab1` to open the assignment folder on a new browser tab. 1. On the new browser, click the folder `cs1302` to navigate to the notebook `Setup.ipynb`.1. Click on `Setup.ipynb` to open the notebook. **Tips**1. Note that all the downloaded course materials will be placed under the `cs1302` folder of your home directory by default, so you need not go to the `Assignments` tab again to open the downloaded materials. E.g., you can access the `Setup.ipynb` notebook as follows: 1. Going to the `File` tab, which is the default JupyterHub homepage after login or when you click the logo on the top left-hand corner. 1. Enter the notebook URL [ltjh.cs.cityu.edu.hk/user-redirect/tree/cs1302/Lab1/Setup.ipynb](https://ltjh.cs.cityu.edu.hk/user-redirect/tree/cs1302/Lab1/Setup.ipynb). (See the [documentation](https://jupyterhub.readthedocs.io/en/stable/reference/urls.html) for details.)1. If for any reason you want to Fetch `Lab1` again, you have to first rename your `Lab1` folder to a different name such as `Lab1_orig`. You can do so by selecting the folder and click rename. You can also remove the folder by evaluating `!rm -rf ~/cs1302/Lab1` in a code cell. (Be very cautious as removed folders cannot be recovered.) Jupyter Notebook How to complete a lab assignment? After opening the `Lab1` notebook `Setup.ipynb`:1. Click `Help->User Interface Tour` to learn the jupyter notebook interface. 1. Click `Help->Notebook Help` and skim through the tutorials on `Running Code` and `Working with Markdown Cells`. **Exercise** In learning a new computer language, the first program to write is often the ["Hello, World!"](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) program, which says Hello to the world. Type the program `print('Hello, World!')` below and run it with `Shift+Enter`.
###Code
### BEGIN SOLUTION
print('Hello, World!')
### END SOLUTION
###Output
Hello, World!
###Markdown
We often ask you to write code in a particular cell. Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE". In order to check your work thoroughly, there will be visible and hidden test cases. The following is a visible test you can run to check your answer: the test returns an assertion error only if your program does not print the correct message.
###Code
# Run this test cell right after running your "Hello, World!" program.
import sys, io
old_stdout, sys.stdout = sys.stdout, io.StringIO()
exec(In[-2])
printed = sys.stdout.getvalue()
sys.stdout = old_stdout
assert printed == 'Hello, World!\n'
###Output
_____no_output_____
###Markdown
**Tips**1. You can repeatedly modify your solution and run the test cell until your solution passes the test. You are not required to know how the test cell is written. 1. To assess your solution thoroughly, we often run new tests hidden from you after you have submitted your notebook. There is no partial credit for a partially correct solution that works for the visible test but fails for the hidden test. Therefore, *you should ensure your solution works in general rather than just the visible tests*.1. You can click the `Validate` button to run all the visible tests.1. If you open the same notebook multiple times in different browser windows, be careful in making changes in different windows. Inconsistent changes may lead to conflicts or loss of your data.1. If your notebook fails to run any code, the Kernel might have died. You can restart the kernel with `Kernel->Restart`. If restarting fails, check your code cells to see if there is any code that breaks the kernel. How to submit a notebook - Although Lab1 does not count towards your final grade, you are required to submit it, to get familiar with the procedure.- Before you submit, make sure everything runs as expected: 1. **Restart the kernel**: `Kernel->Restart` 1. **run all cells**: `Cell->Run All` To submit your notebook:1. Go to `Assignment` tab of JupyterHub where you fetched the Lab assignment. 1. Expand the Lab1 folder and click the `validate` button next to the notebook(s) to check if all visible tests pass.1. Click `Submit` to submit your notebook. 1. You may submit as many times as you wish before the due date as we collect your latest submission for grading. - *No late submission* will be collected without valid justifications. - *Double check* that you have submitted the correct Lab assignment. - You are responsible for *recording your submission attempt* with a valid timestamp in case of technical issues. **Tips**1. You normally have at least 5 days to work on the lab after your lab session. 1. You can check the due dates of all the labs from the course homepage. 1. You may seek help from us or your classmates. However, you must write your own solution and indicate who your collaborators are using the code:
###Code
COLLABORATORS = ['WONG Xiu Fong', 'LEE Man Kit']
###Output
_____no_output_____ |
6_1_rnnn_text_generation.ipynb | ###Markdown
RNNs for Martín Fierro [2020 Version]The goal of the exercises in this tutorial is to show the impact of some design decisions on the implementation of neural networks, in particular recurrent ones. As an example we will look at an implementation of [Karpathy](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)'s character-level RNN for language generation. To train it we will use a fragment of Martín Fierro that you can download [here](https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/martin_fierro.txt). For a more demanding training run, you can use the complete works of Borges, available at [this link](https://drive.google.com/file/d/0B4remi0ZCiqbUFpTS19pSmVFYkU/view?usp=sharing).
###Code
!pip install wget
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import random
import re
import sys
import os
import wget
import unicodedata
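# Optional sketch: fix the random seeds so that sampling and training are reproducible.
random.seed(42)
np.random.seed(42)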
###Output
_____no_output_____
###Markdown
First we read the dataset from the text file and preprocess it to reduce character variation. We normalize the Unicode format, collapse whitespace, and convert everything to lowercase.
###Code
if not os.path.exists('martin_fierro.txt'):
wget.download('https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/martin_fierro.txt')
with open('./martin_fierro.txt', 'r') as finput:
text = unicodedata.normalize('NFC', finput.read()).lower()
text = re.sub('\s+', ' ', text).strip()
print('Corpus length: %d' % len(text))
###Output
_____no_output_____
###Markdown
Next, we count the number of unique characters present in the text and assign each one a unique, sequential index. This index will later be used to build the one-hot encoded representations of the characters.
###Code
chars = sorted(list(set(text)))
print('Total chars: %d' % len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
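# Sketch: round-trip a single character through the index maps and its one-hot vector
# to check that the encoding is consistent.
c = text[0]
one_hot = np.zeros(len(chars), dtype=np.float32)
one_hot[char_indices[c]] = 1.0
assert indices_char[int(one_hot.argmax())] == c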
###Output
_____no_output_____
###Markdown
Language ModelingThe Language Modeling (**LM**) task is to learn $P_{\theta}$, parameterized by $\theta$, that determines $P_{\theta}(x|x_1,...x_t)$, where $x$ can be a character or a word.$LM(x, x_1, ..., x_t) = P_{\theta}(x | x_1,...x_t)$For the generation task, **LM** is the ideal formulation: given an input sequence $x_1, ..., x_t$, we can predict the most probable next token according to our probability $P$.$GenerationLM(x_1, ..., x_t) = max_{x} P_{\theta, x_1,...x_t}(x)$Another option would be to draw the next token according to the generated distribution, i.e., to sample $x \sim P_{\theta, x_1,...x_t}$. Part 1: Skeleton of the neural networkThe first thing we must think about is what architecture our network needs in order to solve the desired task. In this section we create the sequential PyTorch model that represents our network. In the following steps we will implement the corpus transformations, so at this point you may assume any format for the input data.To implement the model we must answer the following questions: - Is it a one-to-one, one-to-many, many-to-one or many-to-many network? - What are the input and output formats of the network? What are the sizes of the input and output matrices (tensors)? - After the input passes through the recurrent layer, what size does the tensor have? - How is the output of the recurrent layer connected to the dense layer that performs the classification? - What is the appropriate loss for this problem? First we import the modules needed to implement our network: - torch: access to the whole framework - torch.nn: gives us access to already implemented layers and to the Module class used to instantiate and build our network
###Code
import torch
import torch.nn as nn
# Check if we have a GPU available
use_cuda = torch.cuda.is_available()
device = torch.device('cuda') if use_cuda else torch.device('cpu')
class MyModel(nn.Module):
def __init__(self, vocab_size, input_size, hidden_layer,
num_layers=1, dropout=0., bias=True,
bidirectional=False):
super(MyModel, self).__init__()
# Set our LSTM parameters
self.lstm_config = {'input_size': input_size,
'hidden_size': hidden_layer,
'num_layers': num_layers,
'bias': bias,
'batch_first': True,
'dropout': dropout,
'bidirectional': bidirectional}
# Set our FC layer parameters
self.linear_config = {'in_features': hidden_layer,
'out_features': vocab_size,
'bias': bias}
# Instanciate the layers
self.encoder = nn.LSTM(**self.lstm_config)
self.decoder = nn.Sequential()
self.decoder.add_module('linear', nn.Linear(**self.linear_config))
self.decoder.add_module('softmax',nn.LogSoftmax(dim=-1))
def forward(self, inputs):
outputs, _ = self.encoder(inputs)
predictions = self.decoder(outputs)
return predictions
model = MyModel(len(chars), len(chars), 128)
print(model)
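# Sketch: push a dummy batch through the untrained model to confirm the expected shapes,
# (batch, seq_len, vocab) in -> (batch, seq_len, vocab) of log-probabilities out.
dummy = torch.zeros(2, 50, len(chars))
with torch.no_grad():
    out = model(dummy)
print(out.shape)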
###Output
_____no_output_____
###Markdown
Part 2: Transforming the inputOnce we have defined the network architecture, we know exactly what input we need to use. In this section we transform the text read from the file into training examples for our network. The result will be one matrix representing the character sequences and another matrix representing the corresponding labels. - How should we represent each example? - How should we represent each label? We will use the torch.utils.data.Dataset class to implement our dataset. We need to implement two methods for this: - \__len__: returns the length of the dataset - \__getitem__: given an id, returns the element in the dataset associated with that id
###Code
from torch.utils.data import Dataset, DataLoader
class MartinFierroDataset(Dataset):
def __init__(self, textdata, maxlen):
self.maxlen = maxlen
# cut the text in sequences of maxlen characters
sentences = []
next_chars = []
for i in range(0, len(textdata) - maxlen - 1, maxlen):
sentences.append(textdata[i: i + maxlen])
next_chars.append(textdata[i + 1: i + maxlen + 1])
self.length = len(sentences)
self.X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.float32)
self.y = np.zeros((len(sentences), maxlen), dtype=np.float32)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
self.X[i, t, char_indices[char]] = 1
self.y[i, t] = char_indices[next_chars[i][t]]
print('NB sequences:', self.length)
def __len__(self):
return self.length
def __getitem__(self, idx):
output = {'X': self.X[idx],
'y': self.y[idx]}
return output
data = MartinFierroDataset(text, 50)
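# Sketch: inspect one training example to verify the encoding.
sample = data[0]
print(sample['X'].shape, sample['y'].shape)  # expected: (50, len(chars)) and (50,)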
###Output
_____no_output_____
###Markdown
Part 3: Training the networkIn this section we train our network. We need some function that lets us monitor the network's progress. For that we will print a sample of the text generated by the network every certain number of epochs.We will use two functions that take a random chunk of text and generate new characters with the given model. - How can we interpret the output of the network? What is the difference between choosing the next character in this problem and choosing the correct class in a classification problem? - What do these functions do? What is the diversity variable used for?
###Code
def temperature_sample(preds, temperature=1.0):
# helper function to sample an index from a probability array\n"
temp_preds = np.asarray(preds[:,-1,:]).astype('float64') / temperature
exp_preds = np.exp(temp_preds)
new_probs = (exp_preds / np.sum(exp_preds)).squeeze()
probas = np.random.multinomial(1, new_probs, 1)
return np.argmax(probas)
def print_sample(model, device, maxlen=50):
with torch.no_grad():
model.eval()
sample_size = 200
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
sentence = text[start_index: start_index + maxlen]
#sentence = 'el bien perdido'
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(sentence)
# Printing the sample
for i in range(sample_size):
x = np.zeros((1, maxlen, len(chars)), dtype=np.float32)
# Build the one-hot encoding for the sentence
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
input_tensor = torch.tensor(x).to(device)
logprob_preds = model(input_tensor)
next_index = temperature_sample(logprob_preds.cpu().numpy(), diversity)
next_char = indices_char[next_index]
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
###Output
_____no_output_____
###Markdown
TrainingFirst we configure the network's hyperparameters. At this point we set the following: - learning_rate - epochs - loss function - optimizer We also define the parameters for torch.utils.data.DataLoader, a class that implements a dataset handler which splits the data into batches (and distributes them across compute nodes when multiple GPUs are available).
###Code
import torch.optim as optim
learning_rate = 0.001
epochs = 22
loss_function = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), learning_rate)
dataloader_config = {'dataset': data,
'batch_size': 32,
'shuffle': True,
'num_workers': 0,
'pin_memory': use_cuda}
# Send the model to GPU if there is one available
model.to(device)
from time import time
historical_loss = torch.FloatTensor()
# Set the model on train mode
model.train()
for epoch in range(1,epochs+1):
loss = 0
start = time()
# Show samples every 20 epochs
if epoch % 20 == 0:
print_sample(model, device)
model.train()
train_loss = torch.FloatTensor().to(device)
dataloader = DataLoader(**dataloader_config)
for i_batch, sample in enumerate(dataloader):
inputs, gt_out = sample['X'].to(device), sample['y'].to(device)
preds = model(inputs)
bs, seqlen, cat = preds.size()
# preds: batch_size x max_seq_length x len(chars)
# gt_out: batch_size x max_seq_length
# NLLLoss expects inputs of the form:
# N x C x d1 x ... x dt for the input
# N x d1 x ... x dt for the targets
# So we transform define N = batch_size x max_seq_length
# and we keep C = len(chars)
loss = loss_function(preds.view(bs*seqlen, -1),
gt_out.view(bs*seqlen).type(torch.long)).unsqueeze(0)
# Set gradients to 0, backpropagate, make an uptimization
# update and store the loss for logging purposes
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss = torch.cat([train_loss, loss])
print("Epoch %03d, Time taken %.2f, Training-Loss %.5f" % (epoch, time()-start, torch.mean(train_loss)))
with torch.no_grad():
historical_loss = torch.cat([historical_loss, torch.mean(train_loss.cpu()).view(1)])
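# Optional sketch: persist the trained weights so sampling can be rerun later without retraining.
torch.save(model.state_dict(), 'martin_fierro_lm.pt')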
###Output
_____no_output_____
###Markdown
Extra exercisesOnce we have implemented the basic architecture of the network, we can start experimenting with different modifications to obtain better results. Some possible tasks are: - Add more recurrent layers - Try other maximum sequence lengths - Add regularization and/or dropout layers - Add performance metrics such as perplexity and word error rate ChecksTo make sure the model is actually training, we can plot the loss function on the validation corpus.
###Code
import matplotlib.pyplot as plt
import numpy as np
import seaborn
plt.figure(figsize=(20,10))
loss_values = historical_loss.detach().numpy()
seaborn.lineplot(x=range(loss_values.shape[0]), y=loss_values)
seaborn.despine()
###Output
_____no_output_____ |
practicos/p2/p2.ipynb | ###Markdown
Introduction to Data Science Practical 2: Data visualization Table of contents- [Import the required libraries](0)- [Subplots using Matplotlib (*)](1)- [Part 1: Multiple plots in one. (*)](2a)- [Part 2: Adjusting colors (*)](2b)- [Adjusting the horizontal and vertical limits of the plots (*)](3)- [Text annotations (***)](4)- [Scatter plots (**)](5)- [Pie charts (****)](6)- [Doughnut charts (***)](7)- [Horizontal bar plots (***)](8)- [Vertical grouped bar plots (****)](9)- [Lineplots part 1 (**)](10)- [Lineplots part 2 (***)](11)- [Back-to-back bars (***)](12)- [Violin plots (*)](13)- [Confidence intervals (*)](14)- [Error bars (**)](15) By the end of this practical you will have learned:- The anatomy of a matplotlib figure.- How to manipulate different features of a plot, such as axes, annotations, colors, size, text, legends, ticks, positions, etc.- How to produce a large number of everyday plots, such as Line Plots, Scatter Plots, Pie Charts, Doughnut Charts, Back-to-back bars, Violin Plots, Confidence Intervals, Error bars, etc.Throughout the practical you will notice asterisks (*) next to each exercise. They indicate the difficulty level. It is suggested to do the work together rather than separately. 0) Import the required libraries
###Code
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import make_blobs
###Output
_____no_output_____
###Markdown
Elements of a Matplotlib plot- **Figure:** refers to the figure we are working on. It is like the "window" that contains our plots. Figures contain axes, which is what we usually mean when we commonly talk about a plot. A figure can have many axes.- **Axes:** the axes object is what we usually mean when we talk about a plot. When we talk about plots we are talking about axes. - **Axis:** sets the limits of a plot. Anatomy of a plot[Source](https://matplotlib.org/examples/showcase/anatomy.html) of the image Example:
###Code
x = np.arange(0, math.pi*4, 0.1)
y = np.cos(x)
plt.xlabel("ángulo en radianes")
plt.ylabel("coseno")
plt.title('Gráfica coseno')
plt.grid(True, which='both')
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
1) Subplots using Matplotlib (*) Subplots let us add multiple sub-plots to a single figure. However, they do not give us the option of placing the different sub-plots anywhere we want inside the figure; instead, the positions of the sub-plots are predetermined by a grid. The easiest way to specify the position of the different sub-plots is the 3-integer notation. To position subplots we use the command `ax = fig.add_subplot(RCP)` where:- `R` (rows) = number of rows in the grid- `C` (columns) = number of columns in the grid- `P` (position) = which position the subplot will occupy in the gridPlot using subplots: - cos(x) in blue- sin(x) in green- cos(x) + sin(x) in red- Set the figure size to 16 inches wide by 8 inches high. **Tip:** `figsize()`- Space the subplots horizontally with a distance of 0.5 so that the subplot titles do not touch each other. **Tip:** `plt.subplots_adjust()`- Add a title to each subplot naming the trigonometric function it plots.- Add a title to the figure that says 'Gráficas trigonométricas'.Your plot should look similar to the following:
###Code
x = np.arange(0, math.pi*4, 0.1)
y1 = np.cos(x)
y2 = np.sin(x)
y3 = y1 + y2
###Output
_____no_output_____
###Markdown
2) Parte 1: Múltiples gráficas en una. (*) Frecuentemente, queremos graficar datos usando una sola gráfica para mejor comparación. Para esto, no es necesario especificar axes. Simplemente podemos usar [plt.plot()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html) múltiples veces. Grafique en una sola gráfica:- cos(x) en verde, línea sólida.- sin(x) en naranja, línea punteada.- cos(x) + sin(x) en rojo, línea punto-raya (-.-.-.-).- Añándale un título como la gráfica anterior y labels en los ejes.- Agréguele una leyenda con labels de cada función.Su gráfica debería verse similar a la siguiente: 2) Parte 2: Ajustar colores (*) El color de una gráfica puede ser ajustado de distintas maneras. 1. Especificando el nombre del color. Método simple pero muy acotado a un número bajo de colores. Ejemplo: 'red', 'green', 'blue', etc. 2. Especificar mediante un [código hexadecimal](https://htmlcolorcodes.com/) los tres canales RGB de la siguiente manera: 'RRGGBB' donde los valores van desde 00 hasta FF. Recordar que los dígitos disponibles en base 16 son 0123456789ABCDEF.3. Una tupla de 3 valores numéricos entre 0 y 1 que especifican las intensidades de los canales RGB. Grafique nuevamente la gráfica anterior pero cambiando los colores mediante los 3 métodos anteriormente mencionados. Cada función debe ser mediante un método distinto. Los colores son a libre elección. 3) Ajustar los límites horizontales y verticales de las gráficas (*) La manera de ajustar los límites de nuestras plots usando matplotlib es mediante los comandos [plt.xlim()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xlim.html) and [plt.ylim()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.ylim.html). Grafique x entre 0 y 12 radianes y tan(x) entre -20 y 20. No se olvide de las labels en los ejes ni la leyenda.
###Code
y4 = np.tan(np.arange(0, math.pi*2, 0.05))
###Output
_____no_output_____
###Markdown
4) Anotaciones de texto (***) Dentro de una gráfica se pueden visualizar y posicionar textos con distintos formatos, tamaños, fuentes, etc. A continuación, usted graficara usando un **scatterplot** los tamaños de tumores de pacientes y generará anotaciones de texto dentro de la gráfica.- Ajuste el tamaño de la imagen a (12,8)- Agréguele un título a la figura que sea 'Detección de outliers', `fontsize` = 24, `fontweight` = bold.- Agregue una única subplot.- Ajústela con `top` = 0.9. Use [fig.subplots_adjust()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots_adjust.html)- Grafique los datos de los pacientes usando [plt.scatter()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html), largo del tumor en el eje x, ancho del tumor en el eje y.- Añadale el siguiente texto usando [ax.text()](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.text.html): *'Población general $\mu$ = 100mm, $\sigma$ = 10mm'* en la posición (125, 150). Use bbox={'facecolor':'red', 'alpha':0.5, 'pad':10}. - Añadale el siguiente texto usando `ax.text()`: *'Los datos visualizados son propiedad privada'*. Esta vez, posicione el texto indicando la posición como un porcentaje del largo de los ejes. El texto debe estar en 0.99 del eje x y 0.01 del eje y. Por ejemplo, (0.5,0.5) sería la mitad del eje x y la mitad del eje y. Use `color` = 'red' y `fontsize` = 10. - Agrégue una flecha apuntando al grupo de outliers que en la punta diga 'outliers'- Establezca los límites de la gráfica en (0,200) para ambos ejes.**Tip:** básese en el siguiente [ejemplo](https://matplotlib.org/3.1.1/gallery/pyplots/text_commands.htmlsphx-glr-gallery-pyplots-text-commands-py) de MatplotlibLa gráfica debería verse similar a la siguiente:
###Code
sample1 = make_blobs(n_samples=1000,centers=[(100,100)], n_features=2,random_state=0,cluster_std=10)
sample2 = make_blobs(n_samples=10,centers=[(50,150)], n_features=2,random_state=0,cluster_std=1)
samples = np.concatenate([sample1[0], sample2[0]])
samples = pd.DataFrame(samples, columns = ['largo', 'ancho'])
samples.head()
###Output
_____no_output_____
###Markdown
5) Scatter plots (**) Las scatter plots son utilizadas para visualizar puntos en un espacio N-dimensional (N acotado, generalmente hasta máximo 6 dimensiones si se usa tamaño, color y forma). A continuación, ejecute la siguiente celda para obtener el iris dataset que contiene datos de flores.- Grafique el largo de las flores en función del ancho.- Grafique con un marcador de estilo `X`.- Establezca el parámetro de transparencia/opacidad alpha en 0.4- Establezca el tamaño del marcador como: (100 x ancho de pétalo).- Que el color dependa de la especie de flor. **Tip:** use la columna target (ordinal encoder de species).- Establezca el colormap como `jet`.- Nombre los ejes y establezca el título de la gráfica.La gráfica debería verse similar a la siguiente:
###Code
iris = pd.read_csv('./data/iris.csv', sep=',')
ordinal_mapping = {
'setosa': 0,
'versicolor': 1,
'virginica': 2
}
iris['target'] = iris['species'].map(ordinal_mapping)
iris.head()
###Output
_____no_output_____
###Markdown
6) Pie charts (****) Las pie charts, aunque criticadas por no ser buenas para comunicar información, son frecuentemente utilizadas. **Las pie charts no son una buena forma de comunicar resultados pues el cerebro humano no es bueno para comparar tamaños de ángulos.** De todas formas, a continuación crearemos una pie chart que visualizará la cantidad de usuarios de la empresa Spotify según el sistema operativo que utiliza.- Referénciese con este [ejemplo](https://matplotlib.org/3.1.1/gallery/pie_and_polar_charts/pie_and_donut_labels.html), el mismo le servirá de guía para realizar este ejercicio.- También le será de utilidad leer qué significan los distintos [parámetros](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.pie.html) del pie chart: - Establezca el tamaño de la figura en (15,8)- Cree una tupla explode. El parámetro `explode` determina el distanciamiento entre las distintas secciones del pie chart. Cómo usted trabaja para Microsoft le interesa que Windows tenga un explode de 0.1 y el resto 0 para visualizarlo mejor.- Establezca la lista `os` como las labels del pie chart.- Utilize los colores que le son dados.- Establezca el ratio entre el centro del pie chart y el comienzo del texto generado por `autopct` en 1.2- Establezca el ratio entre el centro del pie chart y el comienzo de las labels en 1.4- Establezca el título de la leyenda como "Operating Systems" y posiciónelo "lower right".- Establezca el título de la gráfica como"Operating systems users on Spotify".La gráfica debería verse similar a la siguiente:
###Code
users = [80, 40, 1000, 300, 50, 80, 10]
os = ['MacOS', 'Chrome', 'Windows', 'Linux', 'Devian', 'Ubuntu', 'Arch Linux']
cmap = plt.get_cmap("tab20c")
colors = cmap([2,4,6,8,10,12,14])
###Output
_____no_output_____
###Markdown
7) Doughnut charts (***) En clase ha visto la doughnut chart con un ejemplo del porcentaje de mercado de distintas marcas. Reproduzca el ejemplo. Para eso:- Referénciese con este [ejemplo](https://matplotlib.org/3.1.1/gallery/pie_and_polar_charts/nested_pie.htmlsphx-glr-gallery-pie-and-polar-charts-nested-pie-py), el mismo le servirá de guía para realizar este ejercicio:- También le será de utilidad leer qué significan los distintos [parámetros](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.pie.html) del pie chart: - Establezca el tamaño de la figura en (15,8)- Establezca la lista `brands` como las labels del doughnut chart.- Utilize los colores que le son dados.- Establezca el ratio entre el centro del pie chart y el comienzo de las labels para los valores del 2013 en 0.7- Establezca textprops en color blanco para los valores del 2013.- Establezca el título de la gráfica como"Smartphone market share".La gráfica debería verse similar a la siguiente:
###Code
brands = ['Apple', 'Samsung', 'LG', 'Motorola', 'HTC']
outer_vals = [0.48, 0.26, 0.09, 0.09, 0.08] # porcentaje de mercado año 2014
inner_vals = [0.55, 0.25, 0.07, 0.07, 0.06] # porcentaje de mercado año 2013
cmap = plt.get_cmap("Blues")
outer_colors = cmap(np.array([100,200,300,400,500]))
inner_colors = cmap(np.array([100,200,300,400,500]))
###Output
_____no_output_____
###Markdown
8) Horizontal bar plots (***) En clase ha visto las gráficas de barras horizontales con un ejemplo de estudiantes latinos. Reproduzca el ejemplo. Para eso:- Referénciese con este [ejemplo](https://matplotlib.org/3.3.1/gallery/lines_bars_and_markers/barh.html), el mismo le servirá de guía para realizar este ejercicio:- También le será de utilidad leer qué significan los distintos [parámetros](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.barh.html) del horizontal bar chart: - Utilize los valores porcentuales y las labels establecidas.- Establezca los `xticks_labels` y los `yticks_labels`.- Establezca los rótulos de los ejes.- Establezca el título de la gráfica.- Pinte la barra "Adult relatives" de color gris. **Tip:** guíese por este [ejemplo](https://stackoverflow.com/questions/18973404/setting-different-bar-color-in-matplotlib-python)- Añádale líneas verticales grises `dashed`.La gráfica debería verse similar a la siguiente:
###Code
values = [1, 0.59, 0.57, 0.5, 0.18, 0.15, 0.10]
people = ('Parents', 'Friends', 'Adult relatives', 'Teachers', 'Mentors', 'Employers', 'Others adults')
x = ['Parents', 'Friends', 'Adult relatives', 'Teachers', 'Mentors', 'Employers', 'Others adults']
###Output
_____no_output_____
###Markdown
9) Vertical grouped bar plots (****) En clase ha visto las gráficas de barras verticales con un ejemplo de regiones. Reproduzca el ejemplo. Para eso:- Referénciese con este [ejemplo](https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/barchart.html), el mismo le servirá de guía para realizar este ejercicio.- Deberá graficar 7 gráficas ax.bar(), una para cada año.- Establezca los `xticks_labels` (regiones) y los `yticks_labels` (porcentajes).- Establezca el título de la gráfica como 'Porcentajes por región'.- Establezca la posición de la leyenda debajo y afuera de la gráfica. **Tip:** guíeses por este [ejemplo](https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot)- Pinte los años con distintos colores. Puede usar colores hexadecimales con la siguiente [página](https://htmlcolorcodes.com/).- La mayor dificultad de este problema es distanciar bien las barras para que no se superpongan. Para eso, usted tiene el array `x` al cual debe sumarle y restarle valores numéricos para distanciar las barras.- `width` = 0.8La gráfica debería verse similar a la siguiente:
###Code
df = pd.read_csv('./data/regions.csv', sep=',', index_col = 'index')
xlabels = df.index.tolist()
x = np.arange(len(xlabels))*10 # the label locations
df
###Output
_____no_output_____
###Markdown
10) Lineplots parte 1 (**) En clase ha visto lineplots con un ejemplo de regiones. Estas pueden llevar a confusión. Reproduzca el ejemplo visto en clase. Para eso:- Grafique 7 lineplots una para cada año.- Utilize códigos hexadecimales y diferencie bien los colores para no causar confusión.- Establezca los `yticks` y `ylabels`.- Establezca los rótulos de los ejes y el título de la gráfica.La gráfica debería verse similar a la siguiente: 11) Lineplots parte 2 (***) Ahora grafique para una mejor visualización la gráfica anterior pero partiendo cada año en una subplot distinta.- Grafique 7 subplots una para cada año.- Utilize códigos hexadecimales y diferencie bien los colores para no causar confusión.- Establezca los `yticks` y `ylabels`.- Establezca los rótulos de los ejes y el título de la gráfica.La gráfica debería verse similar a la siguiente: 12) Back-to-back bars (***) En clase ha visto back-to-back bars con un ejemplo de causas de muerte para hombres y mujeres. Reproduzca el ejemplo visto en clase. Para eso:- Referénciese a este [ejemplo](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781849513265/1/ch01lvl1sec18/plotting-back-to-back-bar-charts), el cual le servirá como guía.- Determine los `x_ticks` y `x_labels`.- Determine las `y_ticks` y `y_labels`.- Grafique las barras horizontales back to back para hombres y mujeres.- Establezca los rótulos de cada eje, título y leyenda.- Establezca la grilla vertical gris `dashed`.La gráfica debería verse similar a la siguiente:
###Code
diseases = pd.read_csv('./data/diseases.csv', sep=',', index_col='index')
diseases
###Output
_____no_output_____
###Markdown
13) Violin plots (*) Las violin plots suelen usarse como una alternativa más descriptiva a las boxplots. Usualmente es utilizada para visualizar distribuciones de datos. Las violin plots además de ofrecer estadísticas descriptivas generales como la media, mediana y cuartiles ofrece la distribución misma de los datos. A continuación, utilize el dataset que se encuentra en la siguiente celda, el cual contiene 4 distribuciones normales y grafíquelas mediante violin plots en una sola gráfica.- Documentación de matplotlib sobre [violin plots](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.axes.Axes.violinplot.html).La gráfica debería verse similar a la siguiente:
###Code
data = pd.read_csv('./data/distributions.csv', sep=',')
data
###Output
_____no_output_____
###Markdown
14) Intervalos de confianza (*) Al realizar estimaciones es frecuente añadir un intervalo de confianza en nuestras mediciones. A continuación, corra la siguiente celda, la cual contiene un dataset de las ventas estimadas en el correr del tiempo por medio de un algoritmo (como podrá ver los días no son discretos). A estas estimaciones deben serles añadidas un intervalo de confianza por medio de la siguiente fórmula:$$ I.C = 1,96 . \frac{\sigma}{\mu} $$- Grafique las ventas estimadas en el correr del tiempo.- Añada en sombreado el intervalo de confianza. **Tip:** `ax.fill_between()`- Añadale a la gráfica rótulos y el título.La gráfica debería verse similar a la siguiente:
###Code
estimates = pd.read_csv('./data/estimates.csv', sep=',')
estimates.head()
###Output
_____no_output_____
###Markdown
15) Error bars (**) Las gráficas con error bars son efectivas para presentar errores estimados, desviaciones, intervalos de confianza, etc. En clase, usted ha visto ejemplos de error bars para un caso de efectividad de 3 distintos programas. Reproduzca la gráfica vista en clase. Para eso:- Sírvase de los datos que le son ofrecidos a continuación en la siguiente celda.- Referénciese al siguiente [ejemplo](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.errorbar.html).- Agregue `xticks`, `xlabels`, `yticks` y `ylabels`.- Grafique cada punto por separado pero en la misma gráfica.- Utilize la función `ax.annotate()` para posicionar textos con los porcentajes en la gráfica.- Establezca los límites de la gráfica.- Establezca el título de la gráfica.- Establezca una grilla horizontal para la gráfica.La gráfica debería verse similar a la siguiente:
###Code
x = [1, 2, 3]
dy = [0.05, 0.07, 0.05]
y = [0.35, 0.30, 0.40]
###Output
_____no_output_____ |
intrinsics_to_cartesian/python/intrinsics_to_cartesian.ipynb | ###Markdown
Using Intrinsics and Unit VectorsThis Jupyter notebook demonstrates how to use the intrinsic calibration of the camera to calculate the unit vectors.It also demonstrates how to use the unit vectors (whether they were calculated or obtained directly from the camera) to convert radial distance to cartesian information in the camera coordinate frame.The O3D303 camera model is based on the model described in this document:http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
###Code
# Imports assumed by this notebook (they are not shown elsewhere in it)
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.transform import Rotation as R
import ifm3dpy

# Set up the ifm3d objects and data structures
cam = ifm3dpy.Camera()
fg = ifm3dpy.FrameGrabber(cam, ifm3dpy.IMG_AMP | ifm3dpy.INTR_CAL | ifm3dpy.IMG_RDIS | ifm3dpy.IMG_CART)
im = ifm3dpy.ImageBuffer()
# Capture a frame and save references to the components
fg.wait_for_frame(im)
im_xyz = im.xyz_image()
im_rdis = im.distance_image()
im_amp = im.amplitude_image()
im_conf = im.confidence_image()
# Have a look at the scene via the amplitude image
fig = plt.figure()
plt.imshow(im_amp)
###Output
_____no_output_____
###Markdown
Deriving Unit Vectors from IntrinsicsThe unit vector matrix is used to determine the direction of light passing through the camera lens for each pixel in the imager array. These vectors (direction) coupled with the radial distance measurement (magnitude) combine to produce cartesian depth information (xyz).This section will demonstrate how the unit vector matrix can be computed using the camera's intrinsic calibration parameters combined with the [camera model](http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html) which accounts for lens distortion.Note: It is not necessary to compute the unit vectors in application. The unit vector matrix is available pre-computed from the camera by providing the `ifm3dpy.IMG_UVEC` flag in the `ifm3dpy.FrameGrabber`. Since these unit vectors are static (but unique per camera) one may choose to cache the unit vector matrix on startup and then re-instantiate the `ifm3dpy.FrameGrabber` without including the `ifm3dpy.IMG_UVEC` flag in order to minimize network load.
###Code
# Unpack intrinsics values - refer to the camera model documentation for details
fx, fy, mx, my, alpha, k1, k2, k5, k3, k4, *_ = im.intrinsics()
# Create index matrix of the imager size as a starting point
yy,xx = np.indices(im.amplitude_image().shape)
# The intrinsic information is based on the full resolution imager.
# If the current application is using the 23k imager (binning)
# the indices must be scaled up accordingly.
settings = cam.to_json()
if settings['ifm3d']['Apps'][cam.active_application() - 1]['Imager']['Resolution'] == '0':
yy = yy * 2 + 0.5
xx = xx * 2 + 0.5
# Re-center and scale the indices according to the principal point and focal lengths
oy = (yy + 0.5 - my)/fy
ox = (xx + 0.5 - mx)/fx
# Apply the radial and tangential distortion according to the model
ox -= alpha * oy
rd2 = oy**2 + ox**2
radial = 1 + (rd2*(k1 + (rd2*(k2 + (rd2*k5)))))
h = 2*ox*oy
tangx = (k3*h) + (k4*(rd2 + (2*ox*ox)))
tangy = (k3*(rd2 + (2*oy*oy))) + (k4*h)
dx = (ox*radial) + tangx
dy = (oy*radial) + tangy
# Populate and normalize the unit vector matrix
uv = np.zeros(im.amplitude_image().shape + (3,))
uv[:,:,0] = dx
uv[:,:,1] = dy
uv[:,:,2] = 1
uv /= np.linalg.norm(uv, axis=-1, keepdims=True)
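# Sanity check (sketch): after normalization every unit vector should have norm ~1.
assert np.allclose(np.linalg.norm(uv, axis=-1), 1.0)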
# Plot the unit vectors
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
uv[:,:,0].flatten(),
uv[:,:,1].flatten(),
uv[:,:,2].flatten(),
s = 0.1,
marker = ',',
c = uv[:,:,2].flatten())
###Output
_____no_output_____
###Markdown
Computing Cartesian InformationNow that we've derived the unit vectors from the intrinsic calibration and the camera model (or simply read out from the camera as previously discussed), we can use those unit vectors to project the radial depth into cartesian space.The following cells illustrate the process of:1. Combining the unit vectors and radial distance to get cartesian information2. Performing an extrinsic calibration to warp from the sensor frame (theoretical optical center) to the camera frame (center of the glass over the lens)3. Converting this data from the O3D303 sensor coordinate frame (right hand, z-axis out) to the ifm3d coordinate frame (right hand, x-axis out)4. Validating that the computed cartesian data matches the pre-computed values read directly from the sensor
###Code
# 1. Apply the radial distance measurement to the unit vectors to get
# cartesian data in the imager frame
xyz_im = uv*im_rdis[...,np.newaxis]
# Reshape to a kx3 matrix for further processing
xyz_im = xyz_im.reshape((-1,3))
# 2. Use the extrinsics to warp from imager frame to camera frame
# NOTE: If the user has written custom extrinsic information to the device,
# this transformation will be included here.
im_ext = im.extrinsics() # [x y z roll pitch yaw], units are mm and deg
r = R.from_euler('xyz', im_ext[3:], degrees=True).as_matrix()
t = np.array([im_ext[0:3]]).T
xyz_camera = (r.dot(xyz_im.T) + t).T
# Invalid pixels are marked by bit zero in the confidence image.
# Their radial depth measurement is zeroed, but the extrinsic transformation
# in the previous step has induced values. Re-zero these pixels now.
mask = (im_conf.flatten() & 0x1) == 0x1
xyz_camera[mask,:] = 0
# 3. Convert to int16 and transform to ifm3d coordinate frame
x_computed = np.around(xyz_camera[:,2]).astype('int16')
y_computed = np.around(-xyz_camera[:,0]).astype('int16')
z_computed = np.around(-xyz_camera[:,1]).astype('int16')
# Render the scene
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x_computed,
y_computed,
z_computed,
s = 0.1,
marker = ',',
c = x_computed)
# 4. Check for correctness against the pre-computed values read from the sensor
#
# Reshape as an mxnx3 array to compare against the cartesian data returned
# by the camera (consider as 'truth') -- we observe a 1mm error (rounding)
xyz_computed = np.vstack((x_computed, y_computed, z_computed)).T.reshape(im_xyz.shape)
np.max(np.abs(xyz_computed - im_xyz))
###Output
_____no_output_____ |
mini-projects/human_temp/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ###Markdown
What is the True Normal Human Body Temperature? BackgroundThe mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct? ExercisesIn this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.Answer the following questions in this notebook below and submit to your Github account. Is the distribution of body temperatures normal? Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population. Is the sample size large? Are the observations independent? Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply. Is the true population mean really 98.6 degrees F? Would you use a one-sample or two-sample test? Why? In this situation, is it appropriate to use the $t$ or $z$ statistic? Now try using the other test. How is the result be different? Why? Draw a small sample of size 10 from the data and repeat both tests. Which one is the correct one to use? What do you notice? What does this tell you about the difference in application of the $t$ and $z$ statistic? At what temperature should we consider someone's temperature to be "abnormal"? Start by computing the margin of error and confidence interval. Is there a significant difference between males and females in normal temperature? What test did you use and why? Write a story with your conclusion in the context of the original problem. You can include written notes in notebook cells using Markdown: - In the control panel at the top, choose Cell > Cell Type > Markdown - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet Resources+ Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet****
###Code
import pandas as pd
import numpy as np
from scipy import stats
from scipy.stats.mstats import normaltest
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
df = pd.read_csv('data/human_body_temperature.csv')
normaltest(df.temperature).pvalue > 0.01
sns.distplot(df.temperature)
plt.show()
print('Observation Count (#):', df.temperature.count(), '\n')
display(df.gender.value_counts())
#Todo: Are the observations independent?
sample_size = df.temperature.count()
sample_mean = df.temperature.mean()
sample_std = df.temperature.std()
std_error = sample_std / np.sqrt(sample_size)
alpha = 0.005
pop_test_mean = 98.6
p_value = stats.norm.cdf(sample_mean, pop_test_mean, std_error)
print('P-value:', p_value)
print('alpha:', alpha)
if p_value > alpha:
outcome = 'fail to reject the null hypothesis.'
result = 'is greater than'
else:
outcome = 'can reject the null hypothesis.'
result = 'is less than'
#test_outcome = 'Since P-value (' + p_value + ') is '
print(' ', sep='')
print('Test Outcome:', '\n',
'Since p_value ', result,
' our significance level (alpha=',
alpha, '), \n', 'we ', outcome, sep='')
# Todo: Clean up the answer.
#
# Question: Is it appropriate to use the t or z statistic?
# Answer: z statistic since n_samples > 30
# Todo: Answer the question by performing a t test.
#
# Question: Now try using the other test.
# --------- How is the result different? Why?
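# Sketch answering the Todo above: scipy's one-sample t-test against the 98.6 F reference.
t_stat, t_p_value = stats.ttest_1samp(df.temperature, pop_test_mean)
print('t statistic:', t_stat, 'two-sided p-value:', t_p_value)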
# Todo: Draw a small sample of size 10 from the data and repeat both tests.
# Todo: Answer the following questions.
# Which one is the correct one to use?
# What do you notice?
# What does this tell you about the difference in application of the t and z statistic?
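# Sketch for the Todo above: repeat both tests on a random sample of 10 observations.
small = df.temperature.sample(10, random_state=1)
small_t = stats.ttest_1samp(small, pop_test_mean)
small_z = stats.norm.cdf(small.mean(), pop_test_mean, small.std() / np.sqrt(len(small)))
print('t-test p-value:', small_t.pvalue, '| z-based lower-tail p-value:', small_z)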
m_temps = df[df.gender=='M']['temperature']
f_temps = df[df.gender=='F']['temperature']
diff_means = np.abs(m_temps.mean() - f_temps.mean())
std_error = np.sqrt((m_temps.var() / m_temps.count()) +
(f_temps.var() / f_temps.count()))
alpha = 0.005
p_value = stats.norm.cdf(0, diff_means, std_error) * 2
print('P-value:', p_value)
print('alpha:', alpha)
if p_value > alpha:
outcome = 'fail to reject the null hypothesis.'
result = 'is greater than'
else:
outcome = 'can reject the null hypothesis.'
result = 'is less than'
#test_outcome = 'Since P-value (' + p_value + ') is '
print(' ', sep='')
print('Test Outcome:', '\n',
'Since p_value ', result,
' our significance level (alpha=',
alpha, '), \n', 'we ', outcome, sep='')
###Output
P-value: 0.0222873607607
alpha: 0.005
Test Outcome:
Since p_value is greater than our significance level (alpha=0.005),
we fail to reject the null hypothesis.
|
examples/docs/06_named_nodes.ipynb | ###Markdown
Named NodesNamed Nodes allow us to use the same Node multiple times in a single graph at e.g. different steps. Therefore, we can pass a `name` argument to the `__init__` of our Node.Notice that this is one of only very few scenarios where we want to pass an argument directly to the `__init__`
###Code
from zntrack import Node, zn  # assumed import: Node and zn are used below but never imported in this notebook

class HelloWorld(Node):
inputs = zn.params()
outputs = zn.outs()
def __init__(self, inputs=None, **kwargs):
super().__init__(**kwargs)
self.inputs = inputs
def run(self):
self.outputs = self.inputs
HelloWorld(inputs=3).write_graph(no_exec=False)
HelloWorld(name="Test01", inputs=17).write_graph(no_exec=False)
HelloWorld(name="Test02", inputs=42).write_graph(no_exec=False)
!dvc dag
###Output
+------------+
| HelloWorld |
+------------+
+--------+
| Test01 |
+--------+
+--------+
| Test02 |
+--------+
[0m
###Markdown
We can now also build a Node that depends on multiple of the same Nodes
###Code
class FindMaximum(Node):
deps = zn.deps(
[
HelloWorld.load(),
HelloWorld.load(name="Test01"),
HelloWorld.load(name="Test02"),
]
)
maximum = zn.outs()
def run(self):
self.maximum = 0
for node in self.deps:
if node.outputs > self.maximum:
self.maximum = node.outputs
print(f"New maximum found {node.outputs}.")
FindMaximum().write_graph(run=True)
!dvc dag
###Output
+------------+ +--------+ +--------+
| HelloWorld | | Test01 | | Test02 |
+------------+** +--------+ ***+--------+
*** * ***
**** * ****
** * **
+-------------+
| FindMaximum |
+-------------+
[0m
###Markdown
Using this combined Node we can e.g. find the maximum of the generated values.
###Code
FindMaximum.load().maximum
# Running it manually to highlight the print statements
FindMaximum.load().run()
###Output
New maximum found 3.
New maximum found 17.
New maximum found 42.
###Markdown
In addition to the introduced classmethod `Node.load(name="nodename")` it is also possible to use `Node["nodename"]`. Note that this only works for `Node["nodename"]` and not for `Node()["nodename"]`. Using this we can also write the following:
###Code
print(HelloWorld["Test01"].outputs)
print(HelloWorld["Test01"].node_name)
###Output
17
Test01
###Markdown
This is equivalent to the classmethod `load()`. It is also possible to pass a dictionary as kwargs, which will be passed to `load(**kwargs)`.
###Code
print(HelloWorld.load("Test02").outputs)
print(HelloWorld.load("Test02").node_name)
print(HelloWorld[{"name": "Test02"}].outputs)
temp_dir.cleanup()
###Output
_____no_output_____ |
notebooks/tbeucler_devlog/026_Test_generalization.ipynb | ###Markdown
1) Initialization 1.1) Import utilities
###Code
from cbrain.imports import *
from cbrain.data_generator import *
from cbrain.cam_constants import *
from cbrain.losses import *
from cbrain.utils import limit_mem
from cbrain.layers import *
import tensorflow as tf
import tensorflow.math as tfm
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
import xarray as xr
import numpy as np
from cbrain.model_diagnostics import ModelDiagnostics
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as imag
TRAINDIR = '/local/Tom.Beucler/SPCAM_PHYS/'
DATADIR = '/project/meteo/w2w/A6/S.Rasp/SP-CAM/fluxbypass_aqua/'
PREFIX = '8col009_01_'
%cd /filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM
# Otherwise tensorflow will use ALL your GPU RAM for no reason
limit_mem()
# Indices of different variables
PHQ_idx = slice(0, 30)
PHCLDLIQ_idx = slice(30, 60)
PHCLDICE_idx = slice(60, 90)
#TPHYSTND_idx = slice(90, 120)
TPHYSTND_idx = slice(30,60)
###Output
_____no_output_____
###Markdown
1.2) Define models
###Code
# Config and data files
# config_fn = '/filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/8col_rad_tbeucler_local_PostProc.yml'
# data_fn_a = ['/local/Tom.Beucler/SPCAM_PHYS/8col009_01_valid.nc',
# '/local/Tom.Beucler/SPCAM_PHYS/8col009_14_valid.nc',
# '/local/Tom.Beucler/SPCAM_PHYS/8col009_31_valid.nc']
# data_ref = ['','4K','3Kw1']
# dict_lay = {'SurRadLayer':SurRadLayer,'MassConsLayer':MassConsLayer,'EntConsLayer':EntConsLayer,\
# 'weak_loss_0':mse,'weak_loss_1':mse}
# NNarray = ['JNNL','JNNC','MLRL0','JNNL0.01']
# Config and data files for POG experiment
config_fn = ['/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/101_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/104_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/107_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/110_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/113_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/119_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/122_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/125_PostProc.yml']
data0K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/101_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/104_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/107_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/110_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/113_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/119_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/122_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/125_valid.nc']
data4K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/102_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/105_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/108_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/111_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/114_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/120_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/123_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/126_valid.nc']
NNarray = ['POG101','POG104','POG107','POG110','POG113','POG119','POG122','POG125']
NNname = ['q T','RH T','q Tma','RH Tma','q Carnotmax','q TTs','RH TTs','RH Carnotmax']
###Output
_____no_output_____
###Markdown
2) Test where the generalization error for convective heating and moistening is most obvious for different types of networks. Global mean
###Code
NN = {}; md0 = {}; md4 = {};
%cd $TRAINDIR/HDF5_DATA
for i,NNs in enumerate(NNarray):
print('NN name is ',NNs)
path = TRAINDIR+'HDF5_DATA/'+NNs+'.hdf5'
#NN[NNs] = load_model(path,custom_objects=dict_lay)
NN[NNs] = load_model(path)
md0[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data0K_fn[i])
md4[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data4K_fn[i])
#lat_ind = np.arange(32,36) # index over which we evaluate generalization performances
lat_ind = np.arange(15,20)
iini = 2700
iend = 2715
for isim in range(2):
print('isim=',isim)
if isim==0: md = md0
elif isim==1: md = md4
diagno = {} # Diagnostics structure
diagno['truth'] = {} # Diagnostics structure for the truth
for i,NNs in enumerate(NNarray):
diagno[NNs] = {} # Diagnostics structure for each NN
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp, p, truth = md[NNs].get_inp_pred_truth(itime) # [lat, lon, var, lev]
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth']['PHQ'] = md[NNs].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth']['TPHYSTND'] = md[NNs].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['PHQ'] = md[NNs].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['TPHYSTND'] = md[NNs].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][field] = np.concatenate((diagno[NNs][field],
md[NNs].reshape_ngeo(p[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if i==0:
diagno['truth'][field] = np.concatenate((diagno['truth'][field],
md[NNs].reshape_ngeo(truth[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if isim==0: diagno0 = diagno
elif isim==1: diagno4 = diagno
###Output
isim= 0
###Markdown
Idea = Systematic biases should be visible in the mean
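In symbols, the curves below are the mean profile $\overline{y}(p) = \langle y(p) \rangle_{\mathrm{lat,lon},t}$ averaged over the selected latitude band, longitude, and time window, and the 'bias' option plots $\langle y_{NN}(p) - y_{truth}(p) \rangle_{\mathrm{lat,lon},t}$, matching the `np.mean(..., axis=(0,1,3))` reductions in the code below.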
###Code
# Load coordinates
coor = xr.open_dataset("/project/meteo/w2w/A6/S.Rasp/SP-CAM/fluxbypass_aqua/AndKua_aqua_SPCAM3.0_sp_fbp_f4.cam2.h1.0000-01-01-00000.nc",\
decode_times=False)
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
# Plot characteristics
fz = 15
lw = 2
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=fz)
mpl.rcParams['lines.linewidth'] = lw
plt.close('all')
# NNplot = NNarray
# NNplotname = NNname
#NNarrayplot = ['POG101','POG104','POG107','POG110','POG113','POG119','POG122','POG125']
#NNplotname = ['q T','RH T','q Tma','RH Tma','q Carnotmax','q TTs','RH TTs','RH Carnotmax']
NNarrayplot = ['POG101','POG104']
NNplotname = ['q T','RH T']
option = 'full' # Full profile vs profile bias
simu = '0K'
if simu=='0K': diagno = diagno0
elif simu=='+4K': diagno=diagno4
f = plt.figure(num=None, figsize=(15,4), dpi=80, facecolor='w', edgecolor='k')
for ifig,field in enumerate(['PHQ','TPHYSTND']):
print('ifig=',ifig,' and field=',field)
ax = f.add_subplot(1,2,ifig+1)
if option=='full': plt.plot(np.mean(diagno['truth'][field],axis=(0,1,3)),lev,label='Truth',color='k')
elif option=='bias': plt.plot(0*lev**0,lev,label='Truth',color='k')
for i,NNs in enumerate(NNarrayplot):
#for i,NNs in enumerate(NNarray):
if option=='full': plt.plot(np.mean(diagno[NNs][field],axis=(0,1,3)),lev,label=NNplotname[i])
elif option=='bias': plt.plot(np.mean(diagno[NNs][field]-diagno['truth'][field],axis=(0,1,3)),lev,label=NNplotname[i])
if ifig==1: plt.legend()
# plt.ylim((0,200))
# plt.xlim((-10,10))
plt.gca().invert_yaxis()
plt.ylabel('Pressure [hPa]')
plt.xlabel(field+' [W/m2]')
if option=='full': plt.title('Mean '+simu+' lat='+'%02.1f'%coor.lat[lat_ind[0]]+','+'%02.1f'%coor.lat[lat_ind[-1]]+\
' t='+str(iini)+','+str(iend))
elif option=='bias': plt.title('Mean bias '+simu+' lat='+'%02.1f'%coor.lat[lat_ind[0]]+','+'%02.1f'%coor.lat[lat_ind[-1]]+\
' t='+str(iini)+','+str(iend))
###Output
ifig= 0 and field= PHQ
ifig= 1 and field= TPHYSTND
###Markdown
3) Select network with the least global mean bias to test the Variance and tropopause shift
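Besides the variance profiles, the maps further down use the standard coefficient-of-determination skill score per latitude and pressure level, with the averages taken over longitude and time: $R^2(\mathrm{lat}, p) = 1 - \overline{(y_{NN}-y_{truth})^2} \, / \, \mathrm{Var}(y_{truth})$.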
###Code
# Config and data files for POG experiment
config_fn = ['/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/101_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/104_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/110_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/122_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/125_PostProc.yml']
data0K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/101_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/104_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/110_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/122_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/125_valid.nc']
data4K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/102_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/105_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/111_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/123_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/126_valid.nc']
NNarray = ['POG101','POG104','POG110','POG122','POG125']
NNname = ['q T','RH T','RH Tma','RH TTs','RH Carnotmax']
NN = {}; md0 = {}; md4 = {};
%cd $TRAINDIR/HDF5_DATA
for i,NNs in enumerate(NNarray):
print('NN name is ',NNs)
path = TRAINDIR+'HDF5_DATA/'+NNs+'.hdf5'
NN[NNs] = load_model(path)
md0[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data0K_fn[i])
md4[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data4K_fn[i])
lat_ind = np.arange(32,36) # index over which we evaluate generalization performances
#lat_ind = np.arange(20,25) # Based on notebook 027
iini = 2500
iend = 2520
for isim in range(2):
print('isim=',isim)
if isim==0: md = md0
elif isim==1: md = md4
diagno = {} # Diagnostics structure
diagnotot = {}
R2latp = {}
diagno['truth'] = {} # Diagnostics structure for the truth
diagnotot['truth'] = {}
R2latp['truth'] = {}
for i,NNs in enumerate(NNarray):
diagno[NNs] = {} # Diagnostics structure for each NN
diagnotot[NNs] = {}
R2latp[NNs] = {}
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp, p, truth = md[NNs].get_inp_pred_truth(itime) # [lat, lon, var, lev]
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth']['PHQ'] = md[NNs].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth']['TPHYSTND'] = md[NNs].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagnotot['truth']['PHQ'] = md[NNs].reshape_ngeo(truth[:,PHQ_idx])[:,:,:,np.newaxis]
diagnotot['truth']['TPHYSTND'] = md[NNs].reshape_ngeo(truth[:,TPHYSTND_idx])[:,:,:,np.newaxis]
diagno[NNs]['PHQ'] = md[NNs].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['TPHYSTND'] = md[NNs].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagnotot[NNs]['PHQ'] = md[NNs].reshape_ngeo(p[:,PHQ_idx])[:,:,:,np.newaxis]
diagnotot[NNs]['TPHYSTND'] = md[NNs].reshape_ngeo(p[:,TPHYSTND_idx])[:,:,:,np.newaxis]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][field] = np.concatenate((diagno[NNs][field],
md[NNs].reshape_ngeo(p[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
diagnotot[NNs][field] = np.concatenate((diagnotot[NNs][field],
md[NNs].reshape_ngeo(p[:,ind_field])[:,:,:,np.newaxis]),
axis=3)
if i==0:
diagno['truth'][field] = np.concatenate((diagno['truth'][field],
md[NNs].reshape_ngeo(truth[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
diagnotot['truth'][field] = np.concatenate((diagnotot['truth'][field],
md[NNs].reshape_ngeo(truth[:,ind_field])[:,:,:,np.newaxis]),
axis=3)
for istr,field in enumerate(['PHQ','TPHYSTND']):
R2latp[NNs][field] = 1-(np.mean((diagnotot[NNs][field]-diagnotot['truth'][field])**2,axis=(1,3))/\
np.var(diagnotot['truth'][field],axis=(1,3)))
R2latp['truth'][field] = np.mean(diagnotot['truth'][field],axis=(1,3))**0
if isim==0: diagno0 = diagno; R2latp0 = R2latp
elif isim==1: diagno4 = diagno; R2latp4 = R2latp
del(diagnotot)
sqrtvar0 = {};
sqrtvar4 = {};
# Variance (sqrt) of the diagnosed fields
for isim in range(2):
print('isim=',isim)
for i,NNs in enumerate(np.concatenate((['truth'],NNarray))):
sqrtvar0[NNs] = {};
sqrtvar4[NNs] = {};
for istr,field in enumerate(['PHQ','TPHYSTND']):
sqrtvar0[NNs][field] = np.std(diagno0[NNs][field],axis=(0,1,3))
sqrtvar4[NNs][field] = np.std(diagno4[NNs][field],axis=(0,1,3))
# Load coordinates
coor = xr.open_dataset("/project/meteo/w2w/A6/S.Rasp/SP-CAM/fluxbypass_aqua/AndKua_aqua_SPCAM3.0_sp_fbp_f4.cam2.h1.0000-01-01-00000.nc",\
decode_times=False)
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
# Plot characteristics
fz = 15
lw = 2
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=fz)
mpl.rcParams['lines.linewidth'] = lw
plt.close('all')
option = 'full' # Full profile vs profile bias
simu = '0K'
VAR = 'sqrtvar'
if simu=='0K':
if VAR=='sqrtvar': toplot = sqrtvar0
elif simu=='+4K':
if VAR=='sqrtvar': toplot = sqrtvar4
f = plt.figure(num=None, figsize=(15,4), dpi=80, facecolor='w', edgecolor='k')
for ifig,field in enumerate(['PHQ','TPHYSTND']):
print('ifig=',ifig,' and field=',field)
ax = f.add_subplot(1,2,ifig+1)
if option=='full': plt.plot(toplot['truth'][field],lev,label='Truth',color='k')
elif option=='bias': plt.plot(0*lev**0,lev,label='Truth',color='k')
for i,NNs in enumerate(NNarray):
if option=='full': plt.plot(toplot[NNs][field],lev,label=NNname[i])
elif option=='bias': plt.plot(toplot[NNs][field]-toplot['truth'][field],lev,label=NNname[i])
plt.legend()
plt.gca().invert_yaxis()
plt.ylabel('Pressure [hPa]')
plt.xlabel(field+' [W/m2]')
if option=='full': plt.title('Sqrt Var '+simu+' lat='+'%02.1f'%coor.lat[lat_ind[0]]+','+'%02.1f'%coor.lat[lat_ind[-1]]+\
' t='+str(iini)+','+str(iend))
elif option=='bias': plt.title('Sqrt Var bias '+simu+' lat='+'%02.1f'%coor.lat[lat_ind[0]]+','+'%02.1f'%coor.lat[lat_ind[-1]]+\
' t='+str(iini)+','+str(iend))
simu = '+4K'
lowbnd = 0
if simu=='0K': toplot = R2latp0
elif simu=='+4K': toplot = R2latp4
fig, axes = plt.subplots(np.size(NNarray),2, figsize=(15,7.5), sharey = True)
for ifig,field in enumerate(['PHQ','TPHYSTND']):
print('ifig=',ifig,' and field=',field)
for i,NNs in enumerate(NNarray):
im = axes[i,ifig].contourf(lat, lev, np.transpose(np.maximum(lowbnd,toplot[NNs][field])), 20, vmin = lowbnd, vmax = 1, extend='both')
if ifig==1: axes[i,ifig].set_ylabel(NNname[i]);
if i==0: axes[i,ifig].set_title(simu+' '+field+' R2')
axes[i,ifig].invert_yaxis()
cbar = plt.colorbar(im,ax=axes.ravel().tolist())
###Output
ifig= 0 and field= PHQ
ifig= 1 and field= TPHYSTND
###Markdown
4) Diagnose upwards shift of convective heating and moistening
###Code
# Config and data files for POG experiment
config_fn = ['/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/101_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/104_PostProc.yml']
data0K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/101_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/104_valid.nc']
data4K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/102_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/105_valid.nc']
NNarray = ['POG101','POG104']
NNname = ['q T','RH T']
NN = {}; md0 = {}; md4 = {};
%cd $TRAINDIR/HDF5_DATA
for i,NNs in enumerate(NNarray):
print('NN name is ',NNs)
path = TRAINDIR+'HDF5_DATA/'+NNs+'.hdf5'
NN[NNs] = load_model(path)
md0[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data0K_fn[i])
md4[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data4K_fn[i])
lat_ind = np.arange(32,36) # index over which we evaluate generalization performances
#lat_ind = np.arange(20,25) # Based on notebook 027
iini = 2500
iend = 2550
diagno = {} # Diagnostics structure
diagno['truth'] = {} # Diagnostics structure for the truth
for i,NNs in enumerate(NNarray):
diagno[NNs] = {} # Diagnostics structure for each NN
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp0, p0, truth0 = md0[NNs].get_inp_pred_truth(itime) # [lat, lon, var, lev]
inp4, p4, truth4 = md4[NNs].get_inp_pred_truth(itime) # [lat, lon, var, lev]
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth']['PHQ'] = md4[NNs].reshape_ngeo(truth4[:,PHQ_idx])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(truth0[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth']['TPHYSTND'] = md4[NNs].reshape_ngeo(truth4[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(truth0[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['PHQ'] = md4[NNs].reshape_ngeo(p4[:,PHQ_idx])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(p0[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['TPHYSTND'] = md4[NNs].reshape_ngeo(p4[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(p0[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][field] = np.concatenate((diagno[NNs][field],
md4[NNs].reshape_ngeo(p4[:,ind_field])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(p0[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if i==0:
diagno['truth'][field] = np.concatenate((diagno['truth'][field],
md4[NNs].reshape_ngeo(truth4[:,ind_field])[lat_ind,:,:,np.newaxis]-\
md0[NNs].reshape_ngeo(truth0[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
# Load coordinates
coor = xr.open_dataset("/project/meteo/w2w/A6/S.Rasp/SP-CAM/fluxbypass_aqua/AndKua_aqua_SPCAM3.0_sp_fbp_f4.cam2.h1.0000-01-01-00000.nc",\
decode_times=False)
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
diagno['truth']['TPHYSTND'].shape
field = 'PHQ'
plt.figure(figsize=(5,5))
plt.plot(np.mean(diagno['truth'][field],axis=(0,1,3)),coor.lev,label='truth')
plt.plot(np.mean(diagno['POG101'][field],axis=(0,1,3)),coor.lev,label='q T')
plt.plot(np.mean(diagno['POG104'][field],axis=(0,1,3)),coor.lev,label='RH T')
plt.legend()
plt.gca().invert_yaxis()
plt.xlabel('4K-0K difference in '+field+' (W/m2)')
plt.ylabel('Pressure (hPa)')
coor.lev
###Output
_____no_output_____
###Markdown
5) Diagnose lack of generalization at high latitudes to pick best normalized temperature coordinate
###Code
# Config and data files for POG experiment
config_fn = ['/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/101_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/104_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/110_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/122_PostProc.yml',
'/home/t/Tom.Beucler/SPCAM/CBRAIN-CAM/pp_config/125_PostProc.yml']
data0K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/101_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/104_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/110_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/122_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/125_valid.nc']
data4K_fn = ['/local/Tom.Beucler/SPCAM_PHYS/102_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/105_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/111_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/123_valid.nc',
'/local/Tom.Beucler/SPCAM_PHYS/126_valid.nc']
NNarray = ['POG102','POG105','POG111','POG123','POG126']
NNname = ['q T','RH T','RH Tma','RH TTs','RH Carnotmax']
NN = {}; md0 = {}; md4 = {};
%cd $TRAINDIR/HDF5_DATA
for i,NNs in enumerate(NNarray):
print('NN name is ',NNs)
path = TRAINDIR+'HDF5_DATA/'+NNs+'.hdf5'
NN[NNs] = load_model(path)
md0[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data0K_fn[i])
md4[NNs] = ModelDiagnostics(NN[NNs],config_fn[i],data4K_fn[i])
#lat_ind = np.arange(32,36) # index over which we evaluate generalization performances
lat_ind = np.arange(0,64)
iini = 2700
iend = 2715
for isim in range(2):
print('isim=',isim)
if isim==0: md = md0
elif isim==1: md = md4
diagno = {} # Diagnostics structure
diagno['truth'] = {} # Diagnostics structure for the truth
for i,NNs in enumerate(NNarray):
diagno[NNs] = {} # Diagnostics structure for each NN
for itime in tqdm(np.arange(iini,iend)):
# Get input, prediction and truth from NN
inp, p, truth = md[NNs].get_inp_pred_truth(itime) # [lat, lon, var, lev]
# Get convective heating and moistening for each NN
if itime==iini:
if i==0:
diagno['truth']['PHQ'] = md[NNs].reshape_ngeo(truth[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno['truth']['TPHYSTND'] = md[NNs].reshape_ngeo(truth[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['PHQ'] = md[NNs].reshape_ngeo(p[:,PHQ_idx])[lat_ind,:,:,np.newaxis]
diagno[NNs]['TPHYSTND'] = md[NNs].reshape_ngeo(p[:,TPHYSTND_idx])[lat_ind,:,:,np.newaxis]
else:
for istr,field in enumerate(['PHQ','TPHYSTND']):
if field=='PHQ': ind_field = PHQ_idx
elif field=='TPHYSTND': ind_field = TPHYSTND_idx
diagno[NNs][field] = np.concatenate((diagno[NNs][field],
md[NNs].reshape_ngeo(p[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if i==0:
diagno['truth'][field] = np.concatenate((diagno['truth'][field],
md[NNs].reshape_ngeo(truth[:,ind_field])[lat_ind,:,:,np.newaxis]),
axis=3)
if isim==0: diagno0 = diagno
elif isim==1: diagno4 = diagno
# Load coordinates
coor = xr.open_dataset("/project/meteo/w2w/A6/S.Rasp/SP-CAM/fluxbypass_aqua/AndKua_aqua_SPCAM3.0_sp_fbp_f4.cam2.h1.0000-01-01-00000.nc",\
decode_times=False)
lat = coor.lat; lon = coor.lon; lev = coor.lev;
coor.close();
# Plot characteristics
fz = 15
lw = 2
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=fz)
mpl.rcParams['lines.linewidth'] = lw
plt.close('all')
diagno['truth'][field].shape
NNarrayplot = NNarray
NNplotname = NNname
#NNarrayplot = ['POG101','POG104','POG107','POG110','POG113','POG119','POG122','POG125']
#NNplotname = ['q T','RH T','q Tma','RH Tma','q Carnotmax','q TTs','RH TTs','RH Carnotmax']
# NNarrayplot = ['POG101','POG104']
# NNplotname = ['q T','RH T']
ilat = np.arange(45,60)
option = 'full' # Full profile vs profile bias
simu = '0K'
if simu=='0K': diagno = diagno0
elif simu=='+4K': diagno=diagno4
f = plt.figure(num=None, figsize=(15,4), dpi=80, facecolor='w', edgecolor='k')
for ifig,field in enumerate(['PHQ','TPHYSTND']):
print('ifig=',ifig,' and field=',field)
ax = f.add_subplot(1,2,ifig+1)
if option=='full': plt.plot(np.mean(diagno['truth'][field][ilat,:,:,:],axis=(0,1,3)),lev,label='Truth',color='k')
elif option=='bias': plt.plot(0*lev**0,lev,label='Truth',color='k')
for i,NNs in enumerate(NNarrayplot):
if option=='full': plt.plot(np.mean(diagno[NNs][field][ilat,:,:,:],axis=(0,1,3)),lev,label=NNplotname[i])
elif option=='bias': plt.plot(np.mean(diagno[NNs][field][ilat,:,:,:]-\
diagno['truth'][field][ilat,:,:,:],axis=(0,1,3)),lev,label=NNplotname[i])
if ifig==1: plt.legend()
plt.gca().invert_yaxis()
plt.ylabel('Pressure [hPa]')
plt.xlabel(field+' [W/m2]')
if option=='full': plt.title('Mean '+simu+' lat='+'%02.1f'%coor.lat[ilat[0]]+','+'%02.1f'%coor.lat[ilat[-1]]+\
' t='+str(iini)+','+str(iend))
elif option=='bias': plt.title('Mean bias '+simu+' lat='+'%02.1f'%coor.lat[ilat[0]]+','+'%02.1f'%coor.lat[ilat[-1]]+\
' t='+str(iini)+','+str(iend))
###Output
ifig= 0 and field= PHQ
ifig= 1 and field= TPHYSTND
|
notebooks/Simulating_a_Cantilever.ipynb | ###Markdown
Simulating an AFM Cantilever response in Python The simulation code integrates the ordinary differential equation describing an AFM cantilever's motion. That, by itself, is not particularly interesting as it's just an exercise for undergraduates in physics courses. What IS interesting is adding some sort of transient excitation and seeing the response. The simulation code in the **FFTA** package allows for an excitation voltage to be applied at some particular point in time, which therefore affects the cantilever's motion. More specifically, this simulation applies a shift in the resonance frequency with some time constant and an electrostatic force that changes with the same time constant. By default, that time-dependent response is an exponential decay (1-exp(-t/tau)), but it can be any callable function. A secondary option is to apply an electric drive to the cantilever to see second resonance responses, but that is outside the scope of this notebook. Much of this is used in the following publication: Karatay DU, Harrison JA, et al. Fast time-resolved electrostatic force microscopy: Achieving sub-cycle time resolution. *Rev Sci Inst.* **87,** 053702 (2016). [DOI: 10.1063/1.4948396](http://dx.doi.org/10.1063/1.4948396)
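For reference, the underlying equation of motion is the standard driven damped harmonic oscillator, stated here in its textbook form as a sketch rather than copied from the library source: $\ddot{z} + \frac{\omega_0}{Q}\dot{z} + \omega_0^2 z = \frac{F(t)}{m}$, where $\omega_0$ is the (possibly time-dependent) resonance frequency, $Q$ the quality factor, $m$ the effective mass, and $F(t)$ the sum of the mechanical drive and the transient electrostatic force.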
###Code
!pip install FFTA==0.2
import ffta
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
import numpy as np
from ffta.simulation.mechanical_drive import MechanicalDrive
from ffta.simulation.load import simulation_configuration
from ffta.simulation import excitation
# Specify our configuration file
# In this case, we'll download our shared file and use that. This block works for CoLab; for Jupyter, download from the URL then set path manually
!pip install wget
import wget
path = r'example_sim_params.cfg'
wget.download('https://raw.githubusercontent.com/rajgiriUW/ffta/master/ffta/simulation/example_sim_params_roundnumbers.cfg', path, bar=None)
path = r'/content/example_sim_params.cfg'
print(path)
###Output
Requirement already satisfied: wget in /usr/local/lib/python3.6/dist-packages (3.2)
/content/example_sim_params.cfg
###Markdown
The configuration file
The configuration is usually some .cfg file on the disk. Typically it has a form like this:
```
[Cantilever Parameters]
amp_invols = 5.52e-08 ;in m/V.
def_invols = 5.06e-08 ;in m/V.
soft_amp = 0.3 ;in m/V
drive_freq = 272244.5 ;in Hz.
res_freq = 272218.4 ;in m.
k = 26.2 ;in N/m.
q_factor = 432

[Force Parameters]
es_force = 4e-9 ;in N.
delta_freq = -170 ;in Hz.
tau = 10e-9 ;in seconds.

[Simulation Parameters]
trigger = 16384e-7 ;in seconds.
total_time = 32768e-7 ;in seconds.
sampling_rate = 1e7 ;in Hz.
```
Other possible parameters for the Force are:
* dC/dz (a typical value of 1e-10 F/m is fine)
* V_cpd, V_dc, V_ac : voltages that describe the electric drive signal

Currently there's no way to simply supply a drive signal as might be recorded in an experiment, but that will be added.
Let's go through what some of these parameters mean:
```
amp_invols: Amplitude inverted optical lever sensitivity, converts amplitude from Volts to meters
def_invols: Deflection inverted optical lever sensitivity, converts the deflection from Volts to meters
soft_amp: "Soft amplitude" which is just an archaic way of saying cantilever amplitude during EFM
drive_freq and res_freq: The driving frequency and resonance frequency of the cantilever
k: spring constant
q_factor: Quality factor of the cantilever (i.e. FWHM of the tune curve peak)
es_force: Electrostatic force. This value is measured experimentally by applying a voltage and measuring the deflection (using Hooke's Law F = -kz)
delta_freq: When shifting the resonance frequency, this is how much to shift it by
tau: time constant for the exponential function applied to the cantilever
trigger: When during the simulation to apply a change in the resonance frequency
total_time: Total time to integrate for
sampling_rate: This is only used to have the simulation match the experiment. The experiment is often at 1e7 Hz (10 MHz). The simulation is currently hard-coded at 100 MHz then down-sampled at the end.
```
What does that mean? In our simulation the resonance frequency is changed at some point in time. In the config file this is **trigger** and the total length of the simulation is **total_time**. So, for a cantilever where **total_time** = 1 ms and **trigger** = 0.5 ms, at 0.5 ms the resonance frequency is shifted by an amount **delta_freq**, with the rate of the shift controlled by **tau**.
The next step is to load the configuration file. This next line creates three dictionaries corresponding to the three sections above.
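As a concrete illustration of the trigger/tau behaviour described above, here is a minimal sketch based on the `1-exp(-t/tau)` form mentioned in the introduction (not the exact library implementation):

```python
import numpy as np

def single_exp_sketch(t_after_trigger, tau):
    # fraction of the full shift that has been applied, rising from 0 toward 1
    return 1 - np.exp(-t_after_trigger / tau)

# With the example config quoted above (delta_freq = -170 Hz, tau = 10 ns), the
# instantaneous resonance shift at a time t after the trigger is roughly
# delta_freq * single_exp_sketch(t - trigger, tau).
```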
###Code
can_params, force_params, sim_params = simulation_configuration(path)
print('Force parameters are', force_params)
###Output
Force parameters are {'es_force': 3e-09, 'delta_freq': -200.0, 'tau': 1e-05}
###Markdown
Create the cantilever object
---------------
The simulation uses a base class, Cantilever, and the interesting parts of the simulation are done by classes that inherit from it. In the next cell, we create the Cantilever object using the class MechanicalDrive.
```
cant = ffta.simulation.mechanical_drive.MechanicalDrive(can_params, force_params, sim_params)
```
This creates "cant", which is an object of the class "MechanicalDrive", which itself inherits from Cantilever. Note that you could be a little more readable and say:
```
params = ffta.simulation.load.simulation_configuration(path)
cant = ffta.simulation.mechanical_drive.MechanicalDrive(*params)
```
...but that's a style choice. Anyway, now that we have our MechanicalDrive object, we can simulate it.
###Code
cant = MechanicalDrive(can_params, force_params, sim_params)
###Output
_____no_output_____
###Markdown
Properties of our cantilever
The object "cant" here contains many parameters, for which you may want to check out the source code. To see some relevant things, you could type for example:
```
cant.tau
cant.trigger
cant.total_time
cant.drive_freq
cant.res_freq
```
In Python, these are all stored in a dictionary called .__dict__ that you rarely need to access. Of these, "tau" is the time constant assuming a single exponential.
###Code
# Here is a list of ALL the parameters when you initialize the object
for k, v in cant.__dict__.items():
print(k, v)
###Output
amp_invols 5e-08
def_invols 5e-08
soft_amp 1.0
drive_freq 300000.0
res_freq 300000.0
k 20.0
q_factor 450.0
w0 1884955.5921538759
wd 1884955.5921538759
beta 2094.3951023931954
mass 5.628954646796543e-12
amp 5e-08
f0 394.7841760435743
delta 1.5707963267948966
es_force 3e-09
delta_freq -200.0
tau 1e-05
delta_w -1256.6370614359173
fe 532.9586376588254
trigger 0.0002
total_time 0.005
sampling_rate 10000000.0
t_Z [0.00000e+00 1.00002e-07 2.00004e-07 ... 4.99980e-03 4.99990e-03
5.00000e-03]
freq_Z [0.0000e+00 2.0000e+02 4.0000e+02 ... 4.9996e+06 4.9998e+06 5.0000e+06]
fit_params {'filter_amplitude': True, 'method': 'hilbert', 'fit': True, 'fit_form': 'product'}
parameters {'es_force': 3e-09, 'delta_freq': -200.0, 'tau': 1e-05, 'trigger': 0.0002, 'total_time': 0.005, 'sampling_rate': 10000000.0, 'bandpass_filter': 1.0, 'drive_freq': 300000.0, 'filter_bandwidth': 10000.0, 'n_taps': 799, 'roi': 0.0003, 'window': 'blackman', 'wavelet_analysis': 0}
can_params {'amp_invols': 5e-08, 'def_invols': 5e-08, 'soft_amp': 1.0, 'drive_freq': 300000.0, 'res_freq': 300000.0, 'k': 20.0, 'q_factor': 450.0}
df 100000000.0
use_varray False
func <function single_exp at 0x7fc340fdfea0>
func_args [1e-05]
###Markdown
Simulate the cantilever
The simulation can take two optional parameters:
```
trigger_phase: float, optional
    Trigger phase is in degrees and with respect to cosine. Default value is 180.
Z0 : list, optional
    Z0 = [z0, v0], the initial position and velocity
    If not specified, is calculated from the analytical solution to DDHO (using class function "set_conditions")
```
**Trigger_phase** controls the phase of the cantilever when the excitation is applied. In our paper on this subject, we found that the phase does matter; if the cantilever is at some random phase relative to the excitation, you can lose time-resolution. Practically speaking, this effect is not all that important except at very fast timescales. The cantilever, prior to ODE integration, will try to set the initial conditions for the differential equation. That is, it will try to set Z(0) and v(0), the initial position and velocity for time t=0. The default is to set them such that the cantilever is at steady-state conditions for t=0, avoiding any wasted simulation time.
**Z0** Instead of defaulting, you can set the initial conditions explicitly. This parameter is useful for electrically-driven cantilevers where the solution to the DDHO is not well known.
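A hedged usage sketch for the first parameter (the value is an arbitrary placeholder; the `Z0` option is demonstrated in the cells below):

```python
# Trigger when the cantilever is 90 degrees into its cycle instead of the default 180
Z_phase, info = cant.simulate(trigger_phase=90)
```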
###Code
# Run the simulation!
Z, info = cant.simulate()
###Output
_____no_output_____
###Markdown
Plot the resultsNow that we have our simulation, let's see what it looks like. Note that Z is rescaled to Volts to match an experiment, but you can multiply by *cant.def_invols*
###Code
plt.plot(cant.t_Z * 1e3, Z)
plt.xlabel('Time (ms)')
plt.ylabel('Deflection (V)')
plt.title('Simulation with time constant = ' + str(cant.tau) + ' s')
###Output
_____no_output_____
###Markdown
As you can see, the cantilever oscillates steadily until the trigger time (0.2 ms with this config), where there's a change due to an exponential excitation with time constant 10 µs (1e-5 s). Let's see what happens if we change the initial conditions.
###Code
Z, info = cant.simulate(Z0 = [0,0])
plt.plot(cant.t_Z * 1e3, Z)
plt.xlabel('Time (ms)')
plt.ylabel('Deflection (V)')
plt.title('Simulation with initial conditions 0 m, 0 m/s')
###Output
_____no_output_____
###Markdown
What happened? Well, because we specified the cantilever to be at rest at time t=0 (Z0 = [0,0]), the cantilever had to oscillate for awhile before it settled at its steady-state condition. This initial part of the simulation is not all that useful, and it just wastes time, so we set the initial conditions. Change Parameters for the SimulationNow, what if we change up some of the parameters? Let's try a different time constant, maybe of 1e-3 s (1 ms).```cant.func_args = [1e-3]```In the cantilever, we will call a function that applies a single exponential decay to the voltage. That function requires a parameter, which we supply in the **func_args** list. We can use this approach for** bi-exponential **and **stretched exponentials** as well.Below, we will change the time constant to **1e-8 s**, **1e-4 s**, and **1e-2 s** and see the result
###Code
# This is how you might change the time constant
cant.tau = 1e-3
fig, a = plt.subplots(nrows=3, ncols = 2, figsize=(12,14))
cant.tau = 1e-8
cant.func_args = [cant.tau]
Z, info = cant.simulate()
a[0][0].plot(cant.t_Z * 1e3, Z, 'b')
a[0][1].plot(cant.t_Z * 1e3, Z, 'b')
a[0][0].set_ylabel('Deflection (V), 10 ns')
a[0][0].set_title('Simulation with time constant = ' + str(cant.tau) + ' s')
a[0][1].set_title('Zooming in on the trigger time')
a[0][1].set_xlim(0.18, 0.5)
a[0][1].set_ylim(0.9, 1)
cant.tau = 1e-4
cant.func_args = [cant.tau]
Z, info = cant.simulate()
a[1][0].plot(cant.t_Z * 1e3, Z, 'b')
a[1][1].plot(cant.t_Z * 1e3, Z, 'b')
a[1][0].set_ylabel('Deflection (V), 100 us')
a[1][0].set_title('Simulation with time constant = ' + str(cant.tau) + ' s')
a[1][1].set_xlim(0.18, 0.5)
a[1][1].set_ylim(0.9, 1)
cant.tau = 1e-2
cant.func_args = [cant.tau]
Z, info = cant.simulate()
a[2][0].plot(cant.t_Z * 1e3, Z, 'b')
a[2][1].plot(cant.t_Z * 1e3, Z, 'b')
a[2][0].set_ylabel('Deflection (V), 10 ms')
a[2][0].set_title('Simulation with time constant = ' + str(cant.tau) + ' s')
a[2][1].set_xlim(0.18, 0.5)
a[2][1].set_ylim(0.9, 1)
a[2][0].set_xlabel('Time (ms)')
a[2][1].set_xlabel('Time (ms)')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
a[0][0].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
a[1][0].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
a[2][0].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
What happened? Here, we are changing how quickly the resonance frequency changes. In the 10 ns and 100 us cases, the cantilever looks similar. However, the key is what's happening at the trigger time. For the 10 ns case, there's a sharp change at the trigger (~0.2 ms here). For the 100 us case, it's a slow change. For the 10 ms case, there's barely any change. That means the interesting dynamics are in this region of the time window. The cantilevers in all three cases "relax" at about the same rate, because that is just dependent upon the cantilever Q factor for the most part, and it is unrelated to any transient effect. Finally, let's try changing the electrostatic force and the delta_freq (the amount of electrostatic force applied, i.e. the voltage, and the amount the resonance frequency shifts by). These values should normally not be changed independently as they're related to one another and are acquired during a measurement. But, just for fun, let's see what happens.
###Code
fig, a = plt.subplots(nrows=3, figsize=(6, 11))
cant.tau = 1e-6
cant.func_args = [cant.tau]
cant.es_force = 4e-9
cant.delta_freq = -170
cant.delta_w = 2* np.pi * cant.delta_freq # convert to radians
Z, info = cant.simulate()
a[0].plot(cant.t_Z * 1e3, Z, 'g')
a[0].set_ylabel('Deflection (V), Fe = 4 nN')
a[0].set_title('Changing Force, Fe = 4 nN, -170 Hz shift')
cant.es_force = 4e-7 # 100 X the force
cant.delta_freq = -170 # same -170 Hz shift as the first panel
cant.delta_w = 2* np.pi * cant.delta_freq # convert to radians
Z, info = cant.simulate()
a[1].plot(cant.t_Z * 1e3, Z, 'g')
a[1].set_ylabel('Deflection (V), Fe = 400 nN')
a[1].set_title('Changing Force, Fe = 400 nN, -170 Hz shift')
cant.es_force = 4e-9
cant.delta_freq = -600 # 600 Hz is a huge shift
cant.delta_w = 2* np.pi * cant.delta_freq # convert to radians
Z, info = cant.simulate()
a[2].plot(cant.t_Z * 1e3, Z, 'g')
a[2].set_ylabel('Deflection (V)')
a[2].set_xlabel('Time (ms)')
a[2].set_title('Changing Force, Fe = 4 nN, -600 Hz shift')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
a[0].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
a[1].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
a[2].plot(np.ones(len(trigger_line))*cant.trigger * 1e3, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
As you can see, the Electrostatic Force alone doesn't do too much. The change in resonance frequency, though, changes a lot! This effect is because the change in resonance frequency changes the factors proportional to z and z' in the DDHO equation. Changing the Excitation---------We can supply a range of functions and parameters to the MechanicalDrive object to change the parameters of the simulation. In the ```excitation.py``` file there are several different excitation functions listed. These are (currently) * **single exponential*** **biexponential (sum)** * **stretched exponential**We can also supply an **arbitrary excitation** as long as it is scaled from *0 to 1*. I'll go through an example of each. Single ExponentialThis is actually the default case. But, let's go through it explicitly:``` Create a cantilever using single exponential change to resonance, 1 ms tauparams = [can_params, force_params, sim_params]MechanicalDrive(*params, func = excitation.single_exp, func_args=[1e-3])``````func``` is any callable function. What this does is apply a change such that omega_0 and Fe (resonance frequency, electrostatic force) are multiplied by this function once the trigger occurs.
###Code
cant = MechanicalDrive(can_params, force_params, sim_params, func=excitation.single_exp, func_args=[1e-3])
time_axis = np.arange(cant.trigger, cant.total_time, 1/cant.sampling_rate)
tau = 1e-3
exc_function = excitation.single_exp(time_axis - cant.trigger, tau)
fig, ax = plt.subplots(nrows=3, figsize=(8,16))
Z, _ = cant.simulate()
ax[0].plot(cant.t_Z, Z, 'b')
ax[0].plot(time_axis, exc_function, 'r--', linewidth=5)
ax[0].set_title('Single exponential change with tau = ' + str(tau) + ' s')
ax[0].set_xlabel('Time (s)')
# Change tau and resimulate
tau = 1e-2
cant.func_args = [tau]
exc_function = excitation.single_exp(time_axis - cant.trigger, tau)
Z, _ = cant.simulate()
ax[1].plot(cant.t_Z, Z, 'b')
ax[1].plot(time_axis, exc_function, 'g--', linewidth=5)
ax[1].set_title('Single exponential change with tau = ' + str(tau) + ' s')
ax[1].set_xlabel('Time (s)')
# Change tau and resimulate
tau = 1e-5
cant.func_args = [tau]
exc_function = excitation.single_exp(time_axis - cant.trigger, tau)
Z, _ = cant.simulate()
ax[2].plot(cant.t_Z, Z, 'b')
ax[2].plot(time_axis, exc_function, 'k--', linewidth=5)
ax[2].set_title('Single exponential change with tau = ' + str(tau) + ' s')
ax[2].set_xlabel('Time (s)')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
ax[0].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[1].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[2].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
Bi-Exponential
These functions are also included in the file ```excitation.py``` and can be easily added. They're called "bi_exp" and "str_exp". If you already have a cantilever, you need to change two things:
1. ```cant.func```
2. ```cant.func_args```

You can also change these when calling the function in the first place.
###Code
# Bi-exponential with tau1 = 1e-3 and tau2 = 1e-5
cant = MechanicalDrive(can_params, force_params, sim_params, func=excitation.bi_exp, func_args=[1e-3, 1e-5])
time_axis = np.arange(cant.trigger, cant.total_time, 1/cant.sampling_rate)
exc_function = excitation.bi_exp(time_axis - cant.trigger, *cant.func_args)
fig, ax = plt.subplots(nrows=3, figsize=(12,16))
Z, _ = cant.simulate()
ax[0].plot(cant.t_Z, Z, 'b')
ax[0].plot(time_axis, exc_function, 'r--', linewidth=5)
ax[0].set_title('Bi-exponential change with tau1 = ' + str(cant.func_args[0]) + ' s and tau2 = ' + str(cant.func_args[1]) + ' s')
ax[0].set_xlabel('Time (s)')
# Change tau1 to 1e-2
cant.func_args = [1e-2, 1e-5]
exc_function = excitation.bi_exp(time_axis - cant.trigger, *cant.func_args)
Z, _ = cant.simulate()
ax[1].plot(cant.t_Z, Z, 'b')
ax[1].plot(time_axis, exc_function, 'g--', linewidth=5)
ax[1].set_title('Bi-exponential change with tau1 = ' + str(cant.func_args[0]) + ' s and tau2 = ' + str(cant.func_args[1]) + ' s')
ax[1].set_xlabel('Time (s)')
# Change tau2 to 1e-7 and tau1 to 1e-3
cant.func_args = [1e-3, 1e-7]
exc_function = excitation.bi_exp(time_axis - cant.trigger, *cant.func_args)
Z, _ = cant.simulate()
ax[2].plot(cant.t_Z, Z, 'b')
ax[2].plot(time_axis, exc_function, 'k--', linewidth=5)
ax[2].set_title('Bi-exponential change with tau1 = ' + str(cant.func_args[0]) + ' s and tau2 = ' + str(cant.func_args[1]) + ' s')
ax[2].set_xlabel('Time (s)')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
ax[0].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[1].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[2].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
Stretched Exponential This can be modified in the same way, except that now:```cant.func_args = [tau, beta]```You can also change these when calling the function in the first place, like above.
###Code
# Stretched exponential with tau = 1e-3 and beta=0.5
cant = MechanicalDrive(can_params, force_params, sim_params, func=excitation.str_exp, func_args=[1e-3, 0.5])
time_axis = np.arange(cant.trigger, cant.total_time, 1/cant.sampling_rate)
exc_function = excitation.str_exp(time_axis - cant.trigger, *cant.func_args)
fig, ax = plt.subplots(nrows=3, figsize=(12,16))
Z, _ = cant.simulate()
ax[0].plot(cant.t_Z, Z, 'b')
ax[0].plot(time_axis, exc_function, 'r--', linewidth=5)
ax[0].set_title('Stretched exponential change with tau = ' + str(cant.func_args[0]) + ' s and beta = ' + str(cant.func_args[1]))
ax[0].set_xlabel('Time (s)')
# Change tau to 1e-2, keep beta the same
cant.func_args = [1e-2, 0.5]
exc_function = excitation.str_exp(time_axis - cant.trigger, *cant.func_args)
Z, _ = cant.simulate()
ax[1].plot(cant.t_Z, Z, 'b')
ax[1].plot(time_axis, exc_function, 'g--', linewidth=5)
ax[1].set_title('Stretched exponential change with tau = ' + str(cant.func_args[0]) + ' s and beta = ' + str(cant.func_args[1]))
ax[1].set_xlabel('Time (s)')
# Change tau back to 1e-3, make beta 0.8
cant.func_args = [1e-3, 0.8]
exc_function = excitation.str_exp(time_axis - cant.trigger, *cant.func_args)
Z, _ = cant.simulate()
ax[2].plot(cant.t_Z, Z, 'b')
ax[2].plot(time_axis, exc_function, 'k--', linewidth=5)
ax[2].set_title('Stretched exponential change with tau = ' + str(cant.func_args[0]) + ' s and beta = ' + str(cant.func_args[1]))
ax[2].set_xlabel('Time (s)')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
ax[0].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[1].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
ax[2].plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
Arbitrary Function
You can easily supply an arbitrary function, as long as you provide the relevant parameters.
```
def new_function(t, tau, tau2, tau3):
    return t * tau + t * tau2**2 - tau3

cant = MechanicalDrive(*params, func=new_function, func_args=[tau, tau2, tau3])
```
You can also directly change these after the ```cant``` object exists:
```
cant.func = new_function
cant.func_args = [tau, tau2, tau3]
```
Note that the function ***should scale from 0 to 1*** to work. Let's try a quick example with a linear ramp.
###Code
def linear_ramp(t, slope, yoffset):
return slope*t + yoffset
cant = MechanicalDrive(can_params, force_params, sim_params, func=linear_ramp, func_args=[1.5e2, 0])
time_axis = np.arange(cant.trigger, cant.total_time, 1/cant.sampling_rate)
exc_function = linear_ramp(time_axis - cant.trigger, *cant.func_args)
fig, ax = plt.subplots(figsize=(8, 8))
Z, _ = cant.simulate()
ax.plot(cant.t_Z, Z, 'b')
ax.plot(time_axis, exc_function, 'r--', linewidth=5)
ax.set_title('Linear ramp with slope ' + str(cant.func_args[0]) + ' and offset = ' + str(cant.func_args[1]))
ax.set_xlabel('Time (s)')
# Add a line to mark the trigger at each point
trigger_line = np.linspace(-1.1, 1.1, 10)
ax.plot(np.ones(len(trigger_line))*cant.trigger, trigger_line, 'r-.')
###Output
_____no_output_____
###Markdown
Arbitrary Array
Lastly, you can also just directly supply the scaling yourself. This is passed differently, using the parameter ```v_array```. ```v_array``` must be the **same sampling rate and length** as the time axis you are simulating over. This process requires a little thought. You could create a simple numpy array that's the length you want if you know the parameters ```total_time``` and ```sampling_rate```. If you multiply a time by a sampling_rate, you get an index of an array.
###Code
# Supply an arbitrary voltage array that effectively turns off the drive_frequency
print(sim_params)
arb_array = np.zeros(int(sim_params['sampling_rate']*sim_params['total_time']))
trigger_index = int(sim_params['trigger'] * sim_params['sampling_rate'])
half_index = int(0.5* (sim_params['total_time'] - sim_params['trigger']) * sim_params['sampling_rate'])
threefourths_index = int(0.75* (sim_params['total_time'] - sim_params['trigger']) * sim_params['sampling_rate'])
arb_array[trigger_index:half_index] = 0.5
arb_array[half_index:threefourths_index] = 0
arb_array[threefourths_index:] = 1
cant = MechanicalDrive(can_params, force_params, sim_params, v_array=arb_array)
time_axis = np.arange(0, cant.total_time, 1/cant.sampling_rate)
fig, ax = plt.subplots(figsize=(8, 8))
Z, _ = cant.simulate()
ax.plot(cant.t_Z, Z, 'b')
ax.plot(time_axis, arb_array, 'r--', linewidth=5)
ax.set_title('Arbitrary array')
ax.set_xlabel('Time (s)')
###Output
_____no_output_____ |
Jerk_jounce_etc.ipynb | ###Markdown
Jerk, jounce, etc.
This notebook accompanies a blog post on [Agile](https://agilescintific.com/blog/2018/3/6/jounce-crackle-and-pop). First, the usual preliminaries...
###Code
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Load the data
This dataset is from this (slightly weird) blog post https://www.duckware.com/blog/tesla-elon-musk-nytimes-john-broder-feud/index.html. It was the only decent bit of telemetry data I could find. I doubt it's properly licensed. If you have access to any open data — maybe from a Formula 1 car, or maybe your own vehicle, I'd love to know about it!
###Code
data = np.loadtxt('data/tesla_speed.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
Convert `x` to m and `v` to m/s, per the instructions in the blog post about the dataset (modified for metric units).
###Code
x = (data[:, 0] + 3) * 2.05404
x = x - np.min(x)
v_x = np.mean(data[:, 1:], axis=1) * 0.0380610
plt.plot(x, v_x)
plt.xlabel('Displacement [m]')
plt.ylabel('Velocity [m/s]')
plt.show()
###Output
_____no_output_____
###Markdown
Note that the sampling was done per unit of displacement; we'd really prefer time. Let's convert it!
Time conversion
Convert to the time domain, since we want derivatives with respect to time, not distance.
###Code
elapsed_time = np.cumsum(1 / v_x)
###Output
/Users/matt/anaconda3/envs/geocomp/lib/python3.6/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in true_divide
if __name__ == '__main__':
###Markdown
Adjust the last entry, to avoid a very long interval.
###Code
elapsed_time[-1] = 2 * elapsed_time[-2] - elapsed_time[-3]
t = np.linspace(0, elapsed_time[-1], 1000)
v_t = np.interp(t, elapsed_time, v_x)
plt.plot(t, v_t)
plt.show()
###Output
_____no_output_____
###Markdown
Compute integrals
Use trapezoidal integral approximation, https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.cumtrapz.html
###Code
import scipy.integrate
# Displacement, d
d = scipy.integrate.cumtrapz(v_t, t, initial=0)
plt.plot(t, d)
plt.show()
# Absement
abt = scipy.integrate.cumtrapz(d, t, initial=0)
# Absity
aby = scipy.integrate.cumtrapz(abt, t, initial=0)
# Abseleration
abn = scipy.integrate.cumtrapz(aby, t, initial=0)
plt.plot(abn)
plt.show()
###Output
_____no_output_____
###Markdown
That's a boring graph! Check that derivative of displacement gives back velocity
Use a Savitzky-Golay filter for differentiation with some smoothing: https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
###Code
import scipy.signal
dt = t[1] - t[0]
# Check that the Savitzky-Golay filter gives velocity from d/dt displacement.
v_ = scipy.signal.savgol_filter(d, delta=dt, window_length=3, polyorder=2, deriv=1)
plt.figure(figsize=(15, 3))
plt.plot(t, v_, lw=3)
plt.plot(t, v_t, '--', lw=3)
###Output
_____no_output_____
###Markdown
It does: we seem to be computing integrals properly. Compute derivatives
###Code
# Acceleration
a = scipy.signal.savgol_filter(v_t, delta=dt, window_length=11, polyorder=2, deriv=1)
plt.figure(figsize=(15,3))
plt.plot(a, lw=3, color='green')
plt.axhline(c='k', lw=0.5, zorder=0)
plt.show()
plt.figure(figsize=(15,3))
plt.imshow([a], cmap='RdBu_r', vmin=-1.6, vmax=1.6, alpha=0.8,
aspect='auto', extent=[t.min(), t.max(), v_t.min(), v_t.max()])
plt.colorbar(label="Acceleration [m/s²]")
plt.plot(t, v_t, 'white', lw=4)
plt.plot(t, v_t, 'green')
plt.title("Velocity (green) and acceleration (red-blue)")
plt.xlabel('Time [s]')
plt.ylabel('Velocity [m/s]')
plt.grid('off')
plt.show()
###Output
_____no_output_____
###Markdown
Jerk, jounce, and so on
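For reference, these quantities are simply successive time derivatives of position $x(t)$: velocity $v = dx/dt$, acceleration $a = d^2x/dt^2$, jerk $j = d^3x/dt^3$, jounce (snap) $s = d^4x/dt^4$, crackle $c = d^5x/dt^5$, and pop $p = d^6x/dt^6$. The integrals computed earlier run the other way: absement, absity, and abseleration are the first, second, and third time integrals of the displacement.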
###Code
j = scipy.signal.savgol_filter(v_t, delta=dt, window_length=11, polyorder=2, deriv=2)
s = scipy.signal.savgol_filter(v_t, delta=dt, window_length=15, polyorder=3, deriv=3)
c = scipy.signal.savgol_filter(v_t, delta=dt, window_length=19, polyorder=4, deriv=4)
p = scipy.signal.savgol_filter(v_t, delta=dt, window_length=23, polyorder=5, deriv=5)
plt.figure(figsize=(15,3))
plt.imshow([j], cmap='RdBu_r', vmin=-3, vmax=3, alpha=0.8,
aspect='auto', extent=[t.min(), t.max(), v_t.min(), v_t.max()])
plt.colorbar(label="Jerk [m/s³]")
plt.plot(t, v_t, 'white', lw=4)
plt.plot(t, v_t, 'green')
plt.title("Velocity (green) and jerk (red-blue)")
plt.xlabel('Time [s]')
plt.ylabel('Velocity [m/s]')
plt.grid('off')
plt.show()
###Output
_____no_output_____
###Markdown
Plot everything!
###Code
plots = {
'Abseleration': abn,
'Absity': aby,
'Absement': abt,
'Displacement': d,
'Velocity': v_t,
'Acceleration': a,
'Jerk': j,
'Jounce': s,
# 'Crackle': c,
# 'Pop': p,
}
colors = ['C0', 'C0', 'C0', 'C1', 'C2', 'C2', 'C2', 'C2']
fig, axs = plt.subplots(figsize=(15,15), nrows=len(plots))
pos = 0.01, 0.8
params = dict(fontsize=13)
for i, (k, v) in enumerate(plots.items()):
ax = axs[i]
ax.plot(t, v, lw=2, color=colors[i])
ax.text(*pos, k, transform=ax.transAxes, **params)
# if np.min(v) < 0:
# ax.axhline(color='k', lw=0.5, zorder=0)
if i < len(plots)-1:
ax.set_xticklabels([])
plt.show()
###Output
_____no_output_____ |
homeworks/homework03/homework03_Neural_Machine_Translation_gru_attention_beam_search_v2.ipynb | ###Markdown
Homework №3 Neural Machine Translation in the wild
In the third homework you are supposed to get the best translation you can for the EN-RU translation task. A basic approach using RNNs as encoder and decoder is implemented for you. Your ultimate task is to use the techniques we've covered, e.g.
* Optimization enhancements (e.g. learning rate decay; a short scheduler sketch follows below)
* CNN encoder (with or without positional encoding)
* attention/self-attention mechanism
* pretraining the language model
* [Byte Pair Encoding](https://github.com/rsennrich/subword-nmt)
* or just fine-tuning BERT ;)

to improve the translation quality. __Please use at least three different approaches/models and compare them (translation quality/complexity/training and evaluation time).__ Write down a summary of your experiments and illustrate it with convergence plots/metrics and your thoughts, just like you would approach a real problem.
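As a hedged illustration of the first option (learning-rate decay), a standard PyTorch scheduler can be wrapped around whatever optimizer the training setup defines later; `optimizer` and `valid_loss` here are placeholder names for those objects:

```python
# Halve the learning rate when the validation loss stops improving for one epoch
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=1)
# ... at the end of each epoch, after evaluating on the validation set:
# scheduler.step(valid_loss)
```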
###Code
from datetime import datetime
DEVICE_NAME = 'cuda:1'
now = datetime.now().strftime("%Y-%m-%d--%H-%M-%S")
model_name = f'gru_attention_beam_search_{now}'
# Thanks to YSDA NLP course team for the data
# (who thanks tilda and deephack teams for the data in their turn)
import os
path_to_data = '../../datasets/Machine_translation_EN_RU/data.txt'
if not os.path.exists(path_to_data):
print("Dataset not found locally. Downloading from github.")
!wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/Machine_translation_EN_RU/data.txt -nc
path_to_data = './data.txt'
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
import torchtext
from torchtext.data import Field, BucketIterator
from nltk.tokenize import WordPunctTokenizer
from nltk.translate.bleu_score import corpus_bleu
from tqdm import tqdm
import time
import random
import matplotlib.pyplot as plt
%matplotlib inline
from utils import generate_translation, remove_tech_tokens, get_text, \
parse_tensorboard_logs, plot_metrics, beam_search, _len_sort_key, init_weights, count_parameters
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
###Output
_____no_output_____
###Markdown
Main part __Here comes the preprocessing. Do not hesitate to use BPE or more complex preprocessing ;)__
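For example, Byte Pair Encoding can be bolted onto this pipeline with the `subword-nmt` CLI. This is only a hedged sketch: the file names are placeholders for plain-text files you would first dump the tokenized training sentences into.

```python
# Learn a BPE merge table on the (placeholder) training text and apply it
!subword-nmt learn-bpe -s 8000 < train.tok.ru > bpe.codes.ru
!subword-nmt apply-bpe -c bpe.codes.ru < train.tok.ru > train.bpe.ru
# The Fields below would then be built on the BPE-segmented text instead of raw word tokens
```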
###Code
tokenizer_W = WordPunctTokenizer()
def tokenize(x, tokenizer=tokenizer_W):
return tokenizer.tokenize(x.lower())
SRC = Field(tokenize=tokenize,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize=tokenize,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
dataset = torchtext.data.TabularDataset(
path=path_to_data,
format='tsv',
fields=[('trg', TRG), ('src', SRC)]
)
train_data, valid_data, test_data = dataset.split(split_ratio=[0.8, 0.15, 0.05], random_state=random.seed(SEED))
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
SRC.build_vocab(train_data, min_freq=3)
TRG.build_vocab(train_data, min_freq=3)
print(f"Unique tokens in source (ru) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
###Output
Unique tokens in source (ru) vocabulary: 9256
Unique tokens in target (en) vocabulary: 6734
###Markdown
Here are tokens from the original (RU) corpus:
###Code
SRC.vocab.itos[::1000]
###Output
_____no_output_____
###Markdown
And from the target (EN) corpus:
###Code
TRG.vocab.itos[::1000]
###Output
_____no_output_____
###Markdown
And here is an example from the train dataset:
###Code
idx = 9
print(' '.join(train_data.examples[idx].src))
print(' '.join(train_data.examples[idx].trg))
###Output
также предлагается доставка продуктов , услуги прачечной и гладильные услуги .
other facilities offered at the property include grocery deliveries , laundry and ironing services .
###Markdown
Let's check the length distributions:
###Code
def plt_len_dist(data, name):
src_length = list(map(len, [x.src for x in data.examples]))
trg_length = list(map(len, [x.trg for x in data.examples]))
print(f'Length distribution in {name} data')
print(f'Max source length: {max(src_length)}')
print(f'Max target length: {max(trg_length)}')
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.title("source length")
plt.hist(list(src_length), bins=20);
plt.subplot(1, 2, 2)
plt.title("translation length")
plt.hist(list(trg_length), bins=20)
plt_len_dist(train_data, 'Train')
plt_len_dist(valid_data, 'Validation')
plt_len_dist(test_data, 'Test')
###Output
Length distribution in Test data
Max source length: 80
Max target length: 99
###Markdown
Model side __Here comes a simple pipeline for NMT model training. It closely follows the week03 practice.__
###Code
device = torch.device(DEVICE_NAME if torch.cuda.is_available() else 'cpu')
device
def eval_bleu(model, test_iterator, target_vocab=TRG.vocab, beam_width=1, with_tqdm=True):
assert beam_width > 0
original_text = []
generated_text = []
model.eval()
with torch.no_grad():
if with_tqdm:
test_iterator = tqdm(test_iterator, position=0, leave=True)
for i, batch in enumerate(test_iterator):
src = batch.src # [src sent len, batch size]
trg = batch.trg # [trg sent len, batch size]
if beam_width == 1:
output = model(src, trg, 0) #turn off teacher forcing
#output = [trg sent len, batch size, output dim]
output = output.argmax(dim=-1) # [trg sent len, batch size]
else:
output = beam_search(model, src, trg, target_vocab, beam_width) # [trg sent len, batch size]
original_text.extend([get_text(x, target_vocab) for x in trg.cpu().numpy().T])
generated_text.extend([get_text(x, target_vocab) for x in output[1:].detach().cpu().numpy().T])
return corpus_bleu([[text] for text in original_text], generated_text) * 100
def get_teacher_forcing_ratio(epoch, base_teacher_forcing_ratio=0.5, decay=1):
return base_teacher_forcing_ratio * decay ** epoch
def train(model, iterator, optimizer, criterion, clip, epoch):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src # [src sent len, batch size]
trg = batch.trg # [trg sent len, batch size]
optimizer.zero_grad()
teacher_forcing_ratio = get_teacher_forcing_ratio(epoch)
output = model(src, trg, teacher_forcing_ratio) # [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1]) # [(trg sent len - 1) * batch size, output dim]
trg = trg[1:].view(-1) # [(trg sent len - 1) * batch size]
loss = criterion(output, trg)
loss.backward()
# Let's clip the gradient
nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
batch_loss = loss.item()
epoch_loss += batch_loss
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src # [src sent len, batch size]
trg = batch.trg # [trg sent len, batch size]
output = model(src, trg, 0) #turn off teacher forcing
# [trg sent len, batch size, output dim]
            output = output[1:].view(-1, output.shape[-1]) # [(trg sent len - 1) * batch size, output dim]
            trg = trg[1:].view(-1) # [(trg sent len - 1) * batch size]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
def training_procedure(model, model_name, train_iterator, valid_iterator,
optimizer, lr_scheduler, criterion, writer, clip, n_epochs, beam_width=10):
best_valid_bleu = float('-inf')
for epoch in tqdm(range(n_epochs)):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, clip, epoch)
valid_loss = evaluate(model, valid_iterator, criterion)
lr_scheduler.step(valid_loss)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
valid_bleu_greedy = eval_bleu(model, valid_iterator, beam_width=1, with_tqdm=False)
writer.add_scalar('Validation BLEU (Greedy)',
valid_bleu_greedy,
epoch)
if beam_width is None:
valid_bleu_beam = float('-inf')
else:
valid_bleu_beam = eval_bleu(model, valid_iterator, beam_width=beam_width, with_tqdm=False)
writer.add_scalar(f'Validation BLEU (BeamSearch@{beam_width})',
valid_bleu_beam,
epoch)
max_bleu = max(valid_bleu_greedy, valid_bleu_beam)
if max_bleu > best_valid_bleu:
best_valid_bleu = max_bleu
torch.save(model.state_dict(), f'models/{model_name}.pt')
writer.add_scalar('Train loss',
train_loss,
epoch)
writer.add_scalar('Validation loss',
valid_loss,
epoch)
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {np.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {np.exp(valid_loss):7.3f}')
if beam_width is None:
print(f'\tVal. BLEU (Greedy): {valid_bleu_greedy:.3f}')
else:
print(f'\tVal. BLEU (Greedy): {valid_bleu_greedy:.3f} | Val. BLEU (BeamSearch@{beam_width}): {valid_bleu_beam:.3f}')
def get_tensorboard_dir(model_name):
return f'runs/{model_name}'
def print_samples(model, test_iterator, indices=range(0, 10), beam_widths=[2,10]):
batch = next(iter(test_iterator))
for idx in indices:
src = batch.src[:, idx:idx+1]
trg = batch.trg[:, idx:idx+1]
generate_translation(src, trg, model, TRG.vocab, beam_widths)
###Output
_____no_output_____
###Markdown
Let's use GRU encoder-decoder with attention
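The `GruEncoder`, `AttentionGruDecoder` and `AttentionGruSeq2Seq` classes used below live in the local `models` module, which is not shown here. For orientation only, an additive (Bahdanau-style) attention block at the heart of such a decoder could look roughly like this; it is an illustrative sketch, not the actual code in `models.py`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Scores every encoder state against the current decoder state and builds a context vector."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: [batch, dec_dim], enc_outputs: [src len, batch, enc_dim]
        energy = torch.tanh(self.W_enc(enc_outputs) + self.W_dec(dec_hidden).unsqueeze(0))
        weights = F.softmax(self.v(energy).squeeze(-1), dim=0)      # [src len, batch]
        context = (weights.unsqueeze(-1) * enc_outputs).sum(dim=0)  # [batch, enc_dim]
        return context, weights
```

The decoder would concatenate this context vector with the current input embedding at every step before the GRU update.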
###Code
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device,
sort_key=_len_sort_key
)
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
BIDIRECTIONAL = False
LR = 1e-3
CLIP = 1
N_EPOCHS = 40
from models import GruEncoder, AttentionGruDecoder, AttentionGruSeq2Seq
enc = GruEncoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, BIDIRECTIONAL, ENC_DROPOUT)
dec = AttentionGruDecoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = AttentionGruSeq2Seq(enc, dec, device).to(device)
model.apply(init_weights)
count_parameters(model)
PAD_IDX = TRG.vocab.stoi['<pad>']
optimizer = optim.AdamW(model.parameters(), lr=LR, weight_decay=0.001, amsgrad=True)
lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.3, patience=2)
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
tensorboard_dir = get_tensorboard_dir(model_name)
print(tensorboard_dir)
writer = SummaryWriter(tensorboard_dir)
training_procedure(model, model_name, train_iterator, valid_iterator,
optimizer, lr_scheduler, criterion, writer, clip=CLIP, n_epochs=N_EPOCHS)
###Output
0%| | 0/40 [00:00<?, ?it/s]
###Markdown
**Let's load best model**
###Code
with open(f'models/{model_name}.pt', 'rb') as fp:
best_state_dict = torch.load(fp, map_location='cpu')
model.load_state_dict(best_state_dict)
###Output
_____no_output_____
###Markdown
**And look at its predictions**
###Code
print_samples(model, test_iterator)
print('Test BLEU (Greedy):', eval_bleu(model, test_iterator, beam_width=1))
print('Test BLEU (BeamSearch@2):', eval_bleu(model, test_iterator, beam_width=2))
print('Test BLEU (BeamSearch@5):', eval_bleu(model, test_iterator, beam_width=5))
print('Test BLEU (BeamSearch@10):', eval_bleu(model, test_iterator, beam_width=10))
print('Test BLEU (BeamSearch@16):', eval_bleu(model, test_iterator, beam_width=16))
print('Test BLEU (BeamSearch@32):', eval_bleu(model, test_iterator, beam_width=32))
###Output
100%|██████████| 59/59 [07:57<00:00, 14.41s/it]
###Markdown
**And plot train/val metrics**
###Code
logs = parse_tensorboard_logs(tensorboard_dir)
printable_model_name = '_'.join(model_name.split('_')[:-1])
plot_metrics(logs, printable_model_name)
###Output
_____no_output_____ |
04RNN/00word_embedding.ipynb | ###Markdown
Word embeddings. A word embedding is a way of representing each word in a text as a numeric vector. A text can contain tens of thousands of distinct words, and handling them with one-hot encoding is very inefficient: a word represented as a one-hot vector is almost entirely zeros, so most of the products between the one-hot input vector and the hidden-layer weights are zero as well. **Embeddings** were introduced to remove this inefficiency. An embedding layer has the same form as the fully connected layers we saw earlier, and its weights are called the embedding weights. The layer can be viewed as a lookup table that returns an embedding vector for the integer index mapped to each word. Because the table has as many rows as there are words in the vocabulary, every word gets its own embedding vector. Words are first converted to integers; for example, if the word "great" is integer-encoded as 1918, then the row at index 1918 of the table is used as the embedding vector for "great". This is called an **embedding lookup**, and the number of hidden units is the **embedding dimension**. The embedding lookup table is simply a weight matrix, and the embedding layer is a hidden layer. --- Pre-trained Word Embedding. Instead of training embeddings from scratch, we can use embedding vectors that were pre-trained on a large corpus such as Wikipedia.
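As a minimal illustration of the lookup-table view (separate from the GloVe example that follows), PyTorch's `nn.Embedding` is exactly such a table; the sizes below are arbitrary:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10000, embedding_dim=50)  # 10k-word vocab, 50-d vectors

word_ids = torch.tensor([1918, 42])   # integer-encoded words
vectors = embedding(word_ids)         # rows 1918 and 42 of the weight matrix
print(vectors.shape)                  # torch.Size([2, 50])
```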
###Code
import torch
import torchtext
# 현재 위치에 .vector_cache 디렉토리를 생성하여 GloVe 파일을 다운로드한다.
# 이미 다운로드 되어 있으면 다시 다운로드 하지 않고 읽어들인다.
glove = torchtext.vocab.GloVe(cache='data',
name="6B", # trained on Wikipedia 2014 corpus
dim=50) # embedding size = 50
###Output
_____no_output_____
###Markdown
We can inspect the embedding vector of any word in the trained GloVe model.
###Code
glove['cat']
###Output
_____no_output_____
###Markdown
Measuring Distance. We can measure the similarity between word vectors with:* Euclidean distance* Cosine similarity
###Code
# Euclidean distance
x = glove['cat']
y = glove['dog']
torch.norm(y - x)
###Output
_____no_output_____
###Markdown
Using cosine similarity, we can print the words most similar to a word of our choice.$$\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}$$We encode the validation words as vectors $\vec{a}$ and then check their similarity against the words in the embedding table. This shows how well the learned embeddings group similar words together.
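The next cell compares two specific words; to rank the whole vocabulary by cosine similarity instead (the `print_closest_words` helper further below uses Euclidean distance), an illustrative helper relying on the `glove` object loaded above could look like this:

```python
import torch

def closest_by_cosine(word, n=5):
    query = glove[word].unsqueeze(0).expand_as(glove.vectors)
    sims = torch.cosine_similarity(glove.vectors, query, dim=1)
    best = sims.argsort(descending=True)[1:n + 1]   # index 0 is the word itself
    return [(glove.itos[int(i)], float(sims[i])) for i in best]
```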
###Code
x = glove['cat']
y = glove['dog']
torch.cosine_similarity(x.unsqueeze(0), y.unsqueeze(0))
###Output
_____no_output_____
###Markdown
Word Similarity. We can search the entire vocabulary for the words closest to a given point in the embedding space, for example the words closest to another word such as "cat".
###Code
def print_closest_words(v, n=5):
dists = torch.norm(glove.vectors - v, dim=1) # compute distances to all words
lst = sorted(enumerate(dists.numpy()), key=lambda x: x[1]) # sort by distance
for idx, difference in lst[1:n+1]: # take the top n
print(glove.itos[idx], difference)
print_closest_words(glove['cat'])
print_closest_words(glove['nurse'])
###Output
doctor 3.1274529
dentist 3.1306612
nurses 3.26872
pediatrician 3.3212206
counselor 3.3987114
###Markdown
Analogies. Directions in the embedding space are meaningful. The structure of the GloVe vectors lets us verify relations between specific words through simple vector arithmetic:```king − man + woman ≈ queen```
###Code
print_closest_words(glove['king'] - glove['man'] + glove['woman'])
print_closest_words(glove['programmer'] - glove['bad'] + glove['good'])
print_closest_words(glove['programmer'] - glove['good'] + glove['bad'])
###Output
hacker 3.8383653
glitch 4.003873
originator 4.041952
hack 4.047719
serial 4.2250676
|
DAY 001 ~ 100/DAY074_[Programmers] 스택 큐 쇠막대기 (Python).ipynb | ###Markdown
Monday, April 20, 2020. Programmers - Stack/Queue: Iron Bars (쇠막대기) Problem: https://programmers.co.kr/learn/courses/30/lessons/42585 Blog: https://somjang.tistory.com/entry/Programmers-%EC%8A%A4%ED%83%9D%ED%81%90-%EC%87%A0%EB%A7%89%EB%8C%80%EA%B8%B0-Python First attempt
###Code
def solution(arrangement):
sum_num = 0
left = 0
for i in range(len(arrangement)):
if arrangement[i] == '(':
left = left + 1
elif arrangement[i] == ')':
left = left - 1
if arrangement[i-1] == '(':
sum_num = sum_num + left
else:
sum_num = sum_num + 1
return sum_num
###Output
_____no_output_____ |
blends/blend_eda_v7.ipynb | ###Markdown
TPE (Tree-structured Parzen Estimator) and Sequential Least Squares Programming (SLSQP)https://optuna.readthedocs.io/en/stable/reference/generated/optuna.samplers.TPESampler.htmloptuna.samplers.TPESamplerhttps://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html
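As a compact reference for what the long cell below does during weight search, the core problem (non-negative weights that sum to one, minimising the blended log-loss) fits in a few lines. This is a self-contained toy on random data, not the competition pipeline:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(200, 6)).astype(float)        # toy labels
oofs = [np.clip(y_true + 0.4 * rng.normal(size=y_true.shape), 1e-3, 1 - 1e-3)
        for _ in range(3)]                                       # 3 fake model OOF matrices

def blend_logloss(w):
    p = np.clip(sum(wi * o for wi, o in zip(w, oofs)), 1e-15, 1 - 1e-15)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

res = minimize(blend_logloss, x0=[1 / 3] * 3, method='SLSQP',
               bounds=[(0, 1)] * 3,
               constraints={'type': 'eq', 'fun': lambda w: np.sum(w) - 1})
print(res.x, res.fun)
```

The Optuna variant below explores the same space with the TPE sampler instead of a gradient-free constrained optimiser.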
###Code
def run_inference_scripts(submission, weights=None, target_weights=None):
for i, (script, oof_filename, output_filename, weight) in enumerate(model_list):
print(f"Generating submission file from {script} ......")
infer_start = time.time()
!python {model_path}/{script}
infer_elapsed = time.time() - infer_start
print(f"Time spent on inference: {infer_elapsed/60:.2f} minutes.")
model_submit = pd.read_csv(output_filename, engine='c')
print(model_submit.head(5))
print(model_submit.shape)
if target_weights is not None:
for j, target in enumerate(train_classes):
print(f"Blending {script} for {target} with weight: {optimized_target_weights[j][i]} ......")
submission.iloc[:, j+1] += model_submit.iloc[:, j+1] * optimized_target_weights[j][i]
elif weights is None:
print(f"Blending {script} with weight: {weight} ......")
submission.iloc[:, 1:] += weight * model_submit.iloc[:, 1:]
else:
print(f"Blending {script} with weight: {weights[i]} ......")
submission.iloc[:, 1:] += weights[i] * model_submit.iloc[:, 1:]
return submission
total_start = time.time()
if not search_mode and run_submit_script:
if method == "scipy_per_target":
weights_path = glob.glob(f'{model_path}/{study_name}_*.pkl')[0]
print(f"Loading target-wise optimized weights from {weights_path} ......")
optimized_target_weights = load_pickle(weights_path)
# For 206 target weights
submission = run_inference_scripts(
submission, target_weights=optimized_target_weights)
else:
submission = run_inference_scripts(submission)
elif search_mode and method == "CV":
y_true = non_control_group_train_labels[train_classes].values
all_oof = np.zeros(
(len(model_list), non_control_group_train_labels.shape[0], 206))
blend_oof = np.zeros((non_control_group_train_labels.shape[0], 206))
print(all_oof.shape)
for i, (script, oof_filename, output_filename,
weight) in enumerate(model_list):
print(f"Loading OOF from {oof_filename} ......")
oof = np.load(f"{dataset_folder}/{oof_filename}")
if oof.shape[0] == 23814:
oof = oof[non_control_group_rows, :]
all_oof[i, :, :] = oof
blend_oof += oof * weight
oof_loss = mean_logloss(oof, y_true)
print(f"OOF Validation Loss of {script}: {oof_loss:.6f}\n")
blend_oof_loss = mean_logloss(blend_oof, y_true)
print(f"Blend OOF Validation Loss: {blend_oof_loss:.6f}\n")
elif search_mode and method == "optuna":
print("[Optuna]")
## Search Best Blend Weights by Optuna ##
model_oofs = []
for i, (script, oof_filename, output_filename,
weight) in enumerate(model_list):
print(f"Loading OOF from {oof_filename} ......")
oof = np.load(f"{dataset_folder}/{oof_filename}")
if oof.shape[0] == 23814:
oof = oof[non_control_group_rows, :]
oof_loss = mean_logloss(
oof, non_control_group_train_labels[train_classes].values)
print(f"OOF Validation Loss of {script}: {oof_loss:.6f}\n")
model_oofs.append(oof)
def objective(trial):
weights = []
for i in range(len(model_list)):
weights.append(trial.suggest_float(f"w{i}", 0, 1.0))
blend = np.zeros(model_oofs[0].shape)
for i in range(len(model_list)):
blend += weights[i] * model_oofs[i]
blend = np.clip(blend, 0, 1.0)
loss = mean_logloss(
blend, non_control_group_train_labels[train_classes].values)
return loss
pruner = optuna.pruners.MedianPruner(
n_startup_trials=5,
n_warmup_steps=0,
interval_steps=1,
)
sampler = optuna.samplers.TPESampler(seed=rand_seed)
study = optuna.create_study(direction="minimize",
pruner=pruner,
sampler=sampler,
study_name=study_name,
storage=f'sqlite:///{study_name}.db',
load_if_exists=True)
study.optimize(objective,
n_trials=n_trials,
timeout=None,
gc_after_trial=True,
n_jobs=-1)
trial = study.best_trial
if run_submit_script:
optimal_weights = []
for i, (script, oof_filename, output_filename,
_) in enumerate(model_list):
optimal_weights.append(trial.params[f"w{i}"])
submission = run_inference_scripts(submission, weights=optimal_weights)
print("\n[Optuna]")
print("Number of finished trials: {}".format(len(study.trials)))
print("Best trial:")
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
elif search_mode and method == "scipy":
print("[Scipy SLSQP]")
# Optimise Blending Weights with Bonus
# https://www.kaggle.com/gogo827jz/optimise-blending-weights-with-bonus-0/notebook
model_oofs = []
y_true = non_control_group_train_labels[train_classes].values
all_oof = np.zeros(
(len(model_list), non_control_group_train_labels.shape[0], 206))
print(all_oof.shape)
for i, (script, oof_filename, output_filename,
weight) in enumerate(model_list):
print(f"Loading OOF from {oof_filename} ......")
oof = np.load(f"{dataset_folder}/{oof_filename}")
if oof.shape[0] == 23814:
oof = oof[non_control_group_rows, :]
all_oof[i, :, :] = oof
oof_loss = mean_logloss(oof, y_true)
print(f"OOF Validation Loss of {script}: {oof_loss:.6f}\n")
model_oofs.append(oof)
tol = 1e-10
init_guess = [1 / all_oof.shape[0]] * all_oof.shape[0]
bnds = [(0, 1) for _ in range(all_oof.shape[0])]
cons = {
'type': 'eq',
'fun': lambda x: np.sum(x) - 1,
'jac': lambda x: [1] * len(x)
}
print('Inital Blend OOF:', func_numpy_metric(init_guess))
start_time = time.time()
res_scipy = minimize(
fun=func_numpy_metric,
x0=init_guess,
method='SLSQP',
# jac=grad_func_jit, # grad_func
bounds=bnds,
constraints=cons,
tol=tol)
print("\n[Scipy SLSQP]")
print(
f'[{str(datetime.timedelta(seconds = time.time() - start_time))[2:7]}] Optimised Blend OOF:',
res_scipy.fun)
print(f'Optimised Weights: {res_scipy.x}\n')
if run_submit_script:
submission = run_inference_scripts(submission, weights=res_scipy.x)
# Target-wise Weight Optimization #
elif search_mode and method == "scipy_per_target":
print("[Scipy SLSQP]")
# Optimise Blending Weights with Bonus
# https://www.kaggle.com/gogo827jz/optimise-blending-weights-with-bonus-0/notebook
model_oofs = []
y_true = non_control_group_train_labels[train_classes].values
all_oof = np.zeros(
(len(model_list), non_control_group_train_labels.shape[0], 206))
print(all_oof.shape)
for i, (script, oof_filename, output_filename,
weight) in enumerate(model_list):
print(f"Loading OOF from {oof_filename} ......")
oof = np.load(f"{dataset_folder}/{oof_filename}")
if oof.shape[0] == 23814:
oof = oof[non_control_group_rows, :]
all_oof[i, :, :] = oof
oof_loss = mean_logloss(oof, y_true)
print(f"OOF Validation Loss of {script}: {oof_loss:.6f}\n")
model_oofs.append(oof)
print("\n[Scipy SLSQP Per Target]")
optimized_target_weights = []
for i, target in enumerate(train_classes):
tol = 1e-10
init_guess = [1 / all_oof.shape[0]] * all_oof.shape[0]
bnds = [(0, 1) for _ in range(all_oof.shape[0])]
cons = {
'type': 'eq',
'fun': lambda x: np.sum(x) - 1,
'jac': lambda x: [1] * len(x)
}
def func_numpy_metric_targes(weights):
oof_blend = np.tensordot(weights,
all_oof[:, :, i],
axes=((0), (0)))
return log_loss_numpy(oof_blend, y_true[:, i])
start_time = time.time()
res_scipy = minimize(
fun=func_numpy_metric_targes,
x0=init_guess,
method='SLSQP',
# jac=grad_func_jit, # grad_func
bounds=bnds,
constraints=cons,
tol=tol)
print(
f'[{str(datetime.timedelta(seconds = time.time() - start_time))[2:7]}] ' + \
f'Optimised Blend OOF for {target}:', res_scipy.fun)
print(f'Optimised Weights for {target}: {res_scipy.x}\n')
optimized_target_weights.append(res_scipy.x)
blend_targets_oof = np.zeros(
(non_control_group_train_labels.shape[0], 206))
for i, (script, oof_filename, output_filename,
weight) in enumerate(model_list):
print(f"Loading OOF from {oof_filename} ......")
oof = np.load(f"{dataset_folder}/{oof_filename}")
if oof.shape[0] == 23814:
oof = oof[non_control_group_rows, :]
for j in range(206):
blend_targets_oof[:,
j] += oof[:, j] * optimized_target_weights[j][i]
oof_loss = mean_logloss(oof, y_true)
print(f"OOF Validation Loss of {script}: {oof_loss:.6f}\n")
blend_targets_oof_loss = mean_logloss(blend_targets_oof, y_true)
print(
f"Blend Target-Wise OOF Validation Loss: {blend_targets_oof_loss:.6f}\n"
)
# Save optimized weights per target
save_pickle(optimized_target_weights, model_path,
f"{study_name}_{blend_targets_oof_loss}")
if run_submit_script:
# For 206 target weights
submission = run_inference_scripts(
submission, target_weights=optimized_target_weights)
total_elapsed = time.time() - total_start
print(f"Total time spent: {total_elapsed/60:.2f} minutes.")
# # [V7 - without TabNet, Mark's 2heads, 10-folds 2StageNN and SimpleNN]
# [Optuna]
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_2stageNN_ns_oldcv_10folds.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/2stageNN_with_ns_oldcv_10folds.py: 0.015461
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_NN_oldcv_10fold.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/script_simpleNN_oldcv.py: 0.015741
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_improving-mark-s-2-heads-model_0.015660083675738144.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/improving-mark-s-2-heads-model-infer.py: 0.015660
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_deepinsight_efficientnet_lightning_v7_b3_0.01850.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/deepinsight_efficientnet_lightning_v7_b3_infer.py: 0.016016
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_deepinsight_ResNeSt_v1_resnest50_0.014619621213185928.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/deepinsight_resnest_lightning_v1_infer.py: 0.015819
# Loading OOF from ../../Github/kaggle_moa_team/oof/oof_deepinsight_ResNeSt_v2_resnest50_0.01854.npy ......
# OOF Validation Loss of ../../Github/kaggle_moa_team/scripts/deepinsight_resnest_lightning_v2_infer.py: 0.015756
# [Optuna]
# Number of finished trials: 10000
# Best trial:
# Value: 0.015107277135046551
# Params:
# w0: 0.36295047115403034
# w1: 0.00019665601658366993
# w2: 0.10383226438101152
# w3: 0.16907012564901389
# w4: 0.10211240990528128
# w5: 0.26308884808653726
# [Scipy SLSQP]
# [00:48] Optimised Blend OOF: 0.015107004381942337
# Optimised Weights: [0.36727479 0. 0.08130163 0.17091322 0.10863643 0.27187394]
# [V6]
# [Optuna]
# Number of finished trials: 5000
# Best trial:
# Value: 0.015173437622007157
# Params:
# w0: 0.30923325055652684
# w1: 0.09831493504786226
# w2: 0.018966959973949222
# w3: 0.19863369862866234
# w4: 0.0013224625996093413
# w5: 0.3728865483320761
# [Scipy SLSQP]
# [00:36] Optimised Blend OOF: 0.015172005464591968
# Optimised Weights: [3.20472642e-01 9.01191588e-02 1.78893358e-18 2.20448482e-01
# 3.27971157e-18 3.68959717e-01]
# [V5]
# Number of finished trials: 3000
# Best trial:
# Value: 0.015344701181290615
# Params:
# w0: 0.5141433844379889
# w1: 0.11747776562133813
# w2: 0.3668324643717302
# [00:14] Optimised Blend OOF: 0.015344695215068541
# Optimised Weights: [0.51922623 0.11292509 0.36784869]
# [V4]
# [Optuna]
# Number of finished trials: 3000
# Best trial:
# Value: 0.015331901615194453
# Params:
# w0: 0.4505928450756189
# w1: 0.13010257032841785
# w2: 0.06308933354044946
# w3: 0.35639153615958885
#
# [Scipy]
# [00:23] Optimised Blend OOF: 0.015331777381591449
# Optimised Weights: [0.44090106 0.14508641 0.05945655 0.35455598]
# [V3]
# improving-mark-s-2-heads-model-infer
# Number of finished trials: 3000
# Best trial:
# Value: 0.01515466145873492
# Params:
# w0: 0.0002980690037490555
# w1: 0.29771381784976886
# w2: 0.1569191862042946
# w3: 0.18156875605872544
# w4: 0.36371774630338105
# [V3]
# fork-of-2heads-looper-super-puper-markpeng-infer
# Number of finished trials: 3000
# Best trial:
# Value: 0.015170138066049686
# Params:
# w0: 0.00019903389488299251
# w1: 0.3853752127955825
# w2: 0.015968332256452233
# w3: 0.22945916769823432
# w4: 0.3711290150522236
all_oof.shape, blend_oof.shape
###Output
_____no_output_____
###Markdown
Correlation Analysis
###Code
class_counts = train_labels[train_classes].sum().to_frame(
name="count").reset_index().rename(columns={"index": "class"})
class_counts = class_counts.sort_values(by="count",
ascending=False).reset_index(drop=True)
class_counts
class_counts.hist()
# OOF scores per target
target_oof_losses = []
for i, target in enumerate(train_classes):
print(target)
# print(y_true[:, i])
oof_loss = mean_logloss(blend_oof[:, i], y_true[:, i])
target_oof_losses.append(oof_loss)
print(f"Blend OOF Validation Loss of {target}: {oof_loss:.6f}\n")
target_loss_df = pd.DataFrame(data={
"target": train_classes,
"oof_logloss": target_oof_losses
},
columns=["target", "oof_logloss"
]).sort_values(by="oof_logloss",
ascending=False).reset_index(drop=True)
target_loss_df
all_oof.shape
# all_mean_oof = np.zeros((all_oof.shape[0], all_oof.shape[2]))
# for i in range(len(model_list)):
# model_oof = all_oof[i, :, :]
# all_mean_oof[i, :] = np.mean(model_oof, axis=0)
# model_names = [m[2] for m in model_list]
# corr_df = pd.DataFrame(data=all_mean_oof.T, columns=model_names)
# corr_df
# print(np.min(corr_df.corr(method='pearson')))
# corr_df.corr(method='pearson')
# plt.figure(figsize=(16, 6))
# heatmap = sns.heatmap(corr_df.corr(method='pearson'),
# vmin=0.993,
# vmax=1,
# fmt='.6g',
# cmap='YlOrRd',
# annot=True)
# # center=0.7)
# # annot=True)
# # cmap='jet')
# # cmap='BrBG')
# heatmap.set_title('Pearson Correlation Heatmap of Blend OOF Predictions',
# fontdict={'fontsize': 18},
# pad=12)
# print(np.min(corr_df.corr(method='spearman')))
# corr_df.corr(method='spearman')
# plt.figure(figsize=(16, 6))
# heatmap = sns.heatmap(corr_df.corr(method='spearman'),
# vmin=0.99,
# vmax=1,
# fmt='.6g',
# cmap='YlOrRd',
# annot=True)
# # center=0.7)
# # annot=True)
# # cmap='jet')
# # cmap='BrBG')
# heatmap.set_title('Spearman Correlation Heatmap of Blend OOF Predictions',
# fontdict={'fontsize': 18},
# pad=12)
# print(np.min(corr_df.corr(method='kendall')))
# corr_df.corr(method='kendall')
# plt.figure(figsize=(16, 6))
# heatmap = sns.heatmap(corr_df.corr(method='kendall'),
# vmin=0.92,
# vmax=0.96,
# fmt='.6g',
# cmap='YlOrRd',
# annot=True)
# # center=0.7)
# # annot=True)
# # cmap='jet')
# # cmap='BrBG')
# heatmap.set_title('Kendall Correlation Heatmap of Blend OOF Predictions',
# fontdict={'fontsize': 18},
# pad=12)
###Output
_____no_output_____
###Markdown
Mean Correlation for Targets
###Code
# def get_corr(predictions):
# mat = np.zeros((len(predictions), len(predictions)))
# for i in range(len(predictions)):
# for j in range(len(predictions)):
# print(
# f'Mean Correlation (column-wise) [Model {i}-{j}]: {predictions[i][target_columns].corrwith(predictions[j][target_columns]).mean()}'
# )
# mat[i, j] = predictions[i][target_columns].corrwith(
# predictions[j][target_columns]).mean()
# return mat
# predictions = [sub1, sub2, sub3, sub4, sub5, sub6, sub7]
# mat = get_corr(predictions)
# mat_df = pd.DataFrame(
# mat, columns=[f'Model-{i}' for i in range(1,
# len(predictions) + 1)])
# mat_df.columns = [
# '2stageNN', '2heads Pytorch', 'simpleNNnewcv', 'efficientnet v7',
# 'resnest v2', '2stage Tabnet', 'simpleNNoldcv'
# ]
# mat_df.index = [
# '2stageNN', '2heads Pytorch', 'simpleNNnewcv', 'efficientnet v7',
# 'resnest v2', '2stage Tabnet', 'simpleNNoldcv'
# ]
# plt.figure(figsize=(14, 14))
# sns.heatmap(mat_df, annot=True)
# model_names = [m[2] for m in model_list]
# mean_oof_corr = np.zeros((all_oof.shape[0], all_oof.shape[0]))
# for i in range(206):
# target_oof = np.zeros((all_oof.shape[0], all_oof.shape[1]))
# for j in range(len(model_list)):
# target_oof[j, :] = all_oof[j, :, i]
# corr_df = pd.DataFrame(data=target_oof.T, columns=model_names)
# # target_corr = corr_df.corr(method='pearson')
# target_corr = corr_df.corr(method='spearman')
# # target_corr = corr_df.corr(method='kendall')
# mean_oof_corr += target_corr / 206
###Output
_____no_output_____
###Markdown
Pearson
###Code
method = 'pearson'
mean_oof_corr = np.zeros((all_oof.shape[0], all_oof.shape[0]))
for i in range(len(model_list)):
i_df = pd.DataFrame(all_oof[i])
for j in range(len(model_list)):
j_df = pd.DataFrame(all_oof[j])
print(
f'Mean Correlation (column-wise) [Model {i}-{j}]: {i_df.corrwith(j_df, method=method).mean()}'
)
mean_oof_corr[i, j] = i_df.corrwith(j_df, method=method).mean()
corr_df = pd.DataFrame(
mean_oof_corr,
columns=[f'Model-{i}' for i in range(1,
len(model_list) + 1)])
corr_df.columns = model_names
corr_df.index = model_names
corr_df
print(np.min(corr_df))
plt.figure(figsize=(12, 10))
heatmap = sns.heatmap(
corr_df,
vmin=0.5,
vmax=0.8,
# center=0.65,
fmt='.6g',
cmap='RdYlGn_r',
# cmap='YlOrRd',
# cmap='jet',
annot=True)
# center=0.7)
# annot=True)
# cmap='jet')
# cmap='BrBG')
heatmap.set_title(
'Mean Correlation Heatmap of Best CV OOF Predictions (Pearson)',
fontdict={'fontsize': 18},
pad=12)
# plt.figure(figsize=(16, 6))
# heatmap = sns.heatmap(mean_oof_corr,
# vmin=0,
# vmax=1,
# fmt='.6g',
# cmap='YlOrRd',
# annot=True)
# # center=0.7)
# # annot=True)
# # cmap='jet')
# # cmap='BrBG')
# heatmap.set_title(
# 'Mean Pearson Correlation Heatmap of Blend OOF Predictions for Targets',
# fontdict={'fontsize': 18},
# pad=12)
###Output
_____no_output_____
###Markdown
Spearman
###Code
method = 'spearman'
mean_oof_corr = np.zeros((all_oof.shape[0], all_oof.shape[0]))
for i in range(len(model_list)):
i_df = pd.DataFrame(all_oof[i])
for j in range(len(model_list)):
j_df = pd.DataFrame(all_oof[j])
print(
f'Mean Correlation (column-wise) [Model {i}-{j}]: {i_df.corrwith(j_df, method=method).mean()}'
)
mean_oof_corr[i, j] = i_df.corrwith(j_df, method=method).mean()
corr_df = pd.DataFrame(
mean_oof_corr,
columns=[f'Model-{i}' for i in range(1,
len(model_list) + 1)])
corr_df.columns = model_names
corr_df.index = model_names
corr_df
print(np.min(corr_df))
plt.figure(figsize=(12, 10))
heatmap = sns.heatmap(mean_oof_corr,
vmin=0.5,
vmax=0.8,
# center=0.6,
fmt='.6g',
cmap='RdYlGn_r',
annot=True)
# center=0.7)
# annot=True)
# cmap='jet')
# cmap='BrBG')
heatmap.set_title(
'Mean Correlation Heatmap of Best CV OOF Predictions (Spearman)',
fontdict={'fontsize': 18},
pad=12)
###Output
_____no_output_____
###Markdown
Kendall
###Code
method = 'kendall'
mean_oof_corr = np.zeros((all_oof.shape[0], all_oof.shape[0]))
for i in range(len(model_list)):
i_df = pd.DataFrame(all_oof[i])
for j in range(len(model_list)):
j_df = pd.DataFrame(all_oof[j])
print(
f'Mean Correlation (column-wise) [Model {i}-{j}]: {i_df.corrwith(j_df, method=method).mean()}'
)
mean_oof_corr[i, j] = i_df.corrwith(j_df, method=method).mean()
corr_df = pd.DataFrame(
mean_oof_corr,
columns=[f'Model-{i}' for i in range(1,
len(model_list) + 1)])
corr_df.columns = model_names
corr_df.index = model_names
corr_df
print(np.min(corr_df))
plt.figure(figsize=(12, 10))
heatmap = sns.heatmap(mean_oof_corr,
vmin=0.3,
vmax=0.8,
# center=0.6,
fmt='.6g',
cmap='RdYlGn_r',
annot=True)
# center=0.7)
# annot=True)
# cmap='jet')
# cmap='BrBG')
heatmap.set_title(
'Mean Correlation Heatmap of Best CV OOF Predictions (Kendall)',
fontdict={'fontsize': 18},
pad=12)
if run_submit_script:
print(submission.shape)
print(submission)
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____
###Markdown
EOF
###Code
if kernel_mode:
!rm ./*.py
!ls -la
###Output
_____no_output_____ |
src/hw6.ipynb | ###Markdown
LFD Homework 6. Sixth week homework for the "Learning from Data" course offered by [Caltech on edX](https://courses.edx.org/courses/course-v1:CaltechX+CS1156x+3T2017). Note that this notebook does not contain *all* solutions; it showcases solutions to those problems that require programming / simulation (and omits pen and paper problems).
###Code
import numpy as np
import matplotlib.pyplot as plt
import requests
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Regularization and Weight Decay (P2 - P6). The data sets used in these problems can be found at * http://work.caltech.edu/data/in.dta * http://work.caltech.edu/data/out.dta where `in.dta` corresponds to the training set and `out.dta` to the test set. In these files, each line contains a training example $(x_1, x_2, y)$ so that $\mathcal{X} = \mathbb{R}^2$ and $\mathcal{Y} = \{-1, 1\}$. The first step is to load that data:
###Code
def load_data():
'''Loads the data directly from the caltech website.'''
r = requests.get('http://work.caltech.edu/data/in.dta'); r.raise_for_status()
data = np.loadtxt(r.content.splitlines())
X, Y = np.split(data,[2], axis=-1)
r = requests.get('http://work.caltech.edu/data/out.dta'); r.raise_for_status()
data = np.loadtxt(r.content.splitlines())
X_t, Y_t = np.split(data,[2], axis=-1)
return X, Y, X_t, Y_t
X, Y, X_t, Y_t = load_data()
###Output
_____no_output_____
###Markdown
We are going to apply linear regression with a non-linear transformation for classification. The non-linear transformation is given by$$\Phi(x_1, x_2) = (1, x_1, x_2, x_1^2, x_2^2, x_1x_2, |x_1 - x_2|, |x_1 + x_2|)$$Recall that the classification error is defined as the fraction of misclassified points. P2Run linear regression on the training set after performing the non-linear transformation. What values are closest (in Euclidean distance) to the in-sample and out-of-sample classification errors, respectively?Here we can re-use the implementation of the linear regression classifier from `HW2`:
###Code
def Phi(X):
x1, x2 = np.hsplit(X, 2)
Z = np.concatenate((X, x1**2, x2**2, x1 * x2, np.abs(x1 - x2), np.abs(x1 + x2)), axis=1)
return Z
class LRBClassifier:
''' Simple linear regression based binary classifier.'''
def __init__(self, X, Y, lambd=0, add_intercept=True):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
self.w = np.linalg.pinv(X.T @ X + lambd * np.eye(d+1)) @ (X.T) @ Y
else:
self.w = np.linalg.pinv(X.T @ X + lambd * np.eye(d)) @ (X.T) @ Y
self.E_in = np.sum(self(X, add_intercept=False) != Y)/N
def __call__(self, X, add_intercept=True):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
return np.sign(X @ self.w).reshape(-1,1)
def E(self, X, Y, add_intercept=True):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
E = np.sum(Y != self(X, add_intercept=False)) / float(N)
return E
###Output
_____no_output_____
###Markdown
Then, we can train this classifier on the training set and evaluate its out of sample error on the test set:
###Code
Z, Z_t = Phi(X), Phi(X_t)
g = LRBClassifier(Z, Y)
g.E_out = g.E(Z_t, Y_t)
print('Errors\n------\nE_in\t= {:.3f}\nE_out\t= {:.3f}'.format(g.E_in, g.E_out))
###Output
Errors
------
E_in = 0.029
E_out = 0.084
###Markdown
The closest (Euclidean distance) choice is then:
###Code
def choose_closest(choices, point, print_result=True):
'''Choose the closest point to a given point (euclidean distance).'''
distances = np.sqrt(np.sum((choices - point)**2, axis=0))
idx = np.argmin(distances)
if print_result:
print('closest choice: ({:.3f}, {:.3f})\t--> ({})'.format(*choices[:, idx], 'abcde'[idx]))
return choices[:, idx]
choose_closest (
np.array([
[.03, .03, .04, .04, .05],
[.08, .10, .09, .11, .10]
]),
np.array([[g.E_in], [g.E_out]])
);
###Output
closest choice: (0.030, 0.080) --> (a)
###Markdown
P3Now we add weight decay to Linear Regression using $\lambda = 10^k$ for $k = -3$:
###Code
g = LRBClassifier(Z, Y, lambd=1e-3)
g.E_out = g.E(Z_t, Y_t)
print('Errors\n------\nE_in\t= {:.3f}\nE_out\t= {:.3f}'.format(g.E_in, g.E_out))
###Output
Errors
------
E_in = 0.029
E_out = 0.080
###Markdown
The closest (Euclidean distance) choice is then:
###Code
choose_closest (
np.array([
[.01, .02, .02, .03, .03],
[.02, .04, .06, .08, .10]
]),
np.array([[g.E_in], [g.E_out]])
);
###Output
closest choice: (0.030, 0.080) --> (d)
###Markdown
P4Now we add weight decay to Linear Regression using $\lambda = 10^k$ for $k = 3$:
###Code
g = LRBClassifier(Z, Y, lambd=1e3)
g.E_out = g.E(Z_t, Y_t)
print('Errors\n------\nE_in\t= {:.3f}\nE_out\t= {:.3f}'.format(g.E_in, g.E_out))
###Output
Errors
------
E_in = 0.371
E_out = 0.436
###Markdown
The closest (Euclidean distance) choice is then:
###Code
choose_closest (
np.array([
[.2, .2, .3, .3, .4],
[.2, .3, .3, .4, .4]
]),
np.array([[g.E_in], [g.E_out]])
);
###Output
closest choice: (0.400, 0.400) --> (e)
###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
import matplotlib.pyplot as plt
h = 0.1
def BM(n):
tt = []
ttt = []
i = 0
sum = 0
while i<n:
tt.append(sum)
AA = sum/np.sqrt(2*(h*i)*np.log(np.log(h*i)))
ttt.append(AA)
sum = sum+np.sqrt(h)*np.random.randn(1)[0]
i = i+1
return([tt,ttt])
# NOTE: the helper `mc_wi_path` was not defined in the original cell; the generator below is an
# assumed reconstruction returning [None, path_1, ..., path_num] of simulated Wiener paths.
def mc_wi_path(num, sta, en, h):
    n = int((en - sta) / h)
    paths = [None]
    for _ in range(num):
        steps = np.sqrt(h) * np.random.randn(n)
        paths.append(np.concatenate(([0.0], np.cumsum(steps))))
    return paths
def pic_wi_path(num, sta, en, h):
    path = mc_wi_path(num, sta, en, h)
    x0 = sta
    n = int((en - sta) / h)
    x_ls = [x0, ]
    for i in range(n):
        x0 += h
        x_ls.append(x0)
    for i in range(num):
        plt.plot(x_ls, path[i + 1])
pic_wi_path(10, 100, 110, 0.1)
import numpy as np
import math
import matplotlib.pyplot as plt
import numpy as np
import math
import scipy.stats as ss
import matplotlib.pyplot as plt
class VanillaOption:
def __init__(
self,
otype = 1, # 1: 'call'
# -1: 'put'
strike = 110.,
maturity = 1.,
market_price = 10.):
self.otype = otype
self.strike = strike
self.maturity = maturity
self.market_price = market_price #this will be used for calibration
def explain_yourself(self):
if self.otype==1:
print('I am call')
elif self.otype==-1:
print('I am put')
def payoff(self, s): #s: excercise price
otype = self.otype
k = self.strike
maturity = self.maturity
return max([0, (s - k)*otype])
class Gbm:
def __init__(self,
init_state = 100.,
drift_ratio = .0475,
vol_ratio = .2,
nstep=5,
N = 1000
):
self.init_state = init_state
self.drift_ratio = drift_ratio
self.vol_ratio = vol_ratio
self.nstep = nstep
self.N = N
def stk_price(self,option):
n = self.n_step
T = option.maturity
s0 = self.init_state
sigma = self.vol_ratio
r = self.drift_ratio
w_s=0
h=T/n
stk_ls=np.zeros(1)
stk_ls[0]=s0
for i in range(n):
b=np.random.normal(0,1)
w_s+=math.sqrt(h)*b
s=s0*np.exp((r-0.5*(sigma**2))*h*(i+1)+sigma*w_s)
stk_ls=np.append(stk_ls,s)
return stk_ls
def asianprice(self,VanillaOption):
r = self.drift_ratio
sigma = self.vol_ratio
n = self.nstep
N = self.N
s0 = self.init_state
K = VanillaOption.strike
T = VanillaOption.maturity
Optiontype = VanillaOption.otype
W=[]
X=[]
sum = 0
sum2 = 0
CT = 0
AA = 0
i=0
while i<N:
h = T/n
W.append(sum)
AA=s0*np.exp(h*i*(r-0.5*np.square(sigma))+sigma*W[i])
sum2 = sum2+AA
X.append(AA)
sum = sum+np.sqrt(h)*np.random.randn(1)[0]
ave = sum2/N
CT = CT+np.exp(-r*h*i*np.max([ave-K, 0]))
i=i+1
return(CT/N)
gbm1 = Gbm()
option1 = VanillaOption()
gbm1.asianprice(option1)
###Output
_____no_output_____
###Markdown
P5What value among the given choices achieves the smallest out-of-sample classification error?
###Code
def compute_E_out_for_k(choices):
E_out = np.array([(lambda g: g.E(Z_t, Y_t))(LRBClassifier(Z, Y, lambd=10.**k)) for k in choices])
return E_out
choices = np.array([2, 1, 0, -1, -2])
E_out = compute_E_out_for_k(choices)
idx = np.argmin(E_out)
print('closest choice: k={}\t--> ({})'.format(choices[idx], 'abcde'[idx]))
###Output
closest choice: k=-1 --> (d)
###Markdown
P6What value is closest to the minimum out of sample error achieved by varying $k$ (limited to integer values)?
###Code
k = np.arange(-10, 10)
E_out = compute_E_out_for_k(k)
E_out_min = np.min(E_out)
k_min = k[np.argmin(E_out)]
plt.plot(k, E_out)
plt.plot(k_min, E_out_min, 'ro')
plt.text(k_min + 1, E_out_min, '$k_{\\min} = ' + str(k_min) + '$')
plt.xticks(k)
plt.grid(alpha=.3)
print('Results\n-------\nk_min\t= {}\nE_out\t= {:.3f}'.format(k_min, E_out_min))
###Output
_____no_output_____
###Markdown
Neural Networks (P9 - P10)While the compuation/exploration for these problems is better done on paper, the following method helps in counting connections (according to the definition given in the assignment) and thus exploring solutions. Let $d_l$ denote the number of nodes in layer $l \in \{0, \ldots, L-1\}$ and $d = (d_0, \ldots, d_{L-1}) \in \mathbb{N}^L$ the architecture of the network (to make the notation match, we start indexing the vector components from $0, \ldots, L-1$).First, note that bias terms are counted as nodes here. The networks are fully connected, each unit of the previous layer (including its bias) has a connection to each unit of the current layer (excluding its bias). So in each layer, we have $d_{l-1}(d_l - 1)$ incoming conncetions. The final hidden layer is connceted to the single output node, so we have $d_{L-1}$ incoming connections there. This gives us$$c(d) = d_{L-1} + \sum_{l=1}^{L-1}{d_{l-1}(d_l - 1)}$$connections in the network. Now the constraint that we are given is that there are $36$ hidden nodes, which means that$$\sum_{l=1}^{L-1}d_l = 36$$Using this constraint and the fact that $d_0 = 10$, we can further simplify $c$ to obtain$$c(d) = 2d_{L-1} - 46 + \sum_{l=1}^{L-1}{d_{l-1}d_l}$$So the number of connections in a network with architecture $d$ is:
###Code
c = lambda d: 2*d[-1] - 46 + np.sum(d[:-1] * d[1:])
###Output
_____no_output_____
###Markdown
And the solution architectures for P9 and P10 respectively are:
###Code
arch_min = np.array([10] + 18*[2] + [1])
arch_max = np.array([10,22,14,1])
architectures = arch_min, arch_max
print('Solution\n--------\narch_min: {}\narch_max: {}\n--------\nc_min\t= {}\nc_max\t= {}'.format(
*architectures,
*[c(arch[:-1]) for arch in architectures]
))
###Output
Solution
--------
arch_min: [10 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1]
arch_max: [10 22 14 1]
--------
c_min = 46
c_max = 510
|
_notebooks/2020-02-22-Linear-Regresyon-Tam-Uygulama.ipynb | ###Markdown
Advanced Linear Regression Application (✗)> Applying linear regression to a worked example.- toc: true - badges: true- comments: true- categories: [jupyter]- image: images/chart-preview.png The following additional libraries are needed to run this notebook. Note that running on Colab is experimental, please report a Github issue if you have any problem.
###Code
!pip install -U mxnet-cu101mkl==1.6.0 # updating mxnet to at least v1.6
!pip install d2l==0.13.2 -f https://d2l.ai/whl.html # installing d2l
###Output
_____no_output_____
###Markdown
Concise Implementation of Linear Regression:label:`sec_linear_gluon`Broad and intense interest in deep learning for the past several yearshas inspired both companies, academics, and hobbyiststo develop a variety of mature open source frameworksfor automating the repetitive work of implementinggradient-based learning algorithms.In the previous section, we relied only on(i) tensors for data storage and linear algebra;and (ii) auto differentiation for calculating derivatives.In practice, because data iterators, loss functions, optimizers,and neural network layers (and some whole architectures)are so common, modern libraries implement these components for us as well.In this section, we will show you how to implementthe linear regression model from :numref:`sec_linear_scratch`concisely by using framework's high-level APIs. Generating the DatasetTo start, we will generate the same dataset as in the previous section.
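The call below relies on `d2l.synthetic_data`; roughly speaking, it draws features from a standard normal distribution and builds labels as a noisy linear function of them. A simplified sketch of that behaviour (not the library source) would be:

```python
def synthetic_data_sketch(w, b, num_examples):
    """Generate y = Xw + b + Gaussian noise (rough equivalent of d2l.synthetic_data)."""
    X = np.random.normal(0, 1, (num_examples, len(w)))
    y = np.dot(X, w) + b + np.random.normal(0, 0.01, num_examples)
    return X, y.reshape(-1, 1)
```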
###Code
from d2l import mxnet as d2l
from mxnet import autograd, gluon, np, npx
npx.set_np()
true_w = np.array([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)
###Output
_____no_output_____
###Markdown
Reading the DatasetRather than rolling our own iterator,we can call upon the `data` module to read data.The first step will be to instantiate an `ArrayDataset`.This object's constructor takes one or more tensors as arguments.Here, we pass in `features` and `labels` as arguments.Next, we will use the `ArrayDataset` to instantiate a `DataLoader`,which also requires that we specify a `batch_size`and specify a Boolean value `shuffle` indicating whether or notwe want the `DataLoader` to shuffle the dataon each epoch (pass through the dataset).
###Code
def load_array(data_arrays, batch_size, is_train=True): #@save
"""Construct a Gluon data loader."""
dataset = gluon.data.ArrayDataset(*data_arrays)
return gluon.data.DataLoader(dataset, batch_size, shuffle=is_train)
batch_size = 10
data_iter = load_array((features, labels), batch_size)
###Output
_____no_output_____
###Markdown
Now we can use `data_iter` in much the same way as we calledthe `data_iter` function in the previous section.To verify that it is working, we can read and printthe first minibatch of instances. Comparing to :numref:`sec_linear_scratch`, here we use `iter` to construct an Python iterator and then use `next` to obtain the first item from the iterator.
###Code
next(iter(data_iter))
###Output
_____no_output_____
###Markdown
Defining the ModelWhen we implemented linear regression from scratch(in :numref:`sec_linear_scratch`),we defined our model parameters explicitlyand coded up the calculations to produce outputusing basic linear algebra operations.You *should* know how to do this.But once your models get more complex,and once you have to do this nearly every day,you will be glad for the assistance.The situation is similar to coding up your own blog from scratch.Doing it once or twice is rewarding and instructive,but you would be a lousy web developerif every time you needed a blog you spent a monthreinventing the wheel.For standard operations, we can use the framework's predefined layers,which allow us to focus especiallyon the layers used to construct the modelrather than having to focus on the implementation.To define a linear model, we first import the `nn` module,which defines a large number of neural network layers(note that "nn" is an abbreviation for neural networks).We will first define a model variable `net`,which will refer to an instance of the `Sequential` class.The `Sequential` class defines a containerfor several layers that will be chained together.Given input data, a `Sequential` passes it throughthe first layer, in turn passing the outputas the second layer's input and so forth.In the following example, our model consists of only one layer,so we do not really need `Sequential`.But since nearly all of our future modelswill involve multiple layers,we will use it anyway just to familiarize youwith the most standard workflow.Recall the architecture of a single-layer network as shown in :numref:`fig_singleneuron`.The layer is said to be *fully-connected*because each of its inputs are connected to each of its outputsby means of a matrix-vector multiplication.:label:`fig_singleneuron`
###Code
from mxnet.gluon import nn
net = nn.Sequential()
net.add(nn.Dense(1))
###Output
_____no_output_____
###Markdown
In Gluon, the fully-connected layer is defined in the `Dense` class.Since we only want to generate a single scalar output,we set that number to $1$.It is worth noting that, for convenience,Gluon does not require us to specifythe input shape for each layer.So here, we do not need to tell Gluonhow many inputs go into this linear layer.When we first try to pass data through our model,e.g., when we execute `net(X)` later,Gluon will automatically infer the number of inputs to each layer.We will describe how this works in more detailin the chapter "Deep Learning Computation". Initializing Model ParametersBefore using `net`, we need to initialize the model parameters,such as the weights and biases in the linear regression model. We will import the `initializer` module from MXNet.This module provides various methods for model parameter initialization.Gluon makes `init` available as a shortcut (abbreviation)to access the `initializer` package.By calling `init.Normal(sigma=0.01)`,we specify that each *weight* parametershould be randomly sampled from a normal distributionwith mean $0$ and standard deviation $0.01$.The *bias* parameter will be initialized to zero by default.Both the weight vector and bias will have attached gradients.
###Code
from mxnet import init
net.initialize(init.Normal(sigma=0.01))
###Output
_____no_output_____
###Markdown
The code above may look straightforward but you should notethat something strange is happening here.We are initializing parameters for a networkeven though Gluon does not yet knowhow many dimensions the input will have!It might be $2$ as in our example or it might be $2000$.Gluon lets us get away with this because behind the scenes,the initialization is actually *deferred*.The real initialization will take place onlywhen we for the first time attempt to pass data through the network.Just be careful to remember that since the parametershave not been initialized yet,we cannot access or manipulate them. Defining the Loss Function In Gluon, the `loss` module defines various loss functions.We will use the imported module `loss` with the pseudonym `gloss`to avoid confusing it for the variableholding our chosen loss function.In this example, we will use the Gluonimplementation of squared loss (`L2Loss`).
###Code
from mxnet.gluon import loss as gloss
loss = gloss.L2Loss()
###Output
_____no_output_____
###Markdown
Defining the Optimization Algorithm Minibatch SGD and related variantsare standard tools for optimizing neural networksand thus Gluon supports SGD alongside a number ofvariations on this algorithm through its `Trainer` class.When we instantiate the `Trainer`,we will specify the parameters to optimize over(obtainable from our net via `net.collect_params()`),the optimization algorithm we wish to use (`sgd`),and a dictionary of hyper-parametersrequired by our optimization algorithm.SGD just requires that we set the value `learning_rate`,(here we set it to 0.03).
###Code
from mxnet import gluon
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})
###Output
_____no_output_____
###Markdown
TrainingYou might have noticed that expressing our model through Gluonrequires comparatively few lines of code.We did not have to individually allocate parameters,define our loss function, or implement stochastic gradient descent.Once we start working with much more complex models,Gluon's advantages will grow considerably.However, once we have all the basic pieces in place,the training loop itself is strikingly similarto what we did when implementing everything from scratch.To refresh your memory: for some number of epochs,we will make a complete pass over the dataset (train_data),iteratively grabbing one minibatch of inputsand the corresponding ground-truth labels.For each minibatch, we go through the following ritual:* Generate predictions by calling `net(X)` and calculate the loss `l` (the forward pass).* Calculate gradients by calling `l.backward()` (the backward pass).* Update the model parameters by invoking our SGD optimizer (note that `trainer` already knows which parameters to optimize over, so we just need to pass in the minibatch size.For good measure, we compute the loss after each epoch and print it to monitor progress.
###Code
num_epochs = 3
for epoch in range(1, num_epochs + 1):
for X, y in data_iter:
with autograd.record():
l = loss(net(X), y)
l.backward()
trainer.step(batch_size)
l = loss(net(features), labels)
print('epoch %d, loss: %f' % (epoch, l.mean().asnumpy()))
###Output
epoch 1, loss: 0.025046
###Markdown
Below, we compare the model parameters learned by training on finite dataand the actual parameters that generated our dataset.To access parameters with Gluon,we first access the layer that we need from `net`and then access that layer's weight (`weight`) and bias (`bias`).To access each parameter's values as a tensor,we invoke its `data` method.As in our from-scratch implementation,note that our estimated parameters areclose to their ground truth counterparts.
###Code
w = net[0].weight.data()
print('Error in estimating w', true_w.reshape(w.shape) - w)
b = net[0].bias.data()
print('Error in estimating b', true_b - b)
###Output
Error in estimating w [[ 0.00041139 -0.00037718]]
Error in estimating b [0.00084257]
|
C1 Natural Language Processing with Classification/W1/Labs/3 Visualizing tweets and Logistic Regression models.ipynb | ###Markdown
Visualizing tweets and the Logistic Regression model**Objectives:** Visualize and interpret the logistic regression model**Steps:*** Plot tweets in a scatter plot using their positive and negative sums.* Plot the output of the logistic regression model in the same plot as a solid line Import the required librariesWe will be using [*NLTK*](http://www.nltk.org/howto/twitter.html), an opensource NLP library, for collecting, handling, and processing Twitter data. In this lab, we will use the example dataset that comes alongside with NLTK. This dataset has been manually annotated and serves to establish baselines for models quickly. So, to start, let's import the required libraries.
###Code
import nltk # NLP toolbox
from os import getcwd
import pandas as pd # Library for Dataframes
from nltk.corpus import twitter_samples
import matplotlib.pyplot as plt # Library for visualization
import numpy as np # Library for math functions
from utils import process_tweet, build_freqs # Our functions for NLP
nltk.download('twitter_samples')
###Output
[nltk_data] Downloading package twitter_samples to
[nltk_data] /home/jovyan/nltk_data...
[nltk_data] Unzipping corpora/twitter_samples.zip.
###Markdown
Load the NLTK sample datasetTo complete this lab, you need the sample dataset of the previous lab. Here, we assume the files are already available, and we only need to load into Python lists.
###Code
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
tweets = all_positive_tweets + all_negative_tweets ## Concatenate the lists.
labels = np.append(np.ones((len(all_positive_tweets),1)), np.zeros((len(all_negative_tweets),1)), axis = 0)
# split the data into two pieces, one for training and one for testing (validation set)
train_pos = all_positive_tweets[:4000]
train_neg = all_negative_tweets[:4000]
train_x = train_pos + train_neg
print("Number of tweets: ", len(train_x))
###Output
Number of tweets: 8000
###Markdown
Load the extracted featuresPart of this week's assignment is the creation of the numerical features needed for the Logistic regression model. In order not to interfere with it, we have previously calculated and stored these features in a CSV file for the entire training set.So, please load these features created for the tweets sample.
###Code
data = pd.read_csv('./data/logistic_features.csv'); # Load a 3 columns csv file using pandas function
data.head(10) # Print the first 10 data entries
###Output
_____no_output_____
###Markdown
Now let us get rid of the data frame to keep only Numpy arrays.
###Code
# Each feature is labeled as bias, positive and negative
X = data[['bias', 'positive', 'negative']].values # Get only the numerical values of the dataframe
Y = data['sentiment'].values; # Put in Y the corresponding labels or sentiments
print(X.shape) # Print the shape of the X part
print(X) # Print some rows of X
###Output
(8000, 3)
[[1.000e+00 3.020e+03 6.100e+01]
[1.000e+00 3.573e+03 4.440e+02]
[1.000e+00 3.005e+03 1.150e+02]
...
[1.000e+00 1.440e+02 7.830e+02]
[1.000e+00 2.050e+02 3.890e+03]
[1.000e+00 1.890e+02 3.974e+03]]
###Markdown
Load a pretrained Logistic Regression modelIn the same way, as part of this week's assignment, a Logistic regression model must be trained. The next cell contains the resulting model from such training. Notice that a list of 3 numeric values represents the whole model, that we have called _theta_ $\theta$.
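For reference, turning the `theta` defined in the next cell into a prediction is just a dot product followed by a sigmoid; this is a hypothetical helper for illustration, not part of the assignment's utils:

```python
def predict_sentiment(x, theta):
    """x = [bias, positive_sum, negative_sum]; returns P(positive sentiment)."""
    z = np.dot(x, theta)
    return 1.0 / (1.0 + np.exp(-z))
```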
###Code
theta = [6.03518871e-08, 5.38184972e-04, -5.58300168e-04]
###Output
_____no_output_____
###Markdown
Plot the samples in a scatter plotThe vector theta represents a plane that split our feature space into two parts. Samples located over that plane are considered positive, and samples located under that plane are considered negative. Remember that we have a 3D feature space, i.e., each tweet is represented as a vector comprised of three values: `[bias, positive_sum, negative_sum]`, always having `bias = 1`. If we ignore the bias term, we can plot each tweet in a cartesian plane, using `positive_sum` and `negative_sum`. In the cell below, we do precisely this. Additionally, we color each tweet, depending on its class. Positive tweets will be green and negative tweets will be red.
###Code
# Plot the samples using columns 1 and 2 of the matrix
fig, ax = plt.subplots(figsize = (8, 8))
colors = ['red', 'green']
# Color based on the sentiment Y
ax.scatter(X[:,1], X[:,2], c=[colors[int(k)] for k in Y], s = 0.1) # Plot a dot for each pair of words
plt.xlabel("Positive")
plt.ylabel("Negative")
###Output
_____no_output_____
###Markdown
From the plot, it is evident that the features that we have chosen to represent tweets as numerical vectors allow an almost perfect separation between positive and negative tweets. So you can expect a very high accuracy for this model! Plot the model alongside the dataWe will draw a gray line to show the cutoff between the positive and negative regions. In other words, the gray line marks the line where $$ z = \theta * x = 0.$$To draw this line, we have to solve the above equation in terms of one of the independent variables.$$ z = \theta * x = 0$$$$ x = [1, pos, neg] $$$$ z(\theta, x) = \theta_0+ \theta_1 * pos + \theta_2 * neg = 0 $$$$ neg = (-\theta_0 - \theta_1 * pos) / \theta_2 $$The red and green lines that point in the direction of the corresponding sentiment are calculated using a perpendicular line to the separation line calculated in the previous equations (neg function). It must point in the same direction as the derivative of the Logit function, but the magnitude may differ. It is only for a visual representation of the model. $$direction = pos * \theta_2 / \theta_1$$
###Code
# Equation for the separation plane
# It gives the value on the negative axis as a function of a positive value
# f(pos, neg, W) = w0 + w1 * pos + w2 * neg = 0
# s(pos, W) = (-w0 - w1 * pos) / w2
def neg(theta, pos):
return (-theta[0] - pos * theta[1]) / theta[2]
# Equation for the direction of the sentiments change
# We don't care about the magnitude of the change. We are only interested
# in the direction. So this direction is just a perpendicular function to the
# separation plane
# df(pos, W) = pos * w2 / w1
def direction(theta, pos):
return pos * theta[2] / theta[1]
###Output
_____no_output_____
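###Markdown
As a quick numeric check of these two helpers (a sketch, assuming `neg`, `direction`, and `theta` from the cells above), we can print the boundary value and the direction offset for a few positive counts:
```python
for pos in [0, 2000, 6000, 10000]:
    print('pos = {:6d} -> boundary neg(pos) = {:9.1f}, direction(pos) = {:9.1f}'
          .format(pos, neg(theta, pos), direction(theta, pos)))
```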
###Markdown
The green line in the chart points in the direction where z > 0 and the red line points in the direction where z < 0. The directions of these lines are given by the weights $\theta_1$ and $\theta_2$.
###Code
# Plot the samples using columns 1 and 2 of the matrix
fig, ax = plt.subplots(figsize = (8, 8))
colors = ['red', 'green']
# Color based on the sentiment Y
ax.scatter(X[:,1], X[:,2], c=[colors[int(k)] for k in Y], s = 0.1) # Plot a dot for each pair of words
plt.xlabel("Positive")
plt.ylabel("Negative")
# Now lets represent the logistic regression model in this chart.
maxpos = np.max(X[:,1])
offset = 5000 # The pos value for the direction vectors origin
# Plot a gray line that divides the 2 areas.
ax.plot([0, maxpos], [neg(theta, 0), neg(theta, maxpos)], color = 'gray')
# Plot a green line pointing to the positive direction
ax.arrow(offset, neg(theta, offset), offset, direction(theta, offset), head_width=500, head_length=500, fc='g', ec='g')
# Plot a red line pointing to the negative direction
ax.arrow(offset, neg(theta, offset), -offset, -direction(theta, offset), head_width=500, head_length=500, fc='r', ec='r')
plt.show()
###Output
_____no_output_____ |
Jornadas_2020.ipynb | ###Markdown
Universidade Federal do Rio Grande do Sul (UFRGS) Programa de Pós-Graduação em Engenharia Civil (PPGEC) Second-order effect on the fundamental frequency of slender towers [1. Introduction](section_1) [2. Exact and approximate analytical solutions](section_2) [3. Numerical solution with geometric stiffness matrix](section_3) [4. Experimental model](section_4) [5. Comparison of results](section_5) [6. Conclusions](section_6) ---_Prof. Marcelo M. Rocha, Dr.techn._ [(ORCID)](https://orcid.org/0000-0001-5640-1020) _Porto Alegre, RS, Brazil_
###Code
# Importing Python modules required for this notebook
# (this cell must be executed with "shift+enter" before any other Python cell)
import numpy as np
import matplotlib.pyplot as plt
import pickle as pk
# Module for system matrices calculation
from FEM_column import *
###Output
_____no_output_____
###Markdown
__Abstract:__ Accounting for second-order effects is essential for the proper design of slender towers with mass concentrated at the top. In this context, telecommunication towers must be designed to withstand the weight of the transmission antennas. The compressive load produced by the antenna set can make the second-order effect on the dynamic properties of the tower relevant, reducing the natural frequencies of free vibration and thus increasing the resonant response to the dynamic wind action. The present work therefore aims to evaluate experimentally, by means of reduced-scale models, the influence of second-order effects on the fundamental frequency of telecommunication towers, considering that this frequency is a direct function of the effective stiffness of the structure. For this purpose, reduced-scale models were built, with which a series of natural frequency measurements was carried out while progressively increasing the axial load at the top, up to the global buckling configuration. Comparison with the results of the theoretical frequency calculation, which neglects second-order effects, shows experimental values of the fundamental frequency that become progressively smaller as the axial load increases, thus quantifying the relevance of these effects. Finally, the experimental results are compared with a more elaborate theoretical model, in which the stiffness matrix of the tower is corrected by a linearized geometric matrix. 1. Introduction 1. The main load on towers is the wind. 2. Slender towers have an important resonant response. 3. The resonant response depends on the fundamental frequency of the tower. 4. Some towers have a large mass added at the top (observation decks, antennas, etc.). 5. Added masses cause a second-order effect on the free-vibration response. 6. In this case the dynamic response to the wind action must consider a reduced frequency. 2. Approximate analytical solutions 2.1. Exact solution for a bar of constant cross-section For slender towers of constant cross-section, the analytical solution for the natural frequency of free vibration associated with the fundamental mode is known: $$ f_{\rm n} = \frac{1}{2\pi} \left( \frac{1.8751}{H} \right)^2 \sqrt{\frac{EI}{\mu}} $$ where $H$ is the height of the tower, $EI$ is the bending stiffness and $\mu$ is the mass per unit length. The constant 1.8751 is approximate and is the smallest nonzero positive root, $a$, of the characteristic equation: $$ \cos(a) \cosh(a) + 1 = 0 $$ The remaining positive roots are associated with higher modes, which will not be considered in this work. The fundamental frequency can be calculated with this formula for the three aluminum blades with cross-section 2cm $\times$ 0.5mm and lengths of 22, 30 and 38cm, respectively, which are used in the experimental part of this work. Thus we have:
###Code
# Aluminum cross-section
EI = 7.2e10*(0.020*0.0005**3)/12
mu = 2.7e03*(0.020*0.0005)
print('Bending stiffness of the cross-section: {0:6.4f}Nm²'.format(EI))
print('Mass per unit length: {0:6.4f}kg/m'.format(mu))
###Output
Bending stiffness of the cross-section: 0.0150Nm²
Mass per unit length: 0.0270kg/m
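###Markdown
The constant 1.8751 quoted above can be checked numerically by finding the smallest positive root of the characteristic equation $\cos(a)\cosh(a) + 1 = 0$ (a short sketch using scipy, which is assumed to be available):
```python
from scipy.optimize import brentq
a1 = brentq(lambda a: np.cos(a)*np.cosh(a) + 1, 1.0, 3.0)
print('Smallest positive root of the characteristic equation: {0:6.4f}'.format(a1))
```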
###Markdown
Applying the formula to the three blade lengths to be studied, we obtain:
###Code
H22 = 0.22
H30 = 0.30
H38 = 0.38
f0 = np.sqrt(EI/mu)/(2*np.pi)
f22 = ((1.875/H22)**2)*f0
f30 = ((1.875/H30)**2)*f0
f38 = ((1.875/H38)**2)*f0
print('Fundamental frequency for the 22cm blade: {0:5.2f}Hz'.format(f22))
print('Fundamental frequency for the 30cm blade: {0:5.2f}Hz'.format(f30))
print('Fundamental frequency for the 38cm blade: {0:5.2f}Hz'.format(f38))
###Output
Fundamental frequency for the 22cm blade: 8.62Hz
Fundamental frequency for the 30cm blade: 4.63Hz
Fundamental frequency for the 38cm blade: 2.89Hz
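###Markdown
Since the analytical formula scales with $1/H^2$, the ratio between any two of these frequencies should equal the inverse square ratio of the lengths (a quick consistency check using the variables above):
```python
print('f22/f38 = {0:5.3f}  |  (H38/H22)**2 = {1:5.3f}'.format(f22/f38, (H38/H22)**2))
```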
###Markdown
It is important to note that the result above disregards the second-order effect arising from the self-weight of the tower or from the addition of any extra mass. Since the purpose of this work is precisely to quantify the second-order effect on the fundamental frequency, the critical elastic buckling loads (Euler loads) are determined below for the three blade lengths used in the experimental part. These loads correspond to the weight of the largest mass that can be added to the top of the blades and will later be used as a nondimensionalization parameter.
###Code
r0 = 0.0005/np.sqrt(12) # radius of gyration
P0 = (np.pi**2)*7.2E10*(0.020*0.0005) # numerator of Euler's formula
P22 = P0/(2*H22/r0)**2
P30 = P0/(2*H30/r0)**2
P38 = P0/(2*H38/r0)**2
print('Critical load for the 22cm blade: {0:5.3f}N ({1:4.1f}g)'.format(P22, 1000*P22/9.81))
print('Critical load for the 30cm blade: {0:5.3f}N ({1:4.1f}g)'.format(P30, 1000*P30/9.81))
print('Critical load for the 38cm blade: {0:5.3f}N ({1:4.1f}g)'.format(P38, 1000*P38/9.81))
###Output
Critical load for the 22cm blade: 0.765N (78.0g)
Critical load for the 30cm blade: 0.411N (41.9g)
Critical load for the 38cm blade: 0.256N (26.1g)
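###Markdown
The same critical loads can be cross-checked directly with the Euler formula for a cantilever, $P_{\rm cr} = \pi^2 EI / (2H)^2$, using the bending stiffness $EI$ computed earlier (a sketch with the variables above):
```python
for Hi, label in [(H22, '22cm'), (H30, '30cm'), (H38, '38cm')]:
    Pcr = (np.pi**2)*EI/(2*Hi)**2
    print('Cross-checked critical load for the {0} blade: {1:5.3f}N'.format(label, Pcr))
```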
###Markdown
2.2. Approximate solution by the Rayleigh quotient For the analytical calculation of the fundamental frequency with a mass added at the top of the tower, the Rayleigh quotient method can be used. It provides an estimator of the natural frequency of free vibration and is given by the ratio between the elastic potential energy, $V$, and the reference kinetic energy, $T_{\rm ref}$: $$ f_{\rm n} \leq \frac{1}{2\pi} \sqrt{\frac{V}{T_{\rm ref}}} $$ Computing these energies requires the definition of an interpolation function for the deflected shape, as close as possible to the modal shape. For example, the deflection curve for a horizontal load at the top is very similar to the first free-vibration mode: $$ \varphi(\xi) = \frac{1}{2}\left(3\xi^2 - \xi^3\right)$$ with $\xi = z/H$ being the nondimensional vertical coordinate. Note that this interpolation function is normalized to unit displacement at the top. Once the function $\varphi(\xi)$ is chosen, the energies $V$ and $T_{\rm ref}$ are calculated as: \begin{align*} V &= \frac{1}{2} \int_0^H { EI \left[ \varphi^{\prime\prime}(z) \right] ^2 \, dz} \\ T_{\rm ref} &= \frac{1}{2} \int_0^H {\mu \left[ \varphi(z) \right] ^2 \, dz} \end{align*} For the three blades used in the experimental part of this work, the frequencies obtained with this method are presented below:
###Code
phi = lambda z: (3*(z/H)**2 - (z/H)**3)/2 # interpolation function
ph1 = lambda z: (6*(z/H) - 3*(z/H)**2)/2/H # first derivative = rotation
ph2 = lambda z: (6 - 6*(z/H))/2/(H**2) # second derivative = curvature
n = 100 # number of discretization segments
H = H22
zi = np.linspace(0, H22, n)
V = np.trapz(ph2(zi)**2, dx=H22/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H22/n)*(mu/2)
f22r = np.sqrt(V/Tr)/(2*np.pi)
er22 = (f22r - f22)*100/f22
H = H30
zi = np.linspace(0, H30, n)
V = np.trapz(ph2(zi)**2, dx=H30/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H30/n)*(mu/2)
f30r = np.sqrt(V/Tr)/(2*np.pi)
er30 = (f30r - f30)*100/f30
H = H38
zi = np.linspace(0, H38, n)
V = np.trapz(ph2(zi)**2, dx=H38/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H38/n)*(mu/2)
f38r = np.sqrt(V/Tr)/(2*np.pi)
er38 = (f38r - f38)*100/f38
print('Fundamental frequency for the 22cm blade: {0:6.2f}Hz'.format(f22r))
print('Approximation error: {0:6.2f}% '.format(er22))
print('')
print('Fundamental frequency for the 30cm blade: {0:6.2f}Hz'.format(f30r))
print('Approximation error: {0:6.2f}% '.format(er30))
print('')
print('Fundamental frequency for the 38cm blade: {0:6.2f}Hz'.format(f38r))
print('Approximation error: {0:6.2f}% '.format(er38))
###Output
Fundamental frequency for the 22cm blade: 8.74Hz
Approximation error: 1.47%
Fundamental frequency for the 30cm blade: 4.70Hz
Approximation error: 1.47%
Fundamental frequency for the 38cm blade: 2.93Hz
Approximation error: 1.47%
###Markdown
2.3. Rayleigh quotient with added masses The error is therefore only about 1.5% for the proposed interpolation function, which intentionally differs from the deflected shape associated with the fundamental mode. However, as mass is added at the top this error tends to decrease, since the blades are then indeed subjected to an inertial load concentrated at the top, whose deflection curve tends to the proposed function. The great advantage of the Rayleigh quotient is how easily additional masses can be introduced in the denominator; they must be multiplied by the corresponding values of the interpolation function at the positions where they are located. For example, an additional mass at the top, $M_1$, must be multiplied by $\left[\varphi(H/H)\right]^2 = 1^2$, and in this case the reference kinetic energy becomes: $$ T_{\rm ref} = \frac{1}{2} \left( \int_0^H {\mu \left[ \varphi(z) \right] ^2 \, dz} + M_1 \cdot 1^2 \right) $$ As an example, let us apply the equation above to the blades used in the experimental part of this work. In each case the mass added at the top is assumed to weigh 50% of the critical buckling load:
###Code
H = H22
zi = np.linspace(0, H22, n)
V = np.trapz(ph2(zi)**2, dx=H22/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H22/n)*(mu/2) + (0.5*P22/9.81)/2
f22M = np.sqrt(V/Tr)/(2*np.pi)
H = H30
zi = np.linspace(0, H30, n)
V = np.trapz(ph2(zi)**2, dx=H30/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H30/n)*(mu/2) + (0.5*P30/9.81)/2
f30M = np.sqrt(V/Tr)/(2*np.pi)
H = H38
zi = np.linspace(0, H38, n)
V = np.trapz(ph2(zi)**2, dx=H38/n)*(EI/2)
Tr = np.trapz(phi(zi)**2, dx=H38/n)*(mu/2) + (0.5*P38/9.81)/2
f38M = np.sqrt(V/Tr)/(2*np.pi)
print('Frequency for the 22cm blade with mass at the top: {0:6.2f}Hz'.format(f22M))
print('Frequency for the 30cm blade with mass at the top: {0:6.2f}Hz'.format(f30M))
print('Frequency for the 38cm blade with mass at the top: {0:6.2f}Hz'.format(f38M))
###Output
Frequency for the 22cm blade with mass at the top: 1.62Hz
Frequency for the 30cm blade with mass at the top: 1.35Hz
Frequency for the 38cm blade with mass at the top: 1.15Hz
###Markdown
2.4. Rayleigh quotient with second-order effect Although the calculation above accounts, with excellent precision, for the additional mass at the top of a tower, it still does not include the second-order effect caused by a large compressive load. For that, the calculation of the elastic potential energy of the system must also be modified so as to account for the additional axial deformation: $$ V = \frac{1}{2} \left( \int_0^H { EI \left[ \varphi^{\prime\prime}(z) \right] ^2 \, dz} - \int_0^H { P \left[ \varphi^{\prime}(z) \right] ^2 \, dz} \right) $$ where the second integral corresponds to the work done by the compressive load through the vertical shortening of the tower. Note that the negative sign (compression) implies a reduction of the natural frequency, which tends to zero as the load $P$ approaches the critical buckling load. Applying this new equation to the blades subjected to 50% of the critical load yields the following results:
###Code
H = H22
zi = np.linspace(0, H22, n)
V = np.trapz(ph2(zi)**2, dx=H22/n)*(EI/2)
V -= np.trapz(ph1(zi)**2, dx=H22/n)*(0.5*P22/2)
Tr = np.trapz(phi(zi)**2, dx=H22/n)*(mu/2) + (0.5*P22/9.81)/2
f22P = np.sqrt(V/Tr)/(2*np.pi)
ef22 = (f22M - f22P)*100/f22P
H = H30
zi = np.linspace(0, H30, n)
V = np.trapz(ph2(zi)**2, dx=H30/n)*(EI/2)
V -= np.trapz(ph1(zi)**2, dx=H30/n)*(0.5*P30/2)
Tr = np.trapz(phi(zi)**2, dx=H30/n)*(mu/2) + (0.5*P30/9.81)/2
f30P = np.sqrt(V/Tr)/(2*np.pi)
ef30 = (f30M - f30P)*100/f30P
H = H38
zi = np.linspace(0, H38, n)
V = np.trapz(ph2(zi)**2, dx=H38/n)*(EI/2)
V -= np.trapz(ph1(zi)**2, dx=H38/n)*(0.5*P38/2)
Tr = np.trapz(phi(zi)**2, dx=H38/n)*(mu/2) + (0.5*P38/9.81)/2
f38P = np.sqrt(V/Tr)/(2*np.pi)
ef38 = (f38M - f38P)*100/f38P
print('Frequency for the 22cm blade with 2nd-order effect: {0:6.2f}Hz'.format(f22P))
print('Error due to the 2nd-order effect: {0:6.2f}% '.format(ef22))
print('')
print('Frequency for the 30cm blade with 2nd-order effect: {0:6.2f}Hz'.format(f30P))
print('Error due to the 2nd-order effect: {0:6.2f}% '.format(ef30))
print('')
print('Frequency for the 38cm blade with 2nd-order effect: {0:6.2f}Hz'.format(f38P))
print('Error due to the 2nd-order effect: {0:6.2f}% '.format(ef38))
###Output
Frequency for the 22cm blade with 2nd-order effect: 1.15Hz
Error due to the 2nd-order effect: 40.50%
Frequency for the 30cm blade with 2nd-order effect: 0.96Hz
Error due to the 2nd-order effect: 40.50%
Frequency for the 38cm blade with 2nd-order effect: 0.82Hz
Error due to the 2nd-order effect: 40.50%
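###Markdown
Since the same Rayleigh quotient computation is repeated for every blade, it can be wrapped in a small helper. The sketch below (using the constants defined above) reproduces the estimates with top mass and second-order effect in a single call:
```python
def rayleigh_frequency(H, EI, mu, M_top=0.0, P=0.0, n=100):
    # Rayleigh quotient estimate for a cantilever with optional top mass and axial load
    z = np.linspace(0, H, n)
    phi = (3*(z/H)**2 - (z/H)**3)/2          # interpolation function
    ph1 = (6*(z/H) - 3*(z/H)**2)/(2*H)       # first derivative
    ph2 = (6 - 6*(z/H))/(2*H**2)             # second derivative
    V  = np.trapz(ph2**2, z)*(EI/2) - np.trapz(ph1**2, z)*(P/2)
    Tr = np.trapz(phi**2, z)*(mu/2) + M_top/2
    return np.sqrt(V/Tr)/(2*np.pi)

print('22cm blade, P = Pcr/2: {0:5.2f}Hz'.format(
      rayleigh_frequency(H22, EI, mu, M_top=0.5*P22/9.81, P=0.5*P22)))
```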
###Markdown
Therefore, neglecting the second-order effect for a compressive load corresponding to only 50% of the critical load implies that the natural frequencies of the constant-section blades are overestimated by approximately 40%. A difference of this magnitude has severe implications for the calculation of the dynamic response amplitudes under wind action. 3. Numerical solution with geometric stiffness matrix 3.1. Elastic stiffness matrix $$ \mathbf{K} = \frac{EI}{L^3} \; \left[ \begin{array}{cccc} 12 & 6L & -12 & 6L \\ 6L & 4L^2 & -6L & 2L^2 \\ -12 & -6L & 12 & -6L \\ 6L & 2L^2 & -6L & 4L^2 \end{array} \right] $$
###Code
# Discretize the aluminum blades into 1cm elements
L22 = 0.01*np.ones(22)
L30 = 0.01*np.ones(30)
L38 = 0.01*np.ones(38)
# Elastic stiffness matrices in N/m
KE22 = stiffness(L22, EI, P=0)
KE30 = stiffness(L30, EI, P=0)
KE38 = stiffness(L38, EI, P=0)
# Matrix visualization
fig1, ax = plt.subplots(1, 3, figsize=(12,4))
plt.suptitle('Stiffness Matrices', fontweight='bold', fontsize=16)
hax0 = ax[0].imshow(KE22); tax0 = ax[0].title.set_text("K for 22cm blade")
hax1 = ax[1].imshow(KE30); tax1 = ax[1].title.set_text("K for 30cm blade")
hax2 = ax[2].imshow(KE38); tax2 = ax[2].title.set_text("K for 38cm blade")
###Output
_____no_output_____
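###Markdown
The element matrix defined above can also be written out directly. The sketch below builds the 4×4 elastic stiffness of a single 1cm element from that formula, for comparison purposes only (the assembled matrices used here come from the FEM_column module):
```python
def element_stiffness(L, EI):
    # 4x4 Euler-Bernoulli beam element stiffness (translation/rotation at both ends)
    return (EI/L**3)*np.array([[ 12 ,  6*L  , -12 ,  6*L  ],
                               [6*L , 4*L**2, -6*L, 2*L**2],
                               [-12 , -6*L  ,  12 , -6*L  ],
                               [6*L , 2*L**2, -6*L, 4*L**2]])

print(element_stiffness(0.01, EI))
```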
###Markdown
3.2. Geometric stiffness matrix $$ \mathbf{K_{\rm G}} = \frac{P}{30L} \; \left[ \begin{array}{cccc} 36 & 3L & -36 & 3L \\ 3L & 4L^2 & -3L & -L^2 \\ -36 & -3L & 36 & -3L \\ 3L & -L^2 & -3L & 4L^2 \end{array} \right] $$
###Code
# Geometric stiffness matrices in N/m (compressive load equal to half the critical load)
KG22 = stiffness(L22, EI=0, P=-P22/2)
KG30 = stiffness(L30, EI=0, P=-P30/2)
KG38 = stiffness(L38, EI=0, P=-P38/2)
# Matrix visualization
fig2, ax = plt.subplots(1, 3, figsize=(12,4))
plt.suptitle('Geometric Matrices', fontweight='bold', fontsize=16)
hax0 = ax[0].imshow(KG22); tax0 = ax[0].title.set_text("K for 22cm blade")
hax1 = ax[1].imshow(KG30); tax1 = ax[1].title.set_text("K for 30cm blade")
hax2 = ax[2].imshow(KG38); tax2 = ax[2].title.set_text("K for 38cm blade")
###Output
_____no_output_____
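###Markdown
In the same spirit, the 4×4 geometric stiffness of one element under axial force $P$ can be transcribed directly from the matrix above (again only a sketch for comparison with the FEM_column assembly):
```python
def element_geometric(L, P):
    # 4x4 linearized geometric stiffness matrix of a beam element under axial force P
    return (P/(30*L))*np.array([[ 36 ,  3*L  , -36 ,  3*L ],
                                [3*L , 4*L**2, -3*L, -L**2],
                                [-36 , -3*L  ,  36 , -3*L ],
                                [3*L , -L**2 , -3*L, 4*L**2]])

print(element_geometric(0.01, -P22/2))
```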
###Markdown
Superimposing the elastic and geometric matrices yields the stiffness matrix with the second-order effect included:
###Code
# Stiffness matrices including the second-order (geometric) effect, in N/m
K22 = stiffness(L22, EI=EI, P=-P22/2)
K30 = stiffness(L30, EI=EI, P=-P30/2)
K38 = stiffness(L38, EI=EI, P=-P38/2)
# Matrix visualization
fig3, ax = plt.subplots(1, 3, figsize=(12,4))
plt.suptitle('2nd Order Matrices', fontweight='bold', fontsize=16)
hax0 = ax[0].imshow(K22); tax0 = ax[0].title.set_text("K for 22cm blade")
hax1 = ax[1].imshow(K30); tax1 = ax[1].title.set_text("K for 30cm blade")
hax2 = ax[2].imshow(K38); tax2 = ax[2].title.set_text("K for 38cm blade")
###Output
_____no_output_____
###Markdown
3.3. Consistent mass matrix $$ \mathbf{M} = \frac{\mu L}{420} \; \left[ \begin{array}{cccc} 156 & 22L & 54 & -13L \\ 22L & 4L^2 & 13L & -3L^2 \\ 54 & 13L & 156 & -22L \\ -13L & -3L^2 & -22L & 4L^2 \end{array} \right] $$
###Code
# Consistent masses in kg
M22 = consistMass(L22, mu)
M30 = consistMass(L30, mu)
M38 = consistMass(L38, mu)
# Matrix visualization
fig4, ax = plt.subplots(1, 3, figsize=(12,4))
plt.suptitle('Mass Matrices', fontweight='bold', fontsize=16)
hax0 = ax[0].imshow(M22); tax0 = ax[0].title.set_text("M for 22cm blade")
hax1 = ax[1].imshow(M30); tax1 = ax[1].title.set_text("M for 30cm blade")
hax2 = ax[2].imshow(M38); tax2 = ax[2].title.set_text("M for 38cm blade")
###Output
_____no_output_____
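###Markdown
The consistent mass matrix of a single element follows the same pattern (a sketch mirroring the formula above, with $\mu$ the mass per unit length):
```python
def element_mass(L, mu):
    # 4x4 consistent mass matrix of a beam element
    return (mu*L/420)*np.array([[ 156 , 22*L  ,  54 , -13*L ],
                                [22*L , 4*L**2, 13*L, -3*L**2],
                                [  54 , 13*L  , 156 , -22*L ],
                                [-13*L, -3*L**2, -22*L, 4*L**2]])

print(element_mass(0.01, mu))
```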
###Markdown
3.4. Estimation of natural frequencies
###Code
import scipy.linalg as sc
# For L = 22cm
Z22 = L22.cumsum()[::-1]
KT22 = K22[:-2,:-2]
MT22 = M22[:-2,:-2]
MT22[0,0] += 0.5*P22/9.81 # added top mass equal to 50% of the buckling mass
w22, Ph22 = sc.eig(KT22, MT22)
iw = w22.argsort()
w22 = w22[iw]
Ph22 = Ph22[:,iw]
wk22 = np.sqrt(np.real(w22))
fk22 = wk22/2/np.pi
fig5, ax = plt.subplots(1, 3, figsize=(12,6))
plt.suptitle('Vibration Modes', fontweight='bold', fontsize=16)
for k in range(3):
    pk = Ph22[0:-1:2,k] # keep only the translational DOFs
    pm = np.max(np.abs(pk)) # normalize to unit maximum amplitude
    ha = ax[k].plot(pk/pm, Z22);
    tx = ax[k].title.set_text('Mode {0} :: fk = {1:4.2f}Hz'.format(k+1,fk22[k]))
    ax[k].axis([-1.5, 1.5, 0, Z22[0]])
    ax[k].grid(True)
H = 0.22
P1 = P22/2
n = 22
fn = analyseCase(H, EI, mu, P1, n)
print(fn)
###Output
1.6282741550177127
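###Markdown
As a final consistency check (a sketch that simply prints quantities already computed above), the first finite-element frequency for the 22cm blade can be placed side by side with the second-order Rayleigh estimate from section 2.4:
```python
print('First FEM frequency, 22cm blade (P = Pcr/2, top mass): {0:5.2f}Hz'.format(fk22[0]))
print('Rayleigh estimate with 2nd-order effect (section 2.4): {0:5.2f}Hz'.format(f22P))
```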
|
Silver/.ipynb_checkpoints/D06_Shors_Algorithm_In_More_Detail_Solutions-checkpoint.ipynb | ###Markdown
prepared by Özlem Salehi (QTurkey) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $\newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)}$ $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ Solutions for Shor's Algorithm In More Detail Task 1 (on paper) Show that after applying the controlled $U^{2^j}$ gates for $j=0,\dots,t-1$, the state obtained can be expressed as $\displaystyle \frac{1}{2^{t/2}} \sum_{k=0}^{2^t-1} \ket{k}U^k\ket{\psi}$. Solution We have already seen the effect of the first gate, corresponding to the case $j=0$. Let's continue with the remaining gates. - $ j=0 $, qubit $ t $ is the control. If $ k_t=0 $, the new state is $\ket{k} \ket{\psi} $. If $ k_t=1 $, the new state is $ e^{2\pi i \phi 2^0} \ket{k} \ket{\psi} $. Hence, we can write it as $\ket{k} U^{k_t 2^0} \ket{\psi}$. - $ j=1 $, qubit $ t-1 $ is the control. If $ k_{t-1}=0 $, the new state is $ \ket{k} U^{k_t 2^0} \ket{\psi} $. If $ k_{t-1}=1 $, the new state is $ \ket{k} e^{2\pi i \phi 2^1} U^{k_t 2^0} \ket{\psi} $. Hence, we can write it as $\ket{k} U^{k_{t-1} 2^1} U^{k_t 2^0} \ket{\psi}$. Applying the $ CU^{2^j} $ gates for each qubit, we obtain the following state at the end: $\ket{k} U^{k_1 2^{t-1}} \cdots U^{k_{t-1} 2^1} U^{k_t 2^0} \ket{\psi} = \ket{k} U^{k_1 2^{t-1}+\cdots +k_{t-1} 2^1+ k_t 2^0} \ket{\psi} $ Now noting that $ k_1 2^{t-1}+\cdots +k_{t-1} 2^1+k_t 2^0=k $, we can express the state obtained as $\displaystyle \frac{1}{2^{t/2}} \sum_{k=0}^{2^t-1} \ket{k}U^k\ket{\psi}$. Task 2 Implement the order finding procedure until the Inverse Quantum Fourier Transform and check whether you obtain the above state. Simulate the circuit without measuring it. Use the function dirac_notation() to print the state representation after getting the results. Check the first 5 states, convert them to integer representation and compare with the above expression.
Recall that to implement the $CU$ operator you should pass $x$ and $N$ as parameters to the function Ux, i.e. CU=Ux(x,N). Run the following cell to load the function.
###Code
%run operator.py
###Output
_____no_output_____
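###Markdown
Before simulating the circuit, it helps to know what to expect: with $x=7$, $N=15$, $t=12$ control qubits and $n=4$ target qubits, each basis state of the superposition $\frac{1}{2^{t/2}}\sum_k \ket{k}\,U^k\ket{1}$ has the target register holding $7^k \bmod 15$. The short classical sketch below (plain Python, no Cirq needed) lists the first five such states, which should match the leading bitstrings printed by dirac_notation() further down:
```python
x, N, t, n = 7, 15, 12, 4
for k in range(5):
    tgt = pow(x, k, N)                      # content of the target register, x^k mod N
    label = format(k, '0{}b'.format(t)) + format(tgt, '0{}b'.format(n))
    print('k = {0}: |{1}>  (control = {0}, target = {2})'.format(k, label, tgt))
```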
###Markdown
Solution
###Code
import cirq
import matplotlib
#Create a circuit
circuit = cirq.Circuit()
#Set t and n, sizes of the registers
t=12
n=4
#Create control and target qubits
control = [cirq.LineQubit(i) for i in range(1,t+1) ]
target = [cirq.LineQubit(i) for i in range(t+1,t+1+n) ]
circuit.append(cirq.X(target[n-1])) # prepare the target register in state |0...01>, i.e. the integer 1
#Create the operator CU
x=7
N=15
CU=Ux(x,N)
#Apply Hadamard to control qubits
circuit.append(cirq.H.on_each(control))
#Apply CU gates
for i in range(t):
#Obtain the power of CU gate
CUi = CU**(2**i)
#Apply CUi gate where t-i-1 is the control
circuit.append(CUi(control[t-i-1],*target))
#Simulate the circuit
s = cirq.Simulator()
#Use Dirac notation to print the output
results=s.simulate(circuit)
print(results.dirac_notation())
###Output
0.02|0000000000000001⟩ + 0.02|0000000000010111⟩ + 0.02|0000000000100100⟩ + 0.02|0000000000111101⟩ + 0.02|0000000001000001⟩ + 0.02|0000000001010111⟩ + 0.02|0000000001100100⟩ + 0.02|0000000001111101⟩ + 0.02|0000000010000001⟩ + 0.02|0000000010010111⟩ + 0.02|0000000010100100⟩ + 0.02|0000000010111101⟩ + 0.02|0000000011000001⟩ + 0.02|0000000011010111⟩ + 0.02|0000000011100100⟩ + 0.02|0000000011111101⟩ + 0.02|0000000100000001⟩ + 0.02|0000000100010111⟩ + 0.02|0000000100100100⟩ + 0.02|0000000100111101⟩ + 0.02|0000000101000001⟩ + 0.02|0000000101010111⟩ + 0.02|0000000101100100⟩ + 0.02|0000000101111101⟩ + 0.02|0000000110000001⟩ + 0.02|0000000110010111⟩ + 0.02|0000000110100100⟩ + 0.02|0000000110111101⟩ + 0.02|0000000111000001⟩ + 0.02|0000000111010111⟩ + 0.02|0000000111100100⟩ + 0.02|0000000111111101⟩ + 0.02|0000001000000001⟩ + 0.02|0000001000010111⟩ + 0.02|0000001000100100⟩ + 0.02|0000001000111101⟩ + 0.02|0000001001000001⟩ + 0.02|0000001001010111⟩ + 0.02|0000001001100100⟩ + 0.02|0000001001111101⟩ + 0.02|0000001010000001⟩ + 0.02|0000001010010111⟩ + 0.02|0000001010100100⟩ + 0.02|0000001010111101⟩ + 0.02|0000001011000001⟩ + 0.02|0000001011010111⟩ + 0.02|0000001011100100⟩ + 0.02|0000001011111101⟩ + 0.02|0000001100000001⟩ + 0.02|0000001100010111⟩ + 0.02|0000001100100100⟩ + 0.02|0000001100111101⟩ + 0.02|0000001101000001⟩ + 0.02|0000001101010111⟩ + 0.02|0000001101100100⟩ + 0.02|0000001101111101⟩ + 0.02|0000001110000001⟩ + 0.02|0000001110010111⟩ + 0.02|0000001110100100⟩ + 0.02|0000001110111101⟩ + 0.02|0000001111000001⟩ + 0.02|0000001111010111⟩ + 0.02|0000001111100100⟩ + 0.02|0000001111111101⟩ + 0.02|0000010000000001⟩ + 0.02|0000010000010111⟩ + 0.02|0000010000100100⟩ + 0.02|0000010000111101⟩ + 0.02|0000010001000001⟩ + 0.02|0000010001010111⟩ + 0.02|0000010001100100⟩ + 0.02|0000010001111101⟩ + 0.02|0000010010000001⟩ + 0.02|0000010010010111⟩ + 0.02|0000010010100100⟩ + 0.02|0000010010111101⟩ + 0.02|0000010011000001⟩ + 0.02|0000010011010111⟩ + 0.02|0000010011100100⟩ + 0.02|0000010011111101⟩ + 0.02|0000010100000001⟩ + 0.02|0000010100010111⟩ + 0.02|0000010100100100⟩ + 0.02|0000010100111101⟩ + 0.02|0000010101000001⟩ + 0.02|0000010101010111⟩ + 0.02|0000010101100100⟩ + 0.02|0000010101111101⟩ + 0.02|0000010110000001⟩ + 0.02|0000010110010111⟩ + 0.02|0000010110100100⟩ + 0.02|0000010110111101⟩ + 0.02|0000010111000001⟩ + 0.02|0000010111010111⟩ + 0.02|0000010111100100⟩ + 0.02|0000010111111101⟩ + 0.02|0000011000000001⟩ + 0.02|0000011000010111⟩ + 0.02|0000011000100100⟩ + 0.02|0000011000111101⟩ + 0.02|0000011001000001⟩ + 0.02|0000011001010111⟩ + 0.02|0000011001100100⟩ + 0.02|0000011001111101⟩ + 0.02|0000011010000001⟩ + 0.02|0000011010010111⟩ + 0.02|0000011010100100⟩ + 0.02|0000011010111101⟩ + 0.02|0000011011000001⟩ + 0.02|0000011011010111⟩ + 0.02|0000011011100100⟩ + 0.02|0000011011111101⟩ + 0.02|0000011100000001⟩ + 0.02|0000011100010111⟩ + 0.02|0000011100100100⟩ + 0.02|0000011100111101⟩ + 0.02|0000011101000001⟩ + 0.02|0000011101010111⟩ + 0.02|0000011101100100⟩ + 0.02|0000011101111101⟩ + 0.02|0000011110000001⟩ + 0.02|0000011110010111⟩ + 0.02|0000011110100100⟩ + 0.02|0000011110111101⟩ + 0.02|0000011111000001⟩ + 0.02|0000011111010111⟩ + 0.02|0000011111100100⟩ + 0.02|0000011111111101⟩ + 0.02|0000100000000001⟩ + 0.02|0000100000010111⟩ + 0.02|0000100000100100⟩ + 0.02|0000100000111101⟩ + 0.02|0000100001000001⟩ + 0.02|0000100001010111⟩ + 0.02|0000100001100100⟩ + 0.02|0000100001111101⟩ + 0.02|0000100010000001⟩ + 0.02|0000100010010111⟩ + 0.02|0000100010100100⟩ + 0.02|0000100010111101⟩ + 0.02|0000100011000001⟩ + 0.02|0000100011010111⟩ + 
0.02|0000100011100100⟩ + 0.02|0000100011111101⟩ + 0.02|0000100100000001⟩ + 0.02|0000100100010111⟩ + 0.02|0000100100100100⟩ + 0.02|0000100100111101⟩ + 0.02|0000100101000001⟩ + 0.02|0000100101010111⟩ + 0.02|0000100101100100⟩ + 0.02|0000100101111101⟩ + 0.02|0000100110000001⟩ + 0.02|0000100110010111⟩ + 0.02|0000100110100100⟩ + 0.02|0000100110111101⟩ + 0.02|0000100111000001⟩ + 0.02|0000100111010111⟩ + 0.02|0000100111100100⟩ + 0.02|0000100111111101⟩ + 0.02|0000101000000001⟩ + 0.02|0000101000010111⟩ + 0.02|0000101000100100⟩ + 0.02|0000101000111101⟩ + 0.02|0000101001000001⟩ + 0.02|0000101001010111⟩ + 0.02|0000101001100100⟩ + 0.02|0000101001111101⟩ + 0.02|0000101010000001⟩ + 0.02|0000101010010111⟩ + 0.02|0000101010100100⟩ + 0.02|0000101010111101⟩ + 0.02|0000101011000001⟩ + 0.02|0000101011010111⟩ + 0.02|0000101011100100⟩ + 0.02|0000101011111101⟩ + 0.02|0000101100000001⟩ + 0.02|0000101100010111⟩ + 0.02|0000101100100100⟩ + 0.02|0000101100111101⟩ + 0.02|0000101101000001⟩ + 0.02|0000101101010111⟩ + 0.02|0000101101100100⟩ + 0.02|0000101101111101⟩ + 0.02|0000101110000001⟩ + 0.02|0000101110010111⟩ + 0.02|0000101110100100⟩ + 0.02|0000101110111101⟩ + 0.02|0000101111000001⟩ + 0.02|0000101111010111⟩ + 0.02|0000101111100100⟩ + 0.02|0000101111111101⟩ + 0.02|0000110000000001⟩ + 0.02|0000110000010111⟩ + 0.02|0000110000100100⟩ + 0.02|0000110000111101⟩ + 0.02|0000110001000001⟩ + 0.02|0000110001010111⟩ + 0.02|0000110001100100⟩ + 0.02|0000110001111101⟩ + 0.02|0000110010000001⟩ + 0.02|0000110010010111⟩ + 0.02|0000110010100100⟩ + 0.02|0000110010111101⟩ + 0.02|0000110011000001⟩ + 0.02|0000110011010111⟩ + 0.02|0000110011100100⟩ + 0.02|0000110011111101⟩ + 0.02|0000110100000001⟩ + 0.02|0000110100010111⟩ + 0.02|0000110100100100⟩ + 0.02|0000110100111101⟩ + 0.02|0000110101000001⟩ + 0.02|0000110101010111⟩ + 0.02|0000110101100100⟩ + 0.02|0000110101111101⟩ + 0.02|0000110110000001⟩ + 0.02|0000110110010111⟩ + 0.02|0000110110100100⟩ + 0.02|0000110110111101⟩ + 0.02|0000110111000001⟩ + 0.02|0000110111010111⟩ + 0.02|0000110111100100⟩ + 0.02|0000110111111101⟩ + 0.02|0000111000000001⟩ + 0.02|0000111000010111⟩ + 0.02|0000111000100100⟩ + 0.02|0000111000111101⟩ + 0.02|0000111001000001⟩ + 0.02|0000111001010111⟩ + 0.02|0000111001100100⟩ + 0.02|0000111001111101⟩ + 0.02|0000111010000001⟩ + 0.02|0000111010010111⟩ + 0.02|0000111010100100⟩ + 0.02|0000111010111101⟩ + 0.02|0000111011000001⟩ + 0.02|0000111011010111⟩ + 0.02|0000111011100100⟩ + 0.02|0000111011111101⟩ + 0.02|0000111100000001⟩ + 0.02|0000111100010111⟩ + 0.02|0000111100100100⟩ + 0.02|0000111100111101⟩ + 0.02|0000111101000001⟩ + 0.02|0000111101010111⟩ + 0.02|0000111101100100⟩ + 0.02|0000111101111101⟩ + 0.02|0000111110000001⟩ + 0.02|0000111110010111⟩ + 0.02|0000111110100100⟩ + 0.02|0000111110111101⟩ + 0.02|0000111111000001⟩ + 0.02|0000111111010111⟩ + 0.02|0000111111100100⟩ + 0.02|0000111111111101⟩ + 0.02|0001000000000001⟩ + 0.02|0001000000010111⟩ + 0.02|0001000000100100⟩ + 0.02|0001000000111101⟩ + 0.02|0001000001000001⟩ + 0.02|0001000001010111⟩ + 0.02|0001000001100100⟩ + 0.02|0001000001111101⟩ + 0.02|0001000010000001⟩ + 0.02|0001000010010111⟩ + 0.02|0001000010100100⟩ + 0.02|0001000010111101⟩ + 0.02|0001000011000001⟩ + 0.02|0001000011010111⟩ + 0.02|0001000011100100⟩ + 0.02|0001000011111101⟩ + 0.02|0001000100000001⟩ + 0.02|0001000100010111⟩ + 0.02|0001000100100100⟩ + 0.02|0001000100111101⟩ + 0.02|0001000101000001⟩ + 0.02|0001000101010111⟩ + 0.02|0001000101100100⟩ + 0.02|0001000101111101⟩ + 0.02|0001000110000001⟩ + 0.02|0001000110010111⟩ + 0.02|0001000110100100⟩ + 0.02|0001000110111101⟩ + 
0.02|0001000111000001⟩ + 0.02|0001000111010111⟩ + 0.02|0001000111100100⟩ + 0.02|0001000111111101⟩ + 0.02|0001001000000001⟩ + 0.02|0001001000010111⟩ + 0.02|0001001000100100⟩ + 0.02|0001001000111101⟩ + 0.02|0001001001000001⟩ + 0.02|0001001001010111⟩ + 0.02|0001001001100100⟩ + 0.02|0001001001111101⟩ + 0.02|0001001010000001⟩ + 0.02|0001001010010111⟩ + 0.02|0001001010100100⟩ + 0.02|0001001010111101⟩ + 0.02|0001001011000001⟩ + 0.02|0001001011010111⟩ + 0.02|0001001011100100⟩ + 0.02|0001001011111101⟩ + 0.02|0001001100000001⟩ + 0.02|0001001100010111⟩ + 0.02|0001001100100100⟩ + 0.02|0001001100111101⟩ + 0.02|0001001101000001⟩ + 0.02|0001001101010111⟩ + 0.02|0001001101100100⟩ + 0.02|0001001101111101⟩ + 0.02|0001001110000001⟩ + 0.02|0001001110010111⟩ + 0.02|0001001110100100⟩ + 0.02|0001001110111101⟩ + 0.02|0001001111000001⟩ + 0.02|0001001111010111⟩ + 0.02|0001001111100100⟩ + 0.02|0001001111111101⟩ + 0.02|0001010000000001⟩ + 0.02|0001010000010111⟩ + 0.02|0001010000100100⟩ + 0.02|0001010000111101⟩ + 0.02|0001010001000001⟩ + 0.02|0001010001010111⟩ + 0.02|0001010001100100⟩ + 0.02|0001010001111101⟩ + 0.02|0001010010000001⟩ + 0.02|0001010010010111⟩ + 0.02|0001010010100100⟩ + 0.02|0001010010111101⟩ + 0.02|0001010011000001⟩ + 0.02|0001010011010111⟩ + 0.02|0001010011100100⟩ + 0.02|0001010011111101⟩ + 0.02|0001010100000001⟩ + 0.02|0001010100010111⟩ + 0.02|0001010100100100⟩ + 0.02|0001010100111101⟩ + 0.02|0001010101000001⟩ + 0.02|0001010101010111⟩ + 0.02|0001010101100100⟩ + 0.02|0001010101111101⟩ + 0.02|0001010110000001⟩ + 0.02|0001010110010111⟩ + 0.02|0001010110100100⟩ + 0.02|0001010110111101⟩ + 0.02|0001010111000001⟩ + 0.02|0001010111010111⟩ + 0.02|0001010111100100⟩ + 0.02|0001010111111101⟩ + 0.02|0001011000000001⟩ + 0.02|0001011000010111⟩ + 0.02|0001011000100100⟩ + 0.02|0001011000111101⟩ + 0.02|0001011001000001⟩ + 0.02|0001011001010111⟩ + 0.02|0001011001100100⟩ + 0.02|0001011001111101⟩ + 0.02|0001011010000001⟩ + 0.02|0001011010010111⟩ + 0.02|0001011010100100⟩ + 0.02|0001011010111101⟩ + 0.02|0001011011000001⟩ + 0.02|0001011011010111⟩ + 0.02|0001011011100100⟩ + 0.02|0001011011111101⟩ + 0.02|0001011100000001⟩ + 0.02|0001011100010111⟩ + 0.02|0001011100100100⟩ + 0.02|0001011100111101⟩ + 0.02|0001011101000001⟩ + 0.02|0001011101010111⟩ + 0.02|0001011101100100⟩ + 0.02|0001011101111101⟩ + 0.02|0001011110000001⟩ + 0.02|0001011110010111⟩ + 0.02|0001011110100100⟩ + 0.02|0001011110111101⟩ + 0.02|0001011111000001⟩ + 0.02|0001011111010111⟩ + 0.02|0001011111100100⟩ + 0.02|0001011111111101⟩ + 0.02|0001100000000001⟩ + 0.02|0001100000010111⟩ + 0.02|0001100000100100⟩ + 0.02|0001100000111101⟩ + 0.02|0001100001000001⟩ + 0.02|0001100001010111⟩ + 0.02|0001100001100100⟩ + 0.02|0001100001111101⟩ + 0.02|0001100010000001⟩ + 0.02|0001100010010111⟩ + 0.02|0001100010100100⟩ + 0.02|0001100010111101⟩ + 0.02|0001100011000001⟩ + 0.02|0001100011010111⟩ + 0.02|0001100011100100⟩ + 0.02|0001100011111101⟩ + 0.02|0001100100000001⟩ + 0.02|0001100100010111⟩ + 0.02|0001100100100100⟩ + 0.02|0001100100111101⟩ + 0.02|0001100101000001⟩ + 0.02|0001100101010111⟩ + 0.02|0001100101100100⟩ + 0.02|0001100101111101⟩ + 0.02|0001100110000001⟩ + 0.02|0001100110010111⟩ + 0.02|0001100110100100⟩ + 0.02|0001100110111101⟩ + 0.02|0001100111000001⟩ + 0.02|0001100111010111⟩ + 0.02|0001100111100100⟩ + 0.02|0001100111111101⟩ + 0.02|0001101000000001⟩ + 0.02|0001101000010111⟩ + 0.02|0001101000100100⟩ + 0.02|0001101000111101⟩ + 0.02|0001101001000001⟩ + 0.02|0001101001010111⟩ + 0.02|0001101001100100⟩ + 0.02|0001101001111101⟩ + 0.02|0001101010000001⟩ + 0.02|0001101010010111⟩ + 
0.02|0001101010100100⟩ + 0.02|0001101010111101⟩ + 0.02|0001101011000001⟩ + 0.02|0001101011010111⟩ + 0.02|0001101011100100⟩ + 0.02|0001101011111101⟩ + 0.02|0001101100000001⟩ + 0.02|0001101100010111⟩ + 0.02|0001101100100100⟩ + 0.02|0001101100111101⟩ + 0.02|0001101101000001⟩ + 0.02|0001101101010111⟩ + 0.02|0001101101100100⟩ + 0.02|0001101101111101⟩ + 0.02|0001101110000001⟩ + 0.02|0001101110010111⟩ + 0.02|0001101110100100⟩ + 0.02|0001101110111101⟩ + 0.02|0001101111000001⟩ + 0.02|0001101111010111⟩ + 0.02|0001101111100100⟩ + 0.02|0001101111111101⟩ + 0.02|0001110000000001⟩ + 0.02|0001110000010111⟩ + 0.02|0001110000100100⟩ + 0.02|0001110000111101⟩ + 0.02|0001110001000001⟩ + 0.02|0001110001010111⟩ + 0.02|0001110001100100⟩ + 0.02|0001110001111101⟩ + 0.02|0001110010000001⟩ + 0.02|0001110010010111⟩ + 0.02|0001110010100100⟩ + 0.02|0001110010111101⟩ + 0.02|0001110011000001⟩ + 0.02|0001110011010111⟩ + 0.02|0001110011100100⟩ + 0.02|0001110011111101⟩ + 0.02|0001110100000001⟩ + 0.02|0001110100010111⟩ + 0.02|0001110100100100⟩ + 0.02|0001110100111101⟩ + 0.02|0001110101000001⟩ + 0.02|0001110101010111⟩ + 0.02|0001110101100100⟩ + 0.02|0001110101111101⟩ + 0.02|0001110110000001⟩ + 0.02|0001110110010111⟩ + 0.02|0001110110100100⟩ + 0.02|0001110110111101⟩ + 0.02|0001110111000001⟩ + 0.02|0001110111010111⟩ + 0.02|0001110111100100⟩ + 0.02|0001110111111101⟩ + 0.02|0001111000000001⟩ + 0.02|0001111000010111⟩ + 0.02|0001111000100100⟩ + 0.02|0001111000111101⟩ + 0.02|0001111001000001⟩ + 0.02|0001111001010111⟩ + 0.02|0001111001100100⟩ + 0.02|0001111001111101⟩ + 0.02|0001111010000001⟩ + 0.02|0001111010010111⟩ + 0.02|0001111010100100⟩ + 0.02|0001111010111101⟩ + 0.02|0001111011000001⟩ + 0.02|0001111011010111⟩ + 0.02|0001111011100100⟩ + 0.02|0001111011111101⟩ + 0.02|0001111100000001⟩ + 0.02|0001111100010111⟩ + 0.02|0001111100100100⟩ + 0.02|0001111100111101⟩ + 0.02|0001111101000001⟩ + 0.02|0001111101010111⟩ + 0.02|0001111101100100⟩ + 0.02|0001111101111101⟩ + 0.02|0001111110000001⟩ + 0.02|0001111110010111⟩ + 0.02|0001111110100100⟩ + 0.02|0001111110111101⟩ + 0.02|0001111111000001⟩ + 0.02|0001111111010111⟩ + 0.02|0001111111100100⟩ + 0.02|0001111111111101⟩ + 0.02|0010000000000001⟩ + 0.02|0010000000010111⟩ + 0.02|0010000000100100⟩ + 0.02|0010000000111101⟩ + 0.02|0010000001000001⟩ + 0.02|0010000001010111⟩ + 0.02|0010000001100100⟩ + 0.02|0010000001111101⟩ + 0.02|0010000010000001⟩ + 0.02|0010000010010111⟩ + 0.02|0010000010100100⟩ + 0.02|0010000010111101⟩ + 0.02|0010000011000001⟩ + 0.02|0010000011010111⟩ + 0.02|0010000011100100⟩ + 0.02|0010000011111101⟩ + 0.02|0010000100000001⟩ + 0.02|0010000100010111⟩ + 0.02|0010000100100100⟩ + 0.02|0010000100111101⟩ + 0.02|0010000101000001⟩ + 0.02|0010000101010111⟩ + 0.02|0010000101100100⟩ + 0.02|0010000101111101⟩ + 0.02|0010000110000001⟩ + 0.02|0010000110010111⟩ + 0.02|0010000110100100⟩ + 0.02|0010000110111101⟩ + 0.02|0010000111000001⟩ + 0.02|0010000111010111⟩ + 0.02|0010000111100100⟩ + 0.02|0010000111111101⟩ + 0.02|0010001000000001⟩ + 0.02|0010001000010111⟩ + 0.02|0010001000100100⟩ + 0.02|0010001000111101⟩ + 0.02|0010001001000001⟩ + 0.02|0010001001010111⟩ + 0.02|0010001001100100⟩ + 0.02|0010001001111101⟩ + 0.02|0010001010000001⟩ + 0.02|0010001010010111⟩ + 0.02|0010001010100100⟩ + 0.02|0010001010111101⟩ + 0.02|0010001011000001⟩ + 0.02|0010001011010111⟩ + 0.02|0010001011100100⟩ + 0.02|0010001011111101⟩ + 0.02|0010001100000001⟩ + 0.02|0010001100010111⟩ + 0.02|0010001100100100⟩ + 0.02|0010001100111101⟩ + 0.02|0010001101000001⟩ + 0.02|0010001101010111⟩ + 0.02|0010001101100100⟩ + 0.02|0010001101111101⟩ + 
0.02|0010001110000001⟩ + 0.02|0010001110010111⟩ + 0.02|0010001110100100⟩ + 0.02|0010001110111101⟩ + 0.02|0010001111000001⟩ + 0.02|0010001111010111⟩ + 0.02|0010001111100100⟩ + 0.02|0010001111111101⟩ + 0.02|0010010000000001⟩ + 0.02|0010010000010111⟩ + 0.02|0010010000100100⟩ + 0.02|0010010000111101⟩ + 0.02|0010010001000001⟩ + 0.02|0010010001010111⟩ + 0.02|0010010001100100⟩ + 0.02|0010010001111101⟩ + 0.02|0010010010000001⟩ + 0.02|0010010010010111⟩ + 0.02|0010010010100100⟩ + 0.02|0010010010111101⟩ + 0.02|0010010011000001⟩ + 0.02|0010010011010111⟩ + 0.02|0010010011100100⟩ + 0.02|0010010011111101⟩ + 0.02|0010010100000001⟩ + 0.02|0010010100010111⟩ + 0.02|0010010100100100⟩ + 0.02|0010010100111101⟩ + 0.02|0010010101000001⟩ + 0.02|0010010101010111⟩ + 0.02|0010010101100100⟩ + 0.02|0010010101111101⟩ + 0.02|0010010110000001⟩ + 0.02|0010010110010111⟩ + 0.02|0010010110100100⟩ + 0.02|0010010110111101⟩ + 0.02|0010010111000001⟩ + 0.02|0010010111010111⟩ + 0.02|0010010111100100⟩ + 0.02|0010010111111101⟩ + 0.02|0010011000000001⟩ + 0.02|0010011000010111⟩ + 0.02|0010011000100100⟩ + 0.02|0010011000111101⟩ + 0.02|0010011001000001⟩ + 0.02|0010011001010111⟩ + 0.02|0010011001100100⟩ + 0.02|0010011001111101⟩ + 0.02|0010011010000001⟩ + 0.02|0010011010010111⟩ + 0.02|0010011010100100⟩ + 0.02|0010011010111101⟩ + 0.02|0010011011000001⟩ + 0.02|0010011011010111⟩ + 0.02|0010011011100100⟩ + 0.02|0010011011111101⟩ + 0.02|0010011100000001⟩ + 0.02|0010011100010111⟩ + 0.02|0010011100100100⟩ + 0.02|0010011100111101⟩ + 0.02|0010011101000001⟩ + 0.02|0010011101010111⟩ + 0.02|0010011101100100⟩ + 0.02|0010011101111101⟩ + 0.02|0010011110000001⟩ + 0.02|0010011110010111⟩ + 0.02|0010011110100100⟩ + 0.02|0010011110111101⟩ + 0.02|0010011111000001⟩ + 0.02|0010011111010111⟩ + 0.02|0010011111100100⟩ + 0.02|0010011111111101⟩ + 0.02|0010100000000001⟩ + 0.02|0010100000010111⟩ + 0.02|0010100000100100⟩ + 0.02|0010100000111101⟩ + 0.02|0010100001000001⟩ + 0.02|0010100001010111⟩ + 0.02|0010100001100100⟩ + 0.02|0010100001111101⟩ + 0.02|0010100010000001⟩ + 0.02|0010100010010111⟩ + 0.02|0010100010100100⟩ + 0.02|0010100010111101⟩ + 0.02|0010100011000001⟩ + 0.02|0010100011010111⟩ + 0.02|0010100011100100⟩ + 0.02|0010100011111101⟩ + 0.02|0010100100000001⟩ + 0.02|0010100100010111⟩ + 0.02|0010100100100100⟩ + 0.02|0010100100111101⟩ + 0.02|0010100101000001⟩ + 0.02|0010100101010111⟩ + 0.02|0010100101100100⟩ + 0.02|0010100101111101⟩ + 0.02|0010100110000001⟩ + 0.02|0010100110010111⟩ + 0.02|0010100110100100⟩ + 0.02|0010100110111101⟩ + 0.02|0010100111000001⟩ + 0.02|0010100111010111⟩ + 0.02|0010100111100100⟩ + 0.02|0010100111111101⟩ + 0.02|0010101000000001⟩ + 0.02|0010101000010111⟩ + 0.02|0010101000100100⟩ + 0.02|0010101000111101⟩ + 0.02|0010101001000001⟩ + 0.02|0010101001010111⟩ + 0.02|0010101001100100⟩ + 0.02|0010101001111101⟩ + 0.02|0010101010000001⟩ + 0.02|0010101010010111⟩ + 0.02|0010101010100100⟩ + 0.02|0010101010111101⟩ + 0.02|0010101011000001⟩ + 0.02|0010101011010111⟩ + 0.02|0010101011100100⟩ + 0.02|0010101011111101⟩ + 0.02|0010101100000001⟩ + 0.02|0010101100010111⟩ + 0.02|0010101100100100⟩ + 0.02|0010101100111101⟩ + 0.02|0010101101000001⟩ + 0.02|0010101101010111⟩ + 0.02|0010101101100100⟩ + 0.02|0010101101111101⟩ + 0.02|0010101110000001⟩ + 0.02|0010101110010111⟩ + 0.02|0010101110100100⟩ + 0.02|0010101110111101⟩ + 0.02|0010101111000001⟩ + 0.02|0010101111010111⟩ + 0.02|0010101111100100⟩ + 0.02|0010101111111101⟩ + 0.02|0010110000000001⟩ + 0.02|0010110000010111⟩ + 0.02|0010110000100100⟩ + 0.02|0010110000111101⟩ + 0.02|0010110001000001⟩ + 0.02|0010110001010111⟩ + 
0.02|0010110001100100⟩ + 0.02|0010110001111101⟩ + 0.02|0010110010000001⟩ + 0.02|0010110010010111⟩ + 0.02|0010110010100100⟩ + 0.02|0010110010111101⟩ + 0.02|0010110011000001⟩ + 0.02|0010110011010111⟩ + 0.02|0010110011100100⟩ + 0.02|0010110011111101⟩ + 0.02|0010110100000001⟩ + 0.02|0010110100010111⟩ + 0.02|0010110100100100⟩ + 0.02|0010110100111101⟩ + 0.02|0010110101000001⟩ + 0.02|0010110101010111⟩ + 0.02|0010110101100100⟩ + 0.02|0010110101111101⟩ + 0.02|0010110110000001⟩ + 0.02|0010110110010111⟩ + 0.02|0010110110100100⟩ + 0.02|0010110110111101⟩ + 0.02|0010110111000001⟩ + 0.02|0010110111010111⟩ + 0.02|0010110111100100⟩ + 0.02|0010110111111101⟩ + 0.02|0010111000000001⟩ + 0.02|0010111000010111⟩ + 0.02|0010111000100100⟩ + 0.02|0010111000111101⟩ + 0.02|0010111001000001⟩ + 0.02|0010111001010111⟩ + 0.02|0010111001100100⟩ + 0.02|0010111001111101⟩ + 0.02|0010111010000001⟩ + 0.02|0010111010010111⟩ + 0.02|0010111010100100⟩ + 0.02|0010111010111101⟩ + 0.02|0010111011000001⟩ + 0.02|0010111011010111⟩ + 0.02|0010111011100100⟩ + 0.02|0010111011111101⟩ + 0.02|0010111100000001⟩ + 0.02|0010111100010111⟩ + 0.02|0010111100100100⟩ + 0.02|0010111100111101⟩ + 0.02|0010111101000001⟩ + 0.02|0010111101010111⟩ + 0.02|0010111101100100⟩ + 0.02|0010111101111101⟩ + 0.02|0010111110000001⟩ + 0.02|0010111110010111⟩ + 0.02|0010111110100100⟩ + 0.02|0010111110111101⟩ + 0.02|0010111111000001⟩ + 0.02|0010111111010111⟩ + 0.02|0010111111100100⟩ + 0.02|0010111111111101⟩ + 0.02|0011000000000001⟩ + 0.02|0011000000010111⟩ + 0.02|0011000000100100⟩ + 0.02|0011000000111101⟩ + 0.02|0011000001000001⟩ + 0.02|0011000001010111⟩ + 0.02|0011000001100100⟩ + 0.02|0011000001111101⟩ + 0.02|0011000010000001⟩ + 0.02|0011000010010111⟩ + 0.02|0011000010100100⟩ + 0.02|0011000010111101⟩ + 0.02|0011000011000001⟩ + 0.02|0011000011010111⟩ + 0.02|0011000011100100⟩ + 0.02|0011000011111101⟩ + 0.02|0011000100000001⟩ + 0.02|0011000100010111⟩ + 0.02|0011000100100100⟩ + 0.02|0011000100111101⟩ + 0.02|0011000101000001⟩ + 0.02|0011000101010111⟩ + 0.02|0011000101100100⟩ + 0.02|0011000101111101⟩ + 0.02|0011000110000001⟩ + 0.02|0011000110010111⟩ + 0.02|0011000110100100⟩ + 0.02|0011000110111101⟩ + 0.02|0011000111000001⟩ + 0.02|0011000111010111⟩ + 0.02|0011000111100100⟩ + 0.02|0011000111111101⟩ + 0.02|0011001000000001⟩ + 0.02|0011001000010111⟩ + 0.02|0011001000100100⟩ + 0.02|0011001000111101⟩ + 0.02|0011001001000001⟩ + 0.02|0011001001010111⟩ + 0.02|0011001001100100⟩ + 0.02|0011001001111101⟩ + 0.02|0011001010000001⟩ + 0.02|0011001010010111⟩ + 0.02|0011001010100100⟩ + 0.02|0011001010111101⟩ + 0.02|0011001011000001⟩ + 0.02|0011001011010111⟩ + 0.02|0011001011100100⟩ + 0.02|0011001011111101⟩ + 0.02|0011001100000001⟩ + 0.02|0011001100010111⟩ + 0.02|0011001100100100⟩ + 0.02|0011001100111101⟩ + 0.02|0011001101000001⟩ + 0.02|0011001101010111⟩ + 0.02|0011001101100100⟩ + 0.02|0011001101111101⟩ + 0.02|0011001110000001⟩ + 0.02|0011001110010111⟩ + 0.02|0011001110100100⟩ + 0.02|0011001110111101⟩ + 0.02|0011001111000001⟩ + 0.02|0011001111010111⟩ + 0.02|0011001111100100⟩ + 0.02|0011001111111101⟩ + 0.02|0011010000000001⟩ + 0.02|0011010000010111⟩ + 0.02|0011010000100100⟩ + 0.02|0011010000111101⟩ + 0.02|0011010001000001⟩ + 0.02|0011010001010111⟩ + 0.02|0011010001100100⟩ + 0.02|0011010001111101⟩ + 0.02|0011010010000001⟩ + 0.02|0011010010010111⟩ + 0.02|0011010010100100⟩ + 0.02|0011010010111101⟩ + 0.02|0011010011000001⟩ + 0.02|0011010011010111⟩ + 0.02|0011010011100100⟩ + 0.02|0011010011111101⟩ + 0.02|0011010100000001⟩ + 0.02|0011010100010111⟩ + 0.02|0011010100100100⟩ + 0.02|0011010100111101⟩ + 
0.02|0011010101000001⟩ + 0.02|0011010101010111⟩ + 0.02|0011010101100100⟩ + 0.02|0011010101111101⟩ + 0.02|0011010110000001⟩ + 0.02|0011010110010111⟩ + 0.02|0011010110100100⟩ + 0.02|0011010110111101⟩ + 0.02|0011010111000001⟩ + 0.02|0011010111010111⟩ + 0.02|0011010111100100⟩ + 0.02|0011010111111101⟩ + 0.02|0011011000000001⟩ + 0.02|0011011000010111⟩ + 0.02|0011011000100100⟩ + 0.02|0011011000111101⟩ + 0.02|0011011001000001⟩ + 0.02|0011011001010111⟩ + 0.02|0011011001100100⟩ + 0.02|0011011001111101⟩ + 0.02|0011011010000001⟩ + 0.02|0011011010010111⟩ + 0.02|0011011010100100⟩ + 0.02|0011011010111101⟩ + 0.02|0011011011000001⟩ + 0.02|0011011011010111⟩ + 0.02|0011011011100100⟩ + 0.02|0011011011111101⟩ + 0.02|0011011100000001⟩ + 0.02|0011011100010111⟩ + 0.02|0011011100100100⟩ + 0.02|0011011100111101⟩ + 0.02|0011011101000001⟩ + 0.02|0011011101010111⟩ + 0.02|0011011101100100⟩ + 0.02|0011011101111101⟩ + 0.02|0011011110000001⟩ + 0.02|0011011110010111⟩ + 0.02|0011011110100100⟩ + 0.02|0011011110111101⟩ + 0.02|0011011111000001⟩ + 0.02|0011011111010111⟩ + 0.02|0011011111100100⟩ + 0.02|0011011111111101⟩ + 0.02|0011100000000001⟩ + 0.02|0011100000010111⟩ + 0.02|0011100000100100⟩ + 0.02|0011100000111101⟩ + 0.02|0011100001000001⟩ + 0.02|0011100001010111⟩ + 0.02|0011100001100100⟩ + 0.02|0011100001111101⟩ + 0.02|0011100010000001⟩ + 0.02|0011100010010111⟩ + 0.02|0011100010100100⟩ + 0.02|0011100010111101⟩ + 0.02|0011100011000001⟩ + 0.02|0011100011010111⟩ + 0.02|0011100011100100⟩ + 0.02|0011100011111101⟩ + 0.02|0011100100000001⟩ + 0.02|0011100100010111⟩ + 0.02|0011100100100100⟩ + 0.02|0011100100111101⟩ + 0.02|0011100101000001⟩ + 0.02|0011100101010111⟩ + 0.02|0011100101100100⟩ + 0.02|0011100101111101⟩ + 0.02|0011100110000001⟩ + 0.02|0011100110010111⟩ + 0.02|0011100110100100⟩ + 0.02|0011100110111101⟩ + 0.02|0011100111000001⟩ + 0.02|0011100111010111⟩ + 0.02|0011100111100100⟩ + 0.02|0011100111111101⟩ + 0.02|0011101000000001⟩ + 0.02|0011101000010111⟩ + 0.02|0011101000100100⟩ + 0.02|0011101000111101⟩ + 0.02|0011101001000001⟩ + 0.02|0011101001010111⟩ + 0.02|0011101001100100⟩ + 0.02|0011101001111101⟩ + 0.02|0011101010000001⟩ + 0.02|0011101010010111⟩ + 0.02|0011101010100100⟩ + 0.02|0011101010111101⟩ + 0.02|0011101011000001⟩ + 0.02|0011101011010111⟩ + 0.02|0011101011100100⟩ + 0.02|0011101011111101⟩ + 0.02|0011101100000001⟩ + 0.02|0011101100010111⟩ + 0.02|0011101100100100⟩ + 0.02|0011101100111101⟩ + 0.02|0011101101000001⟩ + 0.02|0011101101010111⟩ + 0.02|0011101101100100⟩ + 0.02|0011101101111101⟩ + 0.02|0011101110000001⟩ + 0.02|0011101110010111⟩ + 0.02|0011101110100100⟩ + 0.02|0011101110111101⟩ + 0.02|0011101111000001⟩ + 0.02|0011101111010111⟩ + 0.02|0011101111100100⟩ + 0.02|0011101111111101⟩ + 0.02|0011110000000001⟩ + 0.02|0011110000010111⟩ + 0.02|0011110000100100⟩ + 0.02|0011110000111101⟩ + 0.02|0011110001000001⟩ + 0.02|0011110001010111⟩ + 0.02|0011110001100100⟩ + 0.02|0011110001111101⟩ + 0.02|0011110010000001⟩ + 0.02|0011110010010111⟩ + 0.02|0011110010100100⟩ + 0.02|0011110010111101⟩ + 0.02|0011110011000001⟩ + 0.02|0011110011010111⟩ + 0.02|0011110011100100⟩ + 0.02|0011110011111101⟩ + 0.02|0011110100000001⟩ + 0.02|0011110100010111⟩ + 0.02|0011110100100100⟩ + 0.02|0011110100111101⟩ + 0.02|0011110101000001⟩ + 0.02|0011110101010111⟩ + 0.02|0011110101100100⟩ + 0.02|0011110101111101⟩ + 0.02|0011110110000001⟩ + 0.02|0011110110010111⟩ + 0.02|0011110110100100⟩ + 0.02|0011110110111101⟩ + 0.02|0011110111000001⟩ + 0.02|0011110111010111⟩ + 0.02|0011110111100100⟩ + 0.02|0011110111111101⟩ + 0.02|0011111000000001⟩ + 0.02|0011111000010111⟩ + 
0.02|0011111000100100⟩ + 0.02|0011111000111101⟩ + 0.02|0011111001000001⟩ + 0.02|0011111001010111⟩ + 0.02|0011111001100100⟩ + 0.02|0011111001111101⟩ + 0.02|0011111010000001⟩ + 0.02|0011111010010111⟩ + 0.02|0011111010100100⟩ + 0.02|0011111010111101⟩ + 0.02|0011111011000001⟩ + 0.02|0011111011010111⟩ + 0.02|0011111011100100⟩ + 0.02|0011111011111101⟩ + 0.02|0011111100000001⟩ + 0.02|0011111100010111⟩ + 0.02|0011111100100100⟩ + 0.02|0011111100111101⟩ + 0.02|0011111101000001⟩ + 0.02|0011111101010111⟩ + 0.02|0011111101100100⟩ + 0.02|0011111101111101⟩ + 0.02|0011111110000001⟩ + 0.02|0011111110010111⟩ + 0.02|0011111110100100⟩ + 0.02|0011111110111101⟩ + 0.02|0011111111000001⟩ + 0.02|0011111111010111⟩ + 0.02|0011111111100100⟩ + 0.02|0011111111111101⟩ + 0.02|0100000000000001⟩ + 0.02|0100000000010111⟩ + 0.02|0100000000100100⟩ + 0.02|0100000000111101⟩ + 0.02|0100000001000001⟩ + 0.02|0100000001010111⟩ + 0.02|0100000001100100⟩ + 0.02|0100000001111101⟩ + 0.02|0100000010000001⟩ + 0.02|0100000010010111⟩ + 0.02|0100000010100100⟩ + 0.02|0100000010111101⟩ + 0.02|0100000011000001⟩ + 0.02|0100000011010111⟩ + 0.02|0100000011100100⟩ + 0.02|0100000011111101⟩ + 0.02|0100000100000001⟩ + 0.02|0100000100010111⟩ + 0.02|0100000100100100⟩ + 0.02|0100000100111101⟩ + 0.02|0100000101000001⟩ + 0.02|0100000101010111⟩ + 0.02|0100000101100100⟩ + 0.02|0100000101111101⟩ + 0.02|0100000110000001⟩ + 0.02|0100000110010111⟩ + 0.02|0100000110100100⟩ + 0.02|0100000110111101⟩ + 0.02|0100000111000001⟩ + 0.02|0100000111010111⟩ + 0.02|0100000111100100⟩ + 0.02|0100000111111101⟩ + 0.02|0100001000000001⟩ + 0.02|0100001000010111⟩ + 0.02|0100001000100100⟩ + 0.02|0100001000111101⟩ + 0.02|0100001001000001⟩ + 0.02|0100001001010111⟩ + 0.02|0100001001100100⟩ + 0.02|0100001001111101⟩ + 0.02|0100001010000001⟩ + 0.02|0100001010010111⟩ + 0.02|0100001010100100⟩ + 0.02|0100001010111101⟩ + 0.02|0100001011000001⟩ + 0.02|0100001011010111⟩ + 0.02|0100001011100100⟩ + 0.02|0100001011111101⟩ + 0.02|0100001100000001⟩ + 0.02|0100001100010111⟩ + 0.02|0100001100100100⟩ + 0.02|0100001100111101⟩ + 0.02|0100001101000001⟩ + 0.02|0100001101010111⟩ + 0.02|0100001101100100⟩ + 0.02|0100001101111101⟩ + 0.02|0100001110000001⟩ + 0.02|0100001110010111⟩ + 0.02|0100001110100100⟩ + 0.02|0100001110111101⟩ + 0.02|0100001111000001⟩ + 0.02|0100001111010111⟩ + 0.02|0100001111100100⟩ + 0.02|0100001111111101⟩ + 0.02|0100010000000001⟩ + 0.02|0100010000010111⟩ + 0.02|0100010000100100⟩ + 0.02|0100010000111101⟩ + 0.02|0100010001000001⟩ + 0.02|0100010001010111⟩ + 0.02|0100010001100100⟩ + 0.02|0100010001111101⟩ + 0.02|0100010010000001⟩ + 0.02|0100010010010111⟩ + 0.02|0100010010100100⟩ + 0.02|0100010010111101⟩ + 0.02|0100010011000001⟩ + 0.02|0100010011010111⟩ + 0.02|0100010011100100⟩ + 0.02|0100010011111101⟩ + 0.02|0100010100000001⟩ + 0.02|0100010100010111⟩ + 0.02|0100010100100100⟩ + 0.02|0100010100111101⟩ + 0.02|0100010101000001⟩ + 0.02|0100010101010111⟩ + 0.02|0100010101100100⟩ + 0.02|0100010101111101⟩ + 0.02|0100010110000001⟩ + 0.02|0100010110010111⟩ + 0.02|0100010110100100⟩ + 0.02|0100010110111101⟩ + 0.02|0100010111000001⟩ + 0.02|0100010111010111⟩ + 0.02|0100010111100100⟩ + 0.02|0100010111111101⟩ + 0.02|0100011000000001⟩ + 0.02|0100011000010111⟩ + 0.02|0100011000100100⟩ + 0.02|0100011000111101⟩ + 0.02|0100011001000001⟩ + 0.02|0100011001010111⟩ + 0.02|0100011001100100⟩ + 0.02|0100011001111101⟩ + 0.02|0100011010000001⟩ + 0.02|0100011010010111⟩ + 0.02|0100011010100100⟩ + 0.02|0100011010111101⟩ + 0.02|0100011011000001⟩ + 0.02|0100011011010111⟩ + 0.02|0100011011100100⟩ + 0.02|0100011011111101⟩ + 
… (remaining terms of the statevector printout truncated for readability: every term has amplitude 0.02, the 16-bit basis labels continue through every 10-bit prefix up to 1111111111, and the low six bits of each label cycle through 000001, 010111, 100100, and 111101; the final printed term is 0.02|1111111111111101⟩)
###Markdown
Let's check the first 5 states. The first 12 bits represent the first register and the last 4 bits represent the second register: $\ket{000000000000}\ket{0001} = \ket{0}\ket{1}$, $\ket{000000000001}\ket{0111} = \ket{1}\ket{7}$, $\ket{000000000010}\ket{0100} = \ket{2}\ket{4}$, $\ket{000000000011}\ket{1101} = \ket{3}\ket{13}$, $\ket{000000000100}\ket{0001} = \ket{4}\ket{1}$. Task 3: Measure the second register and sample the circuit. Next, simulate the circuit and print the obtained state using dirac_notation(). Check the first five states and convert to integer representation. Solution
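As a quick aid for the "convert to integer representation" step, here is a small sketch (not part of the original solution code) that splits one of the printed 16-bit basis states into the 12-bit first register and the 4-bit second register by hand. The example state is taken from the output above.
```python
# Sketch: convert a basis state string from dirac_notation() by hand.
# Splitting the example state gives |3>|13>, i.e. x = 3 in the first register
# and f(x) = 13 in the second register.
state = '0000000000111101'                 # 16-bit basis state from the output above
x_bits, fx_bits = state[:12], state[12:]   # first register, second register
print(int(x_bits, 2), int(fx_bits, 2))     # prints: 3 13
```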
###Code
#Measure the target register
circuit.append(cirq.measure(*target, key='result'))
#Sample the circuit
print('Sample the circuit:')
samples=s.run(circuit, repetitions=1000)
# Print a histogram of results
print(samples.histogram(key='result'))
#Simulate the circuit. One of the outcomes is picked due to measurement
results=s.simulate(circuit)
print(results.dirac_notation())
###Output
Sample the circuit:
Counter({1: 276, 7: 245, 4: 240, 13: 239})
0.03|0000000000111101⟩ + 0.03|0000000001111101⟩ + 0.03|0000000010111101⟩ + 0.03|0000000011111101⟩ + 0.03|0000000100111101⟩ + 0.03|0000000101111101⟩ + 0.03|0000000110111101⟩ + 0.03|0000000111111101⟩ + 0.03|0000001000111101⟩ + 0.03|0000001001111101⟩ + 0.03|0000001010111101⟩ + 0.03|0000001011111101⟩ + 0.03|0000001100111101⟩ + 0.03|0000001101111101⟩ + 0.03|0000001110111101⟩ + 0.03|0000001111111101⟩ + 0.03|0000010000111101⟩ + 0.03|0000010001111101⟩ + 0.03|0000010010111101⟩ + 0.03|0000010011111101⟩ + 0.03|0000010100111101⟩ + 0.03|0000010101111101⟩ + 0.03|0000010110111101⟩ + 0.03|0000010111111101⟩ + 0.03|0000011000111101⟩ + 0.03|0000011001111101⟩ + 0.03|0000011010111101⟩ + 0.03|0000011011111101⟩ + 0.03|0000011100111101⟩ + 0.03|0000011101111101⟩ + 0.03|0000011110111101⟩ + 0.03|0000011111111101⟩ + 0.03|0000100000111101⟩ + 0.03|0000100001111101⟩ + 0.03|0000100010111101⟩ + 0.03|0000100011111101⟩ + 0.03|0000100100111101⟩ + 0.03|0000100101111101⟩ + 0.03|0000100110111101⟩ + 0.03|0000100111111101⟩ + 0.03|0000101000111101⟩ + 0.03|0000101001111101⟩ + 0.03|0000101010111101⟩ + 0.03|0000101011111101⟩ + 0.03|0000101100111101⟩ + 0.03|0000101101111101⟩ + 0.03|0000101110111101⟩ + 0.03|0000101111111101⟩ + 0.03|0000110000111101⟩ + 0.03|0000110001111101⟩ + 0.03|0000110010111101⟩ + 0.03|0000110011111101⟩ + 0.03|0000110100111101⟩ + 0.03|0000110101111101⟩ + 0.03|0000110110111101⟩ + 0.03|0000110111111101⟩ + 0.03|0000111000111101⟩ + 0.03|0000111001111101⟩ + 0.03|0000111010111101⟩ + 0.03|0000111011111101⟩ + 0.03|0000111100111101⟩ + 0.03|0000111101111101⟩ + 0.03|0000111110111101⟩ + 0.03|0000111111111101⟩ + 0.03|0001000000111101⟩ + 0.03|0001000001111101⟩ + 0.03|0001000010111101⟩ + 0.03|0001000011111101⟩ + 0.03|0001000100111101⟩ + 0.03|0001000101111101⟩ + 0.03|0001000110111101⟩ + 0.03|0001000111111101⟩ + 0.03|0001001000111101⟩ + 0.03|0001001001111101⟩ + 0.03|0001001010111101⟩ + 0.03|0001001011111101⟩ + 0.03|0001001100111101⟩ + 0.03|0001001101111101⟩ + 0.03|0001001110111101⟩ + 0.03|0001001111111101⟩ + 0.03|0001010000111101⟩ + 0.03|0001010001111101⟩ + 0.03|0001010010111101⟩ + 0.03|0001010011111101⟩ + 0.03|0001010100111101⟩ + 0.03|0001010101111101⟩ + 0.03|0001010110111101⟩ + 0.03|0001010111111101⟩ + 0.03|0001011000111101⟩ + 0.03|0001011001111101⟩ + 0.03|0001011010111101⟩ + 0.03|0001011011111101⟩ + 0.03|0001011100111101⟩ + 0.03|0001011101111101⟩ + 0.03|0001011110111101⟩ + 0.03|0001011111111101⟩ + 0.03|0001100000111101⟩ + 0.03|0001100001111101⟩ + 0.03|0001100010111101⟩ + 0.03|0001100011111101⟩ + 0.03|0001100100111101⟩ + 0.03|0001100101111101⟩ + 0.03|0001100110111101⟩ + 0.03|0001100111111101⟩ + 0.03|0001101000111101⟩ + 0.03|0001101001111101⟩ + 0.03|0001101010111101⟩ + 0.03|0001101011111101⟩ + 0.03|0001101100111101⟩ + 0.03|0001101101111101⟩ + 0.03|0001101110111101⟩ + 0.03|0001101111111101⟩ + 0.03|0001110000111101⟩ + 0.03|0001110001111101⟩ + 0.03|0001110010111101⟩ + 0.03|0001110011111101⟩ + 0.03|0001110100111101⟩ + 0.03|0001110101111101⟩ + 0.03|0001110110111101⟩ + 0.03|0001110111111101⟩ + 0.03|0001111000111101⟩ + 0.03|0001111001111101⟩ + 0.03|0001111010111101⟩ + 0.03|0001111011111101⟩ + 0.03|0001111100111101⟩ + 0.03|0001111101111101⟩ + 0.03|0001111110111101⟩ + 0.03|0001111111111101⟩ + 0.03|0010000000111101⟩ + 0.03|0010000001111101⟩ + 0.03|0010000010111101⟩ + 0.03|0010000011111101⟩ + 0.03|0010000100111101⟩ + 0.03|0010000101111101⟩ + 0.03|0010000110111101⟩ + 0.03|0010000111111101⟩ + 0.03|0010001000111101⟩ + 0.03|0010001001111101⟩ + 0.03|0010001010111101⟩ + 0.03|0010001011111101⟩ + 0.03|0010001100111101⟩ + 0.03|0010001101111101⟩ + 
0.03|0010001110111101⟩ + 0.03|0010001111111101⟩ + 0.03|0010010000111101⟩ + 0.03|0010010001111101⟩ + 0.03|0010010010111101⟩ + 0.03|0010010011111101⟩ + 0.03|0010010100111101⟩ + 0.03|0010010101111101⟩ + 0.03|0010010110111101⟩ + 0.03|0010010111111101⟩ + 0.03|0010011000111101⟩ + 0.03|0010011001111101⟩ + 0.03|0010011010111101⟩ + 0.03|0010011011111101⟩ + 0.03|0010011100111101⟩ + 0.03|0010011101111101⟩ + 0.03|0010011110111101⟩ + 0.03|0010011111111101⟩ + 0.03|0010100000111101⟩ + 0.03|0010100001111101⟩ + 0.03|0010100010111101⟩ + 0.03|0010100011111101⟩ + 0.03|0010100100111101⟩ + 0.03|0010100101111101⟩ + 0.03|0010100110111101⟩ + 0.03|0010100111111101⟩ + 0.03|0010101000111101⟩ + 0.03|0010101001111101⟩ + 0.03|0010101010111101⟩ + 0.03|0010101011111101⟩ + 0.03|0010101100111101⟩ + 0.03|0010101101111101⟩ + 0.03|0010101110111101⟩ + 0.03|0010101111111101⟩ + 0.03|0010110000111101⟩ + 0.03|0010110001111101⟩ + 0.03|0010110010111101⟩ + 0.03|0010110011111101⟩ + 0.03|0010110100111101⟩ + 0.03|0010110101111101⟩ + 0.03|0010110110111101⟩ + 0.03|0010110111111101⟩ + 0.03|0010111000111101⟩ + 0.03|0010111001111101⟩ + 0.03|0010111010111101⟩ + 0.03|0010111011111101⟩ + 0.03|0010111100111101⟩ + 0.03|0010111101111101⟩ + 0.03|0010111110111101⟩ + 0.03|0010111111111101⟩ + 0.03|0011000000111101⟩ + 0.03|0011000001111101⟩ + 0.03|0011000010111101⟩ + 0.03|0011000011111101⟩ + 0.03|0011000100111101⟩ + 0.03|0011000101111101⟩ + 0.03|0011000110111101⟩ + 0.03|0011000111111101⟩ + 0.03|0011001000111101⟩ + 0.03|0011001001111101⟩ + 0.03|0011001010111101⟩ + 0.03|0011001011111101⟩ + 0.03|0011001100111101⟩ + 0.03|0011001101111101⟩ + 0.03|0011001110111101⟩ + 0.03|0011001111111101⟩ + 0.03|0011010000111101⟩ + 0.03|0011010001111101⟩ + 0.03|0011010010111101⟩ + 0.03|0011010011111101⟩ + 0.03|0011010100111101⟩ + 0.03|0011010101111101⟩ + 0.03|0011010110111101⟩ + 0.03|0011010111111101⟩ + 0.03|0011011000111101⟩ + 0.03|0011011001111101⟩ + 0.03|0011011010111101⟩ + 0.03|0011011011111101⟩ + 0.03|0011011100111101⟩ + 0.03|0011011101111101⟩ + 0.03|0011011110111101⟩ + 0.03|0011011111111101⟩ + 0.03|0011100000111101⟩ + 0.03|0011100001111101⟩ + 0.03|0011100010111101⟩ + 0.03|0011100011111101⟩ + 0.03|0011100100111101⟩ + 0.03|0011100101111101⟩ + 0.03|0011100110111101⟩ + 0.03|0011100111111101⟩ + 0.03|0011101000111101⟩ + 0.03|0011101001111101⟩ + 0.03|0011101010111101⟩ + 0.03|0011101011111101⟩ + 0.03|0011101100111101⟩ + 0.03|0011101101111101⟩ + 0.03|0011101110111101⟩ + 0.03|0011101111111101⟩ + 0.03|0011110000111101⟩ + 0.03|0011110001111101⟩ + 0.03|0011110010111101⟩ + 0.03|0011110011111101⟩ + 0.03|0011110100111101⟩ + 0.03|0011110101111101⟩ + 0.03|0011110110111101⟩ + 0.03|0011110111111101⟩ + 0.03|0011111000111101⟩ + 0.03|0011111001111101⟩ + 0.03|0011111010111101⟩ + 0.03|0011111011111101⟩ + 0.03|0011111100111101⟩ + 0.03|0011111101111101⟩ + 0.03|0011111110111101⟩ + 0.03|0011111111111101⟩ + 0.03|0100000000111101⟩ + 0.03|0100000001111101⟩ + 0.03|0100000010111101⟩ + 0.03|0100000011111101⟩ + 0.03|0100000100111101⟩ + 0.03|0100000101111101⟩ + 0.03|0100000110111101⟩ + 0.03|0100000111111101⟩ + 0.03|0100001000111101⟩ + 0.03|0100001001111101⟩ + 0.03|0100001010111101⟩ + 0.03|0100001011111101⟩ + 0.03|0100001100111101⟩ + 0.03|0100001101111101⟩ + 0.03|0100001110111101⟩ + 0.03|0100001111111101⟩ + 0.03|0100010000111101⟩ + 0.03|0100010001111101⟩ + 0.03|0100010010111101⟩ + 0.03|0100010011111101⟩ + 0.03|0100010100111101⟩ + 0.03|0100010101111101⟩ + 0.03|0100010110111101⟩ + 0.03|0100010111111101⟩ + 0.03|0100011000111101⟩ + 0.03|0100011001111101⟩ + 0.03|0100011010111101⟩ + 0.03|0100011011111101⟩ + 
0.03|0100011100111101⟩ + 0.03|0100011101111101⟩ + 0.03|0100011110111101⟩ + 0.03|0100011111111101⟩ + 0.03|0100100000111101⟩ + 0.03|0100100001111101⟩ + 0.03|0100100010111101⟩ + 0.03|0100100011111101⟩ + 0.03|0100100100111101⟩ + 0.03|0100100101111101⟩ + 0.03|0100100110111101⟩ + 0.03|0100100111111101⟩ + 0.03|0100101000111101⟩ + 0.03|0100101001111101⟩ + 0.03|0100101010111101⟩ + 0.03|0100101011111101⟩ + 0.03|0100101100111101⟩ + 0.03|0100101101111101⟩ + 0.03|0100101110111101⟩ + 0.03|0100101111111101⟩ + 0.03|0100110000111101⟩ + 0.03|0100110001111101⟩ + 0.03|0100110010111101⟩ + 0.03|0100110011111101⟩ + 0.03|0100110100111101⟩ + 0.03|0100110101111101⟩ + 0.03|0100110110111101⟩ + 0.03|0100110111111101⟩ + 0.03|0100111000111101⟩ + 0.03|0100111001111101⟩ + 0.03|0100111010111101⟩ + 0.03|0100111011111101⟩ + 0.03|0100111100111101⟩ + 0.03|0100111101111101⟩ + 0.03|0100111110111101⟩ + 0.03|0100111111111101⟩ + 0.03|0101000000111101⟩ + 0.03|0101000001111101⟩ + 0.03|0101000010111101⟩ + 0.03|0101000011111101⟩ + 0.03|0101000100111101⟩ + 0.03|0101000101111101⟩ + 0.03|0101000110111101⟩ + 0.03|0101000111111101⟩ + 0.03|0101001000111101⟩ + 0.03|0101001001111101⟩ + 0.03|0101001010111101⟩ + 0.03|0101001011111101⟩ + 0.03|0101001100111101⟩ + 0.03|0101001101111101⟩ + 0.03|0101001110111101⟩ + 0.03|0101001111111101⟩ + 0.03|0101010000111101⟩ + 0.03|0101010001111101⟩ + 0.03|0101010010111101⟩ + 0.03|0101010011111101⟩ + 0.03|0101010100111101⟩ + 0.03|0101010101111101⟩ + 0.03|0101010110111101⟩ + 0.03|0101010111111101⟩ + 0.03|0101011000111101⟩ + 0.03|0101011001111101⟩ + 0.03|0101011010111101⟩ + 0.03|0101011011111101⟩ + 0.03|0101011100111101⟩ + 0.03|0101011101111101⟩ + 0.03|0101011110111101⟩ + 0.03|0101011111111101⟩ + 0.03|0101100000111101⟩ + 0.03|0101100001111101⟩ + 0.03|0101100010111101⟩ + 0.03|0101100011111101⟩ + 0.03|0101100100111101⟩ + 0.03|0101100101111101⟩ + 0.03|0101100110111101⟩ + 0.03|0101100111111101⟩ + 0.03|0101101000111101⟩ + 0.03|0101101001111101⟩ + 0.03|0101101010111101⟩ + 0.03|0101101011111101⟩ + 0.03|0101101100111101⟩ + 0.03|0101101101111101⟩ + 0.03|0101101110111101⟩ + 0.03|0101101111111101⟩ + 0.03|0101110000111101⟩ + 0.03|0101110001111101⟩ + 0.03|0101110010111101⟩ + 0.03|0101110011111101⟩ + 0.03|0101110100111101⟩ + 0.03|0101110101111101⟩ + 0.03|0101110110111101⟩ + 0.03|0101110111111101⟩ + 0.03|0101111000111101⟩ + 0.03|0101111001111101⟩ + 0.03|0101111010111101⟩ + 0.03|0101111011111101⟩ + 0.03|0101111100111101⟩ + 0.03|0101111101111101⟩ + 0.03|0101111110111101⟩ + 0.03|0101111111111101⟩ + 0.03|0110000000111101⟩ + 0.03|0110000001111101⟩ + 0.03|0110000010111101⟩ + 0.03|0110000011111101⟩ + 0.03|0110000100111101⟩ + 0.03|0110000101111101⟩ + 0.03|0110000110111101⟩ + 0.03|0110000111111101⟩ + 0.03|0110001000111101⟩ + 0.03|0110001001111101⟩ + 0.03|0110001010111101⟩ + 0.03|0110001011111101⟩ + 0.03|0110001100111101⟩ + 0.03|0110001101111101⟩ + 0.03|0110001110111101⟩ + 0.03|0110001111111101⟩ + 0.03|0110010000111101⟩ + 0.03|0110010001111101⟩ + 0.03|0110010010111101⟩ + 0.03|0110010011111101⟩ + 0.03|0110010100111101⟩ + 0.03|0110010101111101⟩ + 0.03|0110010110111101⟩ + 0.03|0110010111111101⟩ + 0.03|0110011000111101⟩ + 0.03|0110011001111101⟩ + 0.03|0110011010111101⟩ + 0.03|0110011011111101⟩ + 0.03|0110011100111101⟩ + 0.03|0110011101111101⟩ + 0.03|0110011110111101⟩ + 0.03|0110011111111101⟩ + 0.03|0110100000111101⟩ + 0.03|0110100001111101⟩ + 0.03|0110100010111101⟩ + 0.03|0110100011111101⟩ + 0.03|0110100100111101⟩ + 0.03|0110100101111101⟩ + 0.03|0110100110111101⟩ + 0.03|0110100111111101⟩ + 0.03|0110101000111101⟩ + 0.03|0110101001111101⟩ + 
0.03|0110101010111101⟩ + 0.03|0110101011111101⟩ + 0.03|0110101100111101⟩ + 0.03|0110101101111101⟩ + 0.03|0110101110111101⟩ + 0.03|0110101111111101⟩ + 0.03|0110110000111101⟩ + 0.03|0110110001111101⟩ + 0.03|0110110010111101⟩ + 0.03|0110110011111101⟩ + 0.03|0110110100111101⟩ + 0.03|0110110101111101⟩ + 0.03|0110110110111101⟩ + 0.03|0110110111111101⟩ + 0.03|0110111000111101⟩ + 0.03|0110111001111101⟩ + 0.03|0110111010111101⟩ + 0.03|0110111011111101⟩ + 0.03|0110111100111101⟩ + 0.03|0110111101111101⟩ + 0.03|0110111110111101⟩ + 0.03|0110111111111101⟩ + 0.03|0111000000111101⟩ + 0.03|0111000001111101⟩ + 0.03|0111000010111101⟩ + 0.03|0111000011111101⟩ + 0.03|0111000100111101⟩ + 0.03|0111000101111101⟩ + 0.03|0111000110111101⟩ + 0.03|0111000111111101⟩ + 0.03|0111001000111101⟩ + 0.03|0111001001111101⟩ + 0.03|0111001010111101⟩ + 0.03|0111001011111101⟩ + 0.03|0111001100111101⟩ + 0.03|0111001101111101⟩ + 0.03|0111001110111101⟩ + 0.03|0111001111111101⟩ + 0.03|0111010000111101⟩ + 0.03|0111010001111101⟩ + 0.03|0111010010111101⟩ + 0.03|0111010011111101⟩ + 0.03|0111010100111101⟩ + 0.03|0111010101111101⟩ + 0.03|0111010110111101⟩ + 0.03|0111010111111101⟩ + 0.03|0111011000111101⟩ + 0.03|0111011001111101⟩ + 0.03|0111011010111101⟩ + 0.03|0111011011111101⟩ + 0.03|0111011100111101⟩ + 0.03|0111011101111101⟩ + 0.03|0111011110111101⟩ + 0.03|0111011111111101⟩ + 0.03|0111100000111101⟩ + 0.03|0111100001111101⟩ + 0.03|0111100010111101⟩ + 0.03|0111100011111101⟩ + 0.03|0111100100111101⟩ + 0.03|0111100101111101⟩ + 0.03|0111100110111101⟩ + 0.03|0111100111111101⟩ + 0.03|0111101000111101⟩ + 0.03|0111101001111101⟩ + 0.03|0111101010111101⟩ + 0.03|0111101011111101⟩ + 0.03|0111101100111101⟩ + 0.03|0111101101111101⟩ + 0.03|0111101110111101⟩ + 0.03|0111101111111101⟩ + 0.03|0111110000111101⟩ + 0.03|0111110001111101⟩ + 0.03|0111110010111101⟩ + 0.03|0111110011111101⟩ + 0.03|0111110100111101⟩ + 0.03|0111110101111101⟩ + 0.03|0111110110111101⟩ + 0.03|0111110111111101⟩ + 0.03|0111111000111101⟩ + 0.03|0111111001111101⟩ + 0.03|0111111010111101⟩ + 0.03|0111111011111101⟩ + 0.03|0111111100111101⟩ + 0.03|0111111101111101⟩ + 0.03|0111111110111101⟩ + 0.03|0111111111111101⟩ + 0.03|1000000000111101⟩ + 0.03|1000000001111101⟩ + 0.03|1000000010111101⟩ + 0.03|1000000011111101⟩ + 0.03|1000000100111101⟩ + 0.03|1000000101111101⟩ + 0.03|1000000110111101⟩ + 0.03|1000000111111101⟩ + 0.03|1000001000111101⟩ + 0.03|1000001001111101⟩ + 0.03|1000001010111101⟩ + 0.03|1000001011111101⟩ + 0.03|1000001100111101⟩ + 0.03|1000001101111101⟩ + 0.03|1000001110111101⟩ + 0.03|1000001111111101⟩ + 0.03|1000010000111101⟩ + 0.03|1000010001111101⟩ + 0.03|1000010010111101⟩ + 0.03|1000010011111101⟩ + 0.03|1000010100111101⟩ + 0.03|1000010101111101⟩ + 0.03|1000010110111101⟩ + 0.03|1000010111111101⟩ + 0.03|1000011000111101⟩ + 0.03|1000011001111101⟩ + 0.03|1000011010111101⟩ + 0.03|1000011011111101⟩ + 0.03|1000011100111101⟩ + 0.03|1000011101111101⟩ + 0.03|1000011110111101⟩ + 0.03|1000011111111101⟩ + 0.03|1000100000111101⟩ + 0.03|1000100001111101⟩ + 0.03|1000100010111101⟩ + 0.03|1000100011111101⟩ + 0.03|1000100100111101⟩ + 0.03|1000100101111101⟩ + 0.03|1000100110111101⟩ + 0.03|1000100111111101⟩ + 0.03|1000101000111101⟩ + 0.03|1000101001111101⟩ + 0.03|1000101010111101⟩ + 0.03|1000101011111101⟩ + 0.03|1000101100111101⟩ + 0.03|1000101101111101⟩ + 0.03|1000101110111101⟩ + 0.03|1000101111111101⟩ + 0.03|1000110000111101⟩ + 0.03|1000110001111101⟩ + 0.03|1000110010111101⟩ + 0.03|1000110011111101⟩ + 0.03|1000110100111101⟩ + 0.03|1000110101111101⟩ + 0.03|1000110110111101⟩ + 0.03|1000110111111101⟩ + 
0.03|1000111000111101⟩ + 0.03|1000111001111101⟩ + 0.03|1000111010111101⟩ + 0.03|1000111011111101⟩ + 0.03|1000111100111101⟩ + 0.03|1000111101111101⟩ + 0.03|1000111110111101⟩ + 0.03|1000111111111101⟩ + 0.03|1001000000111101⟩ + 0.03|1001000001111101⟩ + 0.03|1001000010111101⟩ + 0.03|1001000011111101⟩ + 0.03|1001000100111101⟩ + 0.03|1001000101111101⟩ + 0.03|1001000110111101⟩ + 0.03|1001000111111101⟩ + 0.03|1001001000111101⟩ + 0.03|1001001001111101⟩ + 0.03|1001001010111101⟩ + 0.03|1001001011111101⟩ + 0.03|1001001100111101⟩ + 0.03|1001001101111101⟩ + 0.03|1001001110111101⟩ + 0.03|1001001111111101⟩ + 0.03|1001010000111101⟩ + 0.03|1001010001111101⟩ + 0.03|1001010010111101⟩ + 0.03|1001010011111101⟩ + 0.03|1001010100111101⟩ + 0.03|1001010101111101⟩ + 0.03|1001010110111101⟩ + 0.03|1001010111111101⟩ + 0.03|1001011000111101⟩ + 0.03|1001011001111101⟩ + 0.03|1001011010111101⟩ + 0.03|1001011011111101⟩ + 0.03|1001011100111101⟩ + 0.03|1001011101111101⟩ + 0.03|1001011110111101⟩ + 0.03|1001011111111101⟩ + 0.03|1001100000111101⟩ + 0.03|1001100001111101⟩ + 0.03|1001100010111101⟩ + 0.03|1001100011111101⟩ + 0.03|1001100100111101⟩ + 0.03|1001100101111101⟩ + 0.03|1001100110111101⟩ + 0.03|1001100111111101⟩ + 0.03|1001101000111101⟩ + 0.03|1001101001111101⟩ + 0.03|1001101010111101⟩ + 0.03|1001101011111101⟩ + 0.03|1001101100111101⟩ + 0.03|1001101101111101⟩ + 0.03|1001101110111101⟩ + 0.03|1001101111111101⟩ + 0.03|1001110000111101⟩ + 0.03|1001110001111101⟩ + 0.03|1001110010111101⟩ + 0.03|1001110011111101⟩ + 0.03|1001110100111101⟩ + 0.03|1001110101111101⟩ + 0.03|1001110110111101⟩ + 0.03|1001110111111101⟩ + 0.03|1001111000111101⟩ + 0.03|1001111001111101⟩ + 0.03|1001111010111101⟩ + 0.03|1001111011111101⟩ + 0.03|1001111100111101⟩ + 0.03|1001111101111101⟩ + 0.03|1001111110111101⟩ + 0.03|1001111111111101⟩ + 0.03|1010000000111101⟩ + 0.03|1010000001111101⟩ + 0.03|1010000010111101⟩ + 0.03|1010000011111101⟩ + 0.03|1010000100111101⟩ + 0.03|1010000101111101⟩ + 0.03|1010000110111101⟩ + 0.03|1010000111111101⟩ + 0.03|1010001000111101⟩ + 0.03|1010001001111101⟩ + 0.03|1010001010111101⟩ + 0.03|1010001011111101⟩ + 0.03|1010001100111101⟩ + 0.03|1010001101111101⟩ + 0.03|1010001110111101⟩ + 0.03|1010001111111101⟩ + 0.03|1010010000111101⟩ + 0.03|1010010001111101⟩ + 0.03|1010010010111101⟩ + 0.03|1010010011111101⟩ + 0.03|1010010100111101⟩ + 0.03|1010010101111101⟩ + 0.03|1010010110111101⟩ + 0.03|1010010111111101⟩ + 0.03|1010011000111101⟩ + 0.03|1010011001111101⟩ + 0.03|1010011010111101⟩ + 0.03|1010011011111101⟩ + 0.03|1010011100111101⟩ + 0.03|1010011101111101⟩ + 0.03|1010011110111101⟩ + 0.03|1010011111111101⟩ + 0.03|1010100000111101⟩ + 0.03|1010100001111101⟩ + 0.03|1010100010111101⟩ + 0.03|1010100011111101⟩ + 0.03|1010100100111101⟩ + 0.03|1010100101111101⟩ + 0.03|1010100110111101⟩ + 0.03|1010100111111101⟩ + 0.03|1010101000111101⟩ + 0.03|1010101001111101⟩ + 0.03|1010101010111101⟩ + 0.03|1010101011111101⟩ + 0.03|1010101100111101⟩ + 0.03|1010101101111101⟩ + 0.03|1010101110111101⟩ + 0.03|1010101111111101⟩ + 0.03|1010110000111101⟩ + 0.03|1010110001111101⟩ + 0.03|1010110010111101⟩ + 0.03|1010110011111101⟩ + 0.03|1010110100111101⟩ + 0.03|1010110101111101⟩ + 0.03|1010110110111101⟩ + 0.03|1010110111111101⟩ + 0.03|1010111000111101⟩ + 0.03|1010111001111101⟩ + 0.03|1010111010111101⟩ + 0.03|1010111011111101⟩ + 0.03|1010111100111101⟩ + 0.03|1010111101111101⟩ + 0.03|1010111110111101⟩ + 0.03|1010111111111101⟩ + 0.03|1011000000111101⟩ + 0.03|1011000001111101⟩ + 0.03|1011000010111101⟩ + 0.03|1011000011111101⟩ + 0.03|1011000100111101⟩ + 0.03|1011000101111101⟩ + 
0.03|1011000110111101⟩ + 0.03|1011000111111101⟩ + 0.03|1011001000111101⟩ + 0.03|1011001001111101⟩ + 0.03|1011001010111101⟩ + 0.03|1011001011111101⟩ + 0.03|1011001100111101⟩ + 0.03|1011001101111101⟩ + 0.03|1011001110111101⟩ + 0.03|1011001111111101⟩ + 0.03|1011010000111101⟩ + 0.03|1011010001111101⟩ + 0.03|1011010010111101⟩ + 0.03|1011010011111101⟩ + 0.03|1011010100111101⟩ + 0.03|1011010101111101⟩ + 0.03|1011010110111101⟩ + 0.03|1011010111111101⟩ + 0.03|1011011000111101⟩ + 0.03|1011011001111101⟩ + 0.03|1011011010111101⟩ + 0.03|1011011011111101⟩ + 0.03|1011011100111101⟩ + 0.03|1011011101111101⟩ + 0.03|1011011110111101⟩ + 0.03|1011011111111101⟩ + 0.03|1011100000111101⟩ + 0.03|1011100001111101⟩ + 0.03|1011100010111101⟩ + 0.03|1011100011111101⟩ + 0.03|1011100100111101⟩ + 0.03|1011100101111101⟩ + 0.03|1011100110111101⟩ + 0.03|1011100111111101⟩ + 0.03|1011101000111101⟩ + 0.03|1011101001111101⟩ + 0.03|1011101010111101⟩ + 0.03|1011101011111101⟩ + 0.03|1011101100111101⟩ + 0.03|1011101101111101⟩ + 0.03|1011101110111101⟩ + 0.03|1011101111111101⟩ + 0.03|1011110000111101⟩ + 0.03|1011110001111101⟩ + 0.03|1011110010111101⟩ + 0.03|1011110011111101⟩ + 0.03|1011110100111101⟩ + 0.03|1011110101111101⟩ + 0.03|1011110110111101⟩ + 0.03|1011110111111101⟩ + 0.03|1011111000111101⟩ + 0.03|1011111001111101⟩ + 0.03|1011111010111101⟩ + 0.03|1011111011111101⟩ + 0.03|1011111100111101⟩ + 0.03|1011111101111101⟩ + 0.03|1011111110111101⟩ + 0.03|1011111111111101⟩ + 0.03|1100000000111101⟩ + 0.03|1100000001111101⟩ + 0.03|1100000010111101⟩ + 0.03|1100000011111101⟩ + 0.03|1100000100111101⟩ + 0.03|1100000101111101⟩ + 0.03|1100000110111101⟩ + 0.03|1100000111111101⟩ + 0.03|1100001000111101⟩ + 0.03|1100001001111101⟩ + 0.03|1100001010111101⟩ + 0.03|1100001011111101⟩ + 0.03|1100001100111101⟩ + 0.03|1100001101111101⟩ + 0.03|1100001110111101⟩ + 0.03|1100001111111101⟩ + 0.03|1100010000111101⟩ + 0.03|1100010001111101⟩ + 0.03|1100010010111101⟩ + 0.03|1100010011111101⟩ + 0.03|1100010100111101⟩ + 0.03|1100010101111101⟩ + 0.03|1100010110111101⟩ + 0.03|1100010111111101⟩ + 0.03|1100011000111101⟩ + 0.03|1100011001111101⟩ + 0.03|1100011010111101⟩ + 0.03|1100011011111101⟩ + 0.03|1100011100111101⟩ + 0.03|1100011101111101⟩ + 0.03|1100011110111101⟩ + 0.03|1100011111111101⟩ + 0.03|1100100000111101⟩ + 0.03|1100100001111101⟩ + 0.03|1100100010111101⟩ + 0.03|1100100011111101⟩ + 0.03|1100100100111101⟩ + 0.03|1100100101111101⟩ + 0.03|1100100110111101⟩ + 0.03|1100100111111101⟩ + 0.03|1100101000111101⟩ + 0.03|1100101001111101⟩ + 0.03|1100101010111101⟩ + 0.03|1100101011111101⟩ + 0.03|1100101100111101⟩ + 0.03|1100101101111101⟩ + 0.03|1100101110111101⟩ + 0.03|1100101111111101⟩ + 0.03|1100110000111101⟩ + 0.03|1100110001111101⟩ + 0.03|1100110010111101⟩ + 0.03|1100110011111101⟩ + 0.03|1100110100111101⟩ + 0.03|1100110101111101⟩ + 0.03|1100110110111101⟩ + 0.03|1100110111111101⟩ + 0.03|1100111000111101⟩ + 0.03|1100111001111101⟩ + 0.03|1100111010111101⟩ + 0.03|1100111011111101⟩ + 0.03|1100111100111101⟩ + 0.03|1100111101111101⟩ + 0.03|1100111110111101⟩ + 0.03|1100111111111101⟩ + 0.03|1101000000111101⟩ + 0.03|1101000001111101⟩ + 0.03|1101000010111101⟩ + 0.03|1101000011111101⟩ + 0.03|1101000100111101⟩ + 0.03|1101000101111101⟩ + 0.03|1101000110111101⟩ + 0.03|1101000111111101⟩ + 0.03|1101001000111101⟩ + 0.03|1101001001111101⟩ + 0.03|1101001010111101⟩ + 0.03|1101001011111101⟩ + 0.03|1101001100111101⟩ + 0.03|1101001101111101⟩ + 0.03|1101001110111101⟩ + 0.03|1101001111111101⟩ + 0.03|1101010000111101⟩ + 0.03|1101010001111101⟩ + 0.03|1101010010111101⟩ + 0.03|1101010011111101⟩ + 
0.03|1101010100111101⟩ + 0.03|1101010101111101⟩ + 0.03|1101010110111101⟩ + 0.03|1101010111111101⟩ + 0.03|1101011000111101⟩ + 0.03|1101011001111101⟩ + 0.03|1101011010111101⟩ + 0.03|1101011011111101⟩ + 0.03|1101011100111101⟩ + 0.03|1101011101111101⟩ + 0.03|1101011110111101⟩ + 0.03|1101011111111101⟩ + 0.03|1101100000111101⟩ + 0.03|1101100001111101⟩ + 0.03|1101100010111101⟩ + 0.03|1101100011111101⟩ + 0.03|1101100100111101⟩ + 0.03|1101100101111101⟩ + 0.03|1101100110111101⟩ + 0.03|1101100111111101⟩ + 0.03|1101101000111101⟩ + 0.03|1101101001111101⟩ + 0.03|1101101010111101⟩ + 0.03|1101101011111101⟩ + 0.03|1101101100111101⟩ + 0.03|1101101101111101⟩ + 0.03|1101101110111101⟩ + 0.03|1101101111111101⟩ + 0.03|1101110000111101⟩ + 0.03|1101110001111101⟩ + 0.03|1101110010111101⟩ + 0.03|1101110011111101⟩ + 0.03|1101110100111101⟩ + 0.03|1101110101111101⟩ + 0.03|1101110110111101⟩ + 0.03|1101110111111101⟩ + 0.03|1101111000111101⟩ + 0.03|1101111001111101⟩ + 0.03|1101111010111101⟩ + 0.03|1101111011111101⟩ + 0.03|1101111100111101⟩ + 0.03|1101111101111101⟩ + 0.03|1101111110111101⟩ + 0.03|1101111111111101⟩ + 0.03|1110000000111101⟩ + 0.03|1110000001111101⟩ + 0.03|1110000010111101⟩ + 0.03|1110000011111101⟩ + 0.03|1110000100111101⟩ + 0.03|1110000101111101⟩ + 0.03|1110000110111101⟩ + 0.03|1110000111111101⟩ + 0.03|1110001000111101⟩ + 0.03|1110001001111101⟩ + 0.03|1110001010111101⟩ + 0.03|1110001011111101⟩ + 0.03|1110001100111101⟩ + 0.03|1110001101111101⟩ + 0.03|1110001110111101⟩ + 0.03|1110001111111101⟩ + 0.03|1110010000111101⟩ + 0.03|1110010001111101⟩ + 0.03|1110010010111101⟩ + 0.03|1110010011111101⟩ + 0.03|1110010100111101⟩ + 0.03|1110010101111101⟩ + 0.03|1110010110111101⟩ + 0.03|1110010111111101⟩ + 0.03|1110011000111101⟩ + 0.03|1110011001111101⟩ + 0.03|1110011010111101⟩ + 0.03|1110011011111101⟩ + 0.03|1110011100111101⟩ + 0.03|1110011101111101⟩ + 0.03|1110011110111101⟩ + 0.03|1110011111111101⟩ + 0.03|1110100000111101⟩ + 0.03|1110100001111101⟩ + 0.03|1110100010111101⟩ + 0.03|1110100011111101⟩ + 0.03|1110100100111101⟩ + 0.03|1110100101111101⟩ + 0.03|1110100110111101⟩ + 0.03|1110100111111101⟩ + 0.03|1110101000111101⟩ + 0.03|1110101001111101⟩ + 0.03|1110101010111101⟩ + 0.03|1110101011111101⟩ + 0.03|1110101100111101⟩ + 0.03|1110101101111101⟩ + 0.03|1110101110111101⟩ + 0.03|1110101111111101⟩ + 0.03|1110110000111101⟩ + 0.03|1110110001111101⟩ + 0.03|1110110010111101⟩ + 0.03|1110110011111101⟩ + 0.03|1110110100111101⟩ + 0.03|1110110101111101⟩ + 0.03|1110110110111101⟩ + 0.03|1110110111111101⟩ + 0.03|1110111000111101⟩ + 0.03|1110111001111101⟩ + 0.03|1110111010111101⟩ + 0.03|1110111011111101⟩ + 0.03|1110111100111101⟩ + 0.03|1110111101111101⟩ + 0.03|1110111110111101⟩ + 0.03|1110111111111101⟩ + 0.03|1111000000111101⟩ + 0.03|1111000001111101⟩ + 0.03|1111000010111101⟩ + 0.03|1111000011111101⟩ + 0.03|1111000100111101⟩ + 0.03|1111000101111101⟩ + 0.03|1111000110111101⟩ + 0.03|1111000111111101⟩ + 0.03|1111001000111101⟩ + 0.03|1111001001111101⟩ + 0.03|1111001010111101⟩ + 0.03|1111001011111101⟩ + 0.03|1111001100111101⟩ + 0.03|1111001101111101⟩ + 0.03|1111001110111101⟩ + 0.03|1111001111111101⟩ + 0.03|1111010000111101⟩ + 0.03|1111010001111101⟩ + 0.03|1111010010111101⟩ + 0.03|1111010011111101⟩ + 0.03|1111010100111101⟩ + 0.03|1111010101111101⟩ + 0.03|1111010110111101⟩ + 0.03|1111010111111101⟩ + 0.03|1111011000111101⟩ + 0.03|1111011001111101⟩ + 0.03|1111011010111101⟩ + 0.03|1111011011111101⟩ + 0.03|1111011100111101⟩ + 0.03|1111011101111101⟩ + 0.03|1111011110111101⟩ + 0.03|1111011111111101⟩ + 0.03|1111100000111101⟩ + 0.03|1111100001111101⟩ + 
0.03|1111100010111101⟩ + 0.03|1111100011111101⟩ + 0.03|1111100100111101⟩ + 0.03|1111100101111101⟩ + 0.03|1111100110111101⟩ + 0.03|1111100111111101⟩ + 0.03|1111101000111101⟩ + 0.03|1111101001111101⟩ + 0.03|1111101010111101⟩ + 0.03|1111101011111101⟩ + 0.03|1111101100111101⟩ + 0.03|1111101101111101⟩ + 0.03|1111101110111101⟩ + 0.03|1111101111111101⟩ + 0.03|1111110000111101⟩ + 0.03|1111110001111101⟩ + 0.03|1111110010111101⟩ + 0.03|1111110011111101⟩ + 0.03|1111110100111101⟩ + 0.03|1111110101111101⟩ + 0.03|1111110110111101⟩ + 0.03|1111110111111101⟩ + 0.03|1111111000111101⟩ + 0.03|1111111001111101⟩ + 0.03|1111111010111101⟩ + 0.03|1111111011111101⟩ + 0.03|1111111100111101⟩ + 0.03|1111111101111101⟩ + 0.03|1111111110111101⟩ + 0.03|1111111111111101⟩
###Markdown
In our case, the second register is measured as $\ket{1101}$, which is $\ket{13}$. Note that you might get a different measurement result. Let's check the first 5 states: $\ket{000000000011}=\ket{3}$, $\ket{000000000111} = \ket{7}$, $\ket{000000001011} = \ket{11}$, $\ket{000000001111} = \ket{15}$, $\ket{000000010011} = \ket{19}$. Task 4: Apply $QFT^{\dagger}$ to the first register and measure it (don't forget to provide a different key for the measurement). Sample the circuit and check whether you get the outcomes 0, 1024, 2048 and 3072. Solution
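Before running the solution, it is worth checking why these particular outcomes are expected. The sketch below is an illustration, not part of the original notebook; it only uses facts stated above: the first register has $t = 12$ qubits, and the surviving states $\ket{3}, \ket{7}, \ket{11}, \ket{15}, \ket{19}, \dots$ are spaced by $r = 4$, so after the inverse QFT the measurement concentrates on multiples of $2^t / r$.
```python
# Sketch: expected peak locations after the inverse QFT.
# t = 12 control qubits (first register); r = 4 is the spacing of the
# surviving states |3>, |7>, |11>, ... listed above.
t, r = 12, 4
expected_peaks = [k * 2**t // r for k in range(r)]
print(expected_peaks)   # [0, 1024, 2048, 3072]
```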
###Code
# %load iqft.py
import cirq
from cirq.circuits import InsertStrategy
def iqft(n,qubits,circuit):
#Define shorthand notation for Hadamard and Swap gates
H = cirq.H
swap = cirq.SWAP
#Swap the qubits
for i in range(n//2):
circuit.append(swap(qubits[i],qubits[n-i-1]), strategy = InsertStrategy.NEW)
#For each qubit
for i in range(n-1,-1,-1):
#Apply CR_k gates where j is the control and i is the target
k=n-i #We start with k=n-i
for j in range(n-1,i,-1):
#Define and apply CR_k gate
crk = cirq.CZPowGate(exponent = -2/2**(k))
circuit.append(crk(qubits[j],qubits[i]),strategy = InsertStrategy.NEW)
k=k-1 #Decrement at each step
#Apply Hadamard to the qubit
circuit.append(H(qubits[i]),strategy = InsertStrategy.NEW)
#Call inverse qft method
iqft(t,control,circuit)
#Measure the control register
circuit.append(cirq.measure(*control, key='result2'))
#Sample the circuit
print('Sample the circuit:')
samples=s.run(circuit, repetitions=1000)
# Print a histogram of results
results = samples.histogram(key='result2')
print(results)
import matplotlib.pyplot as plt
results = dict(sorted(results.items()))
freqs = list(results.values())
states = list(results.keys())
fig = plt.figure(figsize = (10, 5))
plt.bar(states, freqs, width = 10)
plt.show()
###Output
_____no_output_____
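###Markdown
The section shown here ends with the sampling step. As a hedged sketch of the usual classical post-processing (not part of the notebook's code; the helper name and the modulus $N = 15$, inferred from the function values $1, 7, 4, 13$ above, are assumptions), a measured outcome $y$ can be turned into a candidate period by approximating $y / 2^{t}$ with a fraction whose denominator is below $N$:
```python
# Sketch of classical post-processing (illustrative helper, not from the notebook):
# interpret a measured value y as y / 2^t ~ k / r and read off the denominator.
# N = 15 is an assumption based on the function values seen earlier.
from fractions import Fraction

def period_from_measurement(y, t=12, N=15):
    if y == 0:
        return None                                  # the 0 outcome carries no information
    return Fraction(y, 2**t).limit_denominator(N - 1).denominator

for y in (1024, 2048, 3072):
    print(y, '->', period_from_measurement(y))       # 1024 -> 4, 2048 -> 2, 3072 -> 4
```
Note that $y = 2048$ yields only a divisor of the true period, which is why this step is typically repeated and the candidates combined.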